The document provides guidance on architectural best practices for building systems on AWS. It discusses general design principles such as eliminating guesswork about capacity needs, testing systems at production scale, and automating to make architectural experimentation easier. It also covers the principles of allowing for evolutionary architectures and driving architectures using data. The document then outlines the five pillars of the Well-Architected Framework: operational excellence, security, reliability, performance efficiency, and cost optimization. For each pillar, it lists the relevant design principles and best-practice questions.
AWS, Google Cloud, Azure, and every other public and private cloud come with their individual sets of strengths and weaknesses, but they have one thing in common: they make it easy and fast for enterprises to spin up Kubernetes clusters. Meanwhile, development and application teams make their own cloud choices, often on a per-project basis. This leads to a fragmented landscape of differently architected Kubernetes stacks, managed by separate teams and with separate toolchains for development, operations, and security.
These slides, based on the webinar hosted by leading IT research firm Enterprise Management Associates (EMA) and Red Hat, explain how to optimally harness Kubernetes as the catalyst for IT transformation.
Overcoming Regulatory & Compliance Hurdles with Hybrid Cloud EKS and Weave Gi... (Weaveworks)
In this webinar we will be discussing how Dream 11, the world’s largest fantasy sports platform, and its large-scale distributed cloud can meet regulatory requirements while still taking advantage of the benefits that cloud native technologies like EKS and Weave GitOps present.
Topics we are covering include:
How you can utilize EKSD (AWS’ open source EKS distribution) and EKS (managed Kubernetes in the cloud) to establish common operational workflows that minimize operational overhead
How to lower operational costs with the use of ephemeral cloud environments for development, testing and even production
How to maintain compliance by enabling clear operational controls and auditability
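The ephemeral-environment idea in the second bullet can be sketched in a few lines of Python; the bookkeeping here is invented for illustration and is not an AWS or EKS API:

```python
# Illustrative sketch of the ephemeral-environment idea: an environment exists
# only for the duration of the work that needs it, so idle clusters stop costing money.
import contextlib

RUNNING = set()  # stands in for "clusters currently billing us"

@contextlib.contextmanager
def ephemeral_env(name):
    RUNNING.add(name)           # provision (e.g. spin up a dev cluster)
    try:
        yield name
    finally:
        RUNNING.discard(name)   # tear down as soon as the job is done

with ephemeral_env("feature-branch-tests") as env:
    assert env in RUNNING       # environment exists only inside this block

assert not RUNNING              # nothing left running (or billing) afterwards
print("no environments left running")
```

Real implementations would provision and delete actual clusters (for example via infrastructure as code), but the cost argument is the same: teardown is automatic, so a forgotten environment never keeps billing.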
Cloud-Native Fundamentals: Accelerating Development with Continuous Integration (VMware Tanzu)
DevOps. Microservices. Containers. These terms have a lot of buzz for their role in cloud-native application development and operations. But, if you haven't automated your tests and builds with continuous integration (CI), none of them matter.
Continuous integration is the automation of building and testing new code. Development teams that use CI can catch bugs early and often, resulting in code that is always production-ready. Compared to manual testing, CI eliminates a lot of toil and improves code quality. At the end of the day, it's the code defects that slip into production that slow down teams and cause apps to fall over.
The journey to continuous integration maturity has some requirements. Join Pivotal's James Ma, product manager for Concourse, and Dormain Drewitz, product marketing to learn about:
- How Test-Driven Development feeds the CI process
- What is different about CI in a cloud-native context
- How to measure progress and success in adopting CI
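The CI loop described above (build and test every change, and stop on the first failure) can be sketched as follows; the stage names and pass/fail functions are toy stand-ins, not a real CI server's API:

```python
# Minimal sketch of a CI pipeline: each stage must pass before the next runs.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure, like a CI server would."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # a red build stops the line: the defect never reaches production
    return results

# Toy stages standing in for real compile/test commands.
stages = [
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: False),  # simulated failing test
    ("deploy", lambda: True),              # never reached
]

for name, ok in run_pipeline(stages):
    print(f"{name}: {'pass' if ok else 'FAIL'}")
```

The point of the sketch is the early exit: a failing test blocks everything downstream of it, which is what keeps trunk always production-ready.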
Dormain is a Senior Director of Product and Customer Marketing with Pivotal. She has published extensively on cloud computing topics for ten years, demystifying the changing requirements of the infrastructure software stack. She’s presented at the Gartner Application Architecture, Development, and Integration Summit; Open Source Summit; Cloud Foundry Summit, and numerous software user events.
James Ma is a product manager at Pivotal and is based out of their office in Toronto, Canada. As a consultant for the Pivotal Labs team, James worked with Fortune 500 companies to hone their agile software development practices and adopt a user-centered approach to product development. He has worked with companies across multiple industries, including mobile e-commerce, finance, health, and hospitality. James is currently a part of the Pivotal Cloud Foundry R&D group and is the product manager for Concourse CI, the continuous "thing do-er".
Presenters : Dormain Drewitz & James Ma, Pivotal
Building high volume software factories is all about combining workflow and automation functionality to ensure that each application development team is able to repeatedly deliver secure, high quality, feature rich iterations and operate them on scalable, highly available cloud infrastructure.
Attendees will learn how GitLab and Amazon Web Services (AWS) integrate together to provide best of breed development workflows and rock solid cloud application infrastructure.
Overview:
Hard lessons for CI/CD from how Ford automated automobile manufacturing.
GitLab CI/CD is a factory toolkit for software manufacturing.
GitLab CI/CD accelerates time to automation maturity with premade assembly lines and components.
GitLab CI/CD accelerates AppSec (DevSecOps) time to maturity with premade security assembly lines.
How to avoid a tortured transformation to software manufacturing.
GitLab's rich CI/CD workflows ensure cross-team (Dev, Ops, Sec) collaborative engagement and compliance, with change-gating controls and auditability.
GitLab CI/CD integrates with AWS infrastructure at multiple possible points of integration.
E4: Building Your First Predix App (Predix Transform 2016)Predix
http://predixtransform.com
How do you build your first Predix app or service? This session provides the essentials. We'll provide a step-by-step demo of building a simple app using PX and consuming some of the fundamental Predix services, like UAA. We'll also cover Predix Mobile and provide a tour of the Predix.io developer portal.
Containers: Give Me The Facts, Not The Hype - AppD Summit Europe (AppDynamics)
Docker, Kubernetes, Rancher… just a few of the container technologies out there. The buzz around containers is still growing, as they can make a seismic impact on release velocity. But what’s the best way to add containers to your technology stack? Get the low down from a container expert who will separate the facts from fiction. What’s the best path to scale your adoption and usage? How do you guard against user privilege escalation? How do containers fit into a DevOps approach?
In this talk, Liz Rice will:
-Explain what’s involved in the lifecycle stages: Develop, Registry, and Deploy
-Build a container live on stage, by writing one in a few lines of Go code
-Flag container security risks and give tips on how to achieve peace of mind
For more information, visit: www.appdynamics.com
The good, the bad, and the ugly of migrating hundreds of legacy applications ... (Josef Adersberger)
At Allianz, we built a container platform in the public cloud within 17 months and, as a first step, made 144 legacy Java applications cloud-ready and migrated them there. In this talk we show what our recipes for success and biggest obstacles were. Among other things, we cover how to analyze a large application landscape for cloud readiness and how to establish an industrialized migration of applications to a cloud platform.
The future of companies depends on constant digital transformation, and for that it is essential to maintain applications that meet customers' demands and, above all, are secure.
In this scenario, the concept of DevSecOps was born, describing a set of practices for integration between software development teams.
In this talk, we will learn more about the concepts and how to apply DevSecOps in practice.
We will provoke some "healthy" discussions about the traditional development model and this agile model that is bringing a major paradigm shift to how applications are built.
Dynatrace: The untouchables - the Dynatrace offering here and now (Dynatrace)
It's almost impossible to keep up with the rate of innovation that our global R&D teams deliver, and 2017 has been one for the record books. In this session a collection of our 'untouchable' tech geniuses are going to serve you up a rapid fire run-down on what's hot right now in Dynatrace.
Migrating from Self-Managed Kubernetes on EC2 to a GitOps Enabled EKS (Weaveworks)
Did your company start down the path of building a cloud native platform using Kubernetes with the goal of enabling developers to innovate faster and increase productivity, but then run into challenges keeping it operating in an optimal way?
In this session, Weaveworks will discuss how to migrate from self-managed Kubernetes on EC2 to a GitOps managed Shared Services Platform (SSP) on EKS. An SSP built on EKS and managed with Weave GitOps provides developers and operators with common workflows to update both applications and infrastructure. With every change in version control, full audit trails are available and security is enforced, while at the same time enabling easier rollbacks and faster mean time to recovery (MTTR). In short, a Weave GitOps managed SSP increases developer velocity while boosting stability.
How to operate a hybrid Kubernetes architecture, using managed EKS in the AWS Cloud and EKS-Distro on premises.
How to structure your infrastructure repository to efficiently manage multiple teams.
How to use Kubernetes RBAC to provide secure cluster multi-tenancy.
How to use GitOps to promote releases across a hybrid set of independent clusters.
How to accomplish data and operational sovereignty.
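The GitOps model these sessions describe can be reduced to a reconciliation loop: the desired state lives in version control, and an agent continually diffs it against the cluster, so every change is reviewable and auditable. A minimal sketch, with invented data structures rather than the Weave GitOps API:

```python
# Sketch of the core GitOps reconciliation idea: diff "what git says"
# against "what the cluster runs" and derive the actions to converge them.

def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired` (name -> version)."""
    actions = []
    for name, version in desired.items():
        if name not in actual:
            actions.append(("create", name, version))
        elif actual[name] != version:
            actions.append(("update", name, version))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired_in_git = {"api": "v2", "web": "v1"}         # what the repo declares
running_in_cluster = {"api": "v1", "worker": "v1"}  # what the cluster has

for action in reconcile(desired_in_git, running_in_cluster):
    print(action)
```

Because the repo is the single source of truth, a rollback is just reverting a commit and letting the same loop converge the cluster back.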
http://Predixtransform.com
This session will provide an in-depth look at Predix availability and reliability from a cloud infrastructure and support perspective. We will also discuss the Predix approach for managing industrial data at scale.
Pivotal Cloud Foundry 2.3: A First Look (VMware Tanzu)
Join us for a look at the capabilities of Pivotal Cloud Foundry (PCF) 2.3. In addition to demos and expert Q&A, we’ll review the latest features of Pivotal’s flagship app platform, including the following:
- Polyglot service discovery
- Service instance sharing
- Operations manager improvements
- New pathways protected by TLS
- Spring Cloud Services 2.0
- Improvements to PAS for Windows and Steeltoe.io
We’ll also review PKS updates for Pivotal’s Kubernetes service. Attend this session with Jared Ruckle and Pieter Humphrey to learn how PCF helps your peers build better software.
Presenters : Pieter Humphrey & Jared Ruckle, Pivotal
Cloud-Native Operations with Kubernetes and CI/CD (VMware Tanzu)
Operations practices have historically lagged behind development. Agile and Extreme Programming have become common practice for development teams. In the last decade, the DevOps and SRE movements have brought these concepts to operations, borrowing heavily from Lean principles such as Kanban and Value Stream Mapping. So, how does all of this play out if we’re using Kubernetes?
In this class, Paul Czarkowski, Principal Technologist at Pivotal, will explain how Kubernetes enables a new cloud-native way of operating software. Attend to learn:
● what cloud-native operations are;
● how to build a cloud-native CI/CD stack; and
● how to deploy and upgrade an application from source to production on Kubernetes.
Presenter:
Paul Czarkowski, Principal Technologist, Pivotal Software
Next Generation Vulnerability Assessment Using Datadog and Snyk (DevOps.com)
Vulnerability assessment can often be overwhelming for teams: depending on the application, the dependency graph can contain thousands of packages. Triaging vulnerability data and prioritizing actions has historically been a very manual process, until now. With Datadog and Snyk, learn how to trace security and performance issues by leveraging continuous profiling capabilities for actionable insights that help developers remediate problems.
Join us on Thursday, January 21 for a unique opportunity to learn more about continuous profiling, vulnerability management, and the benefit to customers from using both of these products. In this webinar, you will:
Bust some myths around continuous profiling and learn how Datadog differentiates itself
See decorated traces in action for sample Java applications and understand how Snyk + Datadog reduce time to triage supply chain vulnerabilities
Learn roadmap information for upcoming public announcements from both partners
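The triage idea above (combine scanner severity with runtime-profiling evidence so vulnerabilities in code that actually executes rise to the top) can be sketched roughly as follows; the finding format, scores, and boost are assumptions for illustration, not the Datadog or Snyk APIs:

```python
# Hypothetical triage sketch: static severity plus a boost for packages
# the profiler has actually seen executing in production.

SEVERITY_SCORE = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def prioritize(findings, profiled_packages):
    """Sort findings: severity first, but boost packages observed at runtime."""
    def score(f):
        boost = 2 if f["package"] in profiled_packages else 0
        return SEVERITY_SCORE[f["severity"]] + boost
    return sorted(findings, key=score, reverse=True)

findings = [
    {"package": "log-lib", "severity": "low"},
    {"package": "unused-lib", "severity": "critical"},
    {"package": "http-core", "severity": "high"},
]
profiled = {"log-lib", "http-core"}  # packages the profiler saw running

for f in prioritize(findings, profiled):
    print(f["package"], f["severity"])
```

Note how a high-severity issue in a package that is actually running can outrank a critical issue in one that never executes; that is the whole point of joining the two data sources.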
OSMC 2017 | Icinga2 in a 24/7 Broadcast Environment by Dave Kempe (NETWAYS)
I will present some war stories and implementation details from our Icinga2 deployments in television broadcast environments. From plugins we needed to develop to challenges in effecting change in staff practices, I will walk through the projects and share my experiences along the way.
This will be a useful talk for anyone looking to run a monitoring project, covering the approach used to get management and general staff on board.
Then we will cover the implementation of distributed monitoring in Icinga2 with strict firewalls, building dashboards using Nagvis and integration of Opsgenie for alerting.
In addition, the process of training staff and using the Windows Agent installer to deploy Icinga to various Windows servers will also be covered.
Deploying software and controlling infrastructure quickly and safely is a hard task.
In this talk, Brice Fernandes, Customer Success Engineer at Weaveworks, discusses GitOps, an operational model for Kubernetes and beyond to speed up development, while retaining extremely strong security guarantees. Brice describes and shows several open source tools developed at Weaveworks to support this approach. You will have a good idea of how to use the GitOps principles to create software pipelines that are fast, safe, and reproducible, while creating clear and high quality audit trails.
Check out the full presentation on YouTube: https://youtu.be/QdCwUUtcj4I
GitOps & the deployment branching models
DevOps D-day Marseille 2021:
GitOps is starting to be a well-known approach to delivering your software, but it does not provide a framework for representing different target environments or a solution for propagating changes from stage to stage. So what are the solutions to describe the Dev, QA or Production environment and especially how to propagate changes from one environment to another in an efficient, automated and secure way in a GitOps framework?
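One possible answer, sketched under the assumption that each environment is a file or folder in the repo: promotion is then just a commit that copies a tested release from one environment's definition to the next, which also yields the audit trail for free. The structures below are illustrative, not any particular GitOps tool's format:

```python
# Sketch of stage-to-stage promotion in a GitOps repo: each environment is a
# small piece of declared state, and "promotion" copies a release forward.

environments = {
    "dev":  {"app_version": "1.4.0"},
    "qa":   {"app_version": "1.3.2"},
    "prod": {"app_version": "1.3.1"},
}
audit_log = []  # every promotion is recorded, which is the auditability reviewers want

def promote(envs, source, target):
    """Copy the released version from `source` to `target` and record the move."""
    version = envs[source]["app_version"]
    envs[target]["app_version"] = version
    audit_log.append(f"promoted {version}: {source} -> {target}")
    return version

promote(environments, "dev", "qa")    # QA now runs what dev validated
promote(environments, "qa", "prod")   # prod follows once QA signs off
print(environments["prod"]["app_version"])  # -> 1.4.0
print(audit_log)
```

In a real repo each `promote` would be a commit (often via a pull request), so the change is gated, reviewable, and automatically propagated by the reconciliation agent.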
Training for AWS Solutions Architect at http://zekelabs.com/courses/amazon-web-services-training-bangalore/. This slide deck describes the design principles, best practices, and key AWS services for the five pillars of the Well-Architected Framework: operational excellence, security, reliability, performance efficiency, and cost optimization.
___________________________________________________
zekeLabs is a technology training platform. We provide instructor-led corporate training and classroom training for professionals on industry-relevant cutting-edge technologies like Big Data, Machine Learning, Natural Language Processing, Artificial Intelligence, Data Science, Amazon Web Services, DevOps, Cloud Computing, Docker, Kubernetes, and microservices, as well as frameworks like Django, Spring, Ruby on Rails, Angular 2, and many more.
Reach out to us at www.zekelabs.com, call us at +91 8095465880, or drop a mail at info@zekelabs.com.
An introduction to the DevSecOps webinar will be presented by me at 10:30am EST on 29th July, 2018. It's a session focused on a high-level overview of DevSecOps, which will be followed by intermediate and advanced level sessions in the future.
Agenda:
-DevSecOps Introduction
-Key Challenges, Recommendations
-DevSecOps Analysis
-DevSecOps Core Practices
-DevSecOps pipeline for Application & Infrastructure Security
-DevSecOps Security Tools Selection Tips
-DevSecOps Implementation Strategy
-DevSecOps Final Checklist
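The "pipeline for Application & Infrastructure Security" item usually boils down to a security gate: scanner output is checked against a policy, and the pipeline fails fast instead of letting known-bad findings ship. A hedged sketch, with a made-up finding format and policy rather than any specific tool's output:

```python
# Sketch of a DevSecOps security gate: count findings per severity and
# block the pipeline when any severity exceeds its allowed budget.

def security_gate(findings, policy=None):
    """Return (passed, reasons); any severity over its budget blocks the pipeline."""
    policy = policy or {"critical": 0, "high": 0}  # default: zero tolerance
    counts = {}
    for f in findings:
        counts[f["severity"]] = counts.get(f["severity"], 0) + 1
    reasons = [
        f"{sev}: {counts.get(sev, 0)} found, {limit} allowed"
        for sev, limit in policy.items()
        if counts.get(sev, 0) > limit
    ]
    return (not reasons, reasons)

scan = [{"id": "CVE-2021-0001", "severity": "high"},
        {"id": "CVE-2021-0002", "severity": "low"}]

passed, reasons = security_gate(scan)
print("gate passed" if passed else f"gate failed: {reasons}")
```

In practice the findings would come from the SAST/dependency scanners chosen in the tool-selection step, and the policy would live in version control alongside the pipeline definition.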
Modern Security Operations aka Secure DevOps @ All Day DevOps 2017 (Madhu Akula)
We will discuss the what, why, and how of running modern security operations. We will take a look at the pain points in a DevOps life cycle and see the benefits of pragmatic security solutions. Attendees will get an idea of where and how to start DevSecOps for a secure DevOps pipeline.
This talk is focused on the what, why, and how of running security operations in the modern world. The way attacks are changing, and the pace at which developers are moving ahead with next-generation technologies, is blazingly fast. However, traditional operations still exist. It then becomes imperative to change the way security operations are run in order to defend against attackers and work with developers and modern businesses. In this talk, we will see what real-world problems organisations face and how we can rapidly adapt to change by modifying culture and methodologies while relying on processes, tools, and techniques.
Infrastructure as Code Maturity Model v1 (Gary Stafford)
Systematically evolving an organization's infrastructure. The original version of the IaC Maturity Model. See the latest version here: https://www.slideshare.net/garystafford/how-mature-is-your-infrastructure.
Using AWS Well-Architected Framework for Software Architecture Evaluations ... (Alexandr Savchenko)
Event link: https://pages.awscloud.com/EMEA-field-OE-AWS-Cloud-Week-2020-reg-event.html
When you start thinking about innovation and preparing an evaluation plan for an AWS architecture, you first want to answer a lot of questions, such as: "What methods should I use (interviews or automation tools)?", "What questions should I ask, and what categories should they cover?", "Can I use automation tools to derive the correct recipes?", and "What best practices should I recommend after the evaluation, and what is the best way to implement these improvements?".
The AWS Well-Architected Framework has answers to all of these questions and can help you evaluate, build, or improve your infrastructure and software architecture. It's a very important tool that is useful in different phases of the SDLC, and you can use it on a regular basis.
This talk will present principles of architecture evaluation using the AWS Well-Architected Framework, show the structure of the framework, its general design principles and common categories, and materials that will help you learn the framework and AWS architecture more deeply.
Unleash Team Productivity with Real-Time Operations (DEV203-S) - AWS re:Inven...Amazon Web Services
For today’s digital organizations, even a few minutes of downtime can mean millions of dollars lost and customers who go elsewhere. To keep up with customer expectations, organizations must handle and prioritize real-time operations at a scale that didn’t exist before. However, developing this competency is easier said than done, especially without a solid understanding of the capabilities needed to drive real-time operations across cloud and on-premises environments. In this session, we explore how innovations around machine learning, automation, and analytics, when combined with modern incident management best practices, can improve operational performance, team productivity, and drive business results. This session is brought to you by AWS partner, PagerDuty, Inc.
1. Overview of DevOps
2. Infrastructure as Code (IaC) and Configuration as code
3. Identity and Security protection in CI CD environment
4. Monitor Health of the Infrastructure/Application
5. Open Source Software (OSS) and third-party tools, such as Chef, Puppet, Ansible, and Terraform to achieve DevOps.
6. Future of DevOps Application
A short summary describing the major guiding principles of each of the five pillars and key actions that can be taken based on the key points mentioned
I presented some practical aspects of adopting SRE for your organization & how Kubernetes can help in that journey, based on my experience in building the SRE practice at WSO2. The WSO2 SRE team runs the WSO2 Choreo & Asgardeo clouds.
Presentazione dello speech tenuto da Carmine Spagnuolo (Postdoctoral Research Fellow - Università degli Studi di Salerno/ ACT OR) dal titolo "Technology insights: Decision Science Platform", durante il Decision Science Forum 2019, il più importante evento italiano sulla Scienza delle Decisioni.
Partnership to Capture Indonesia ERP Cloud Trend OpportunitiesSutedjo Tjahjadi
Datacomm, Acumatica & Partners Community gathered to discuss how to foster the adoption of Acumatica ERP Cloud applications in Indonesia Market. The market primary concern is security & datacenter location. Datacomm Cloud Business - (cloud.datacomm.co.id) Enterprise - Secure - Local philosophy was shared to address the issue.
Should you make the move to microservices?
How do you avoid the gotchas and overcome the complexities when you do?
We’ll do a deep dive into architecture principles, container orchestration, impacts to CI workflows, monitoring, auto-scaling clusters, and more to shed light on the real-world realities of implementing these powerful new technologies.
You'll learn:
When’s the right time to move to microservices
Why Kubernetes for container orchestration
How to overcome the most common challenges
Pro tip: How to provision your first cluster in minutes
Similar to AWS Well-Architected Framework (nov 2017) (20)
Study Notes - Event-Driven Data Management for MicroservicesRick Hwang
Microservices from Design to Deployment (https://www.nginx.com/resources/library/designing-deploying-microservices/)
- CH05 Event-Driven Data Management for Microservices
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
An Approach to Detecting Writing Styles Based on Clustering Techniquesambekarshweta25
An Approach to Detecting Writing Styles Based on Clustering Techniques
Authors:
-Devkinandan Jagtap
-Shweta Ambekar
-Harshit Singh
-Nakul Sharma (Assistant Professor)
Institution:
VIIT Pune, India
Abstract:
This paper proposes a system to differentiate between human-generated and AI-generated texts using stylometric analysis. The system analyzes text files and classifies writing styles by employing various clustering algorithms, such as k-means, k-means++, hierarchical, and DBSCAN. The effectiveness of these algorithms is measured using silhouette scores. The system successfully identifies distinct writing styles within documents, demonstrating its potential for plagiarism detection.
Introduction:
Stylometry, the study of linguistic and structural features in texts, is used for tasks like plagiarism detection, genre separation, and author verification. This paper leverages stylometric analysis to identify different writing styles and improve plagiarism detection methods.
Methodology:
The system includes data collection, preprocessing, feature extraction, dimensional reduction, machine learning models for clustering, and performance comparison using silhouette scores. Feature extraction focuses on lexical features, vocabulary richness, and readability scores. The study uses a small dataset of texts from various authors and employs algorithms like k-means, k-means++, hierarchical clustering, and DBSCAN for clustering.
Results:
Experiments show that the system effectively identifies writing styles, with silhouette scores indicating reasonable to strong clustering when k=2. As the number of clusters increases, the silhouette scores decrease, indicating a drop in accuracy. K-means and k-means++ perform similarly, while hierarchical clustering is less optimized.
Conclusion and Future Work:
The system works well for distinguishing writing styles with two clusters but becomes less accurate as the number of clusters increases. Future research could focus on adding more parameters and optimizing the methodology to improve accuracy with higher cluster values. This system can enhance existing plagiarism detection tools, especially in academic settings.
Online aptitude test management system project report.pdfKamal Acharya
The purpose of on-line aptitude test system is to take online test in an efficient manner and no time wasting for checking the paper. The main objective of on-line aptitude test system is to efficiently evaluate the candidate thoroughly through a fully automated system that not only saves lot of time but also gives fast results. For students they give papers according to their convenience and time and there is no need of using extra thing like paper, pen etc. This can be used in educational institutions as well as in corporate world. Can be used anywhere any time as it is a web based application (user Location doesn’t matter). No restriction that examiner has to be present when the candidate takes the test.
Every time when lecturers/professors need to conduct examinations they have to sit down think about the questions and then create a whole new set of questions for each and every exam. In some cases the professor may want to give an open book online exam that is the student can take the exam any time anywhere, but the student might have to answer the questions in a limited time period. The professor may want to change the sequence of questions for every student. The problem that a student has is whenever a date for the exam is declared the student has to take it and there is no way he can take it at some other time. This project will create an interface for the examiner to create and store questions in a repository. It will also create an interface for the student to take examinations at his convenience and the questions and/or exams may be timed. Thereby creating an application which can be used by examiners and examinee’s simultaneously.
Examination System is very useful for Teachers/Professors. As in the teaching profession, you are responsible for writing question papers. In the conventional method, you write the question paper on paper, keep question papers separate from answers and all this information you have to keep in a locker to avoid unauthorized access. Using the Examination System you can create a question paper and everything will be written to a single exam file in encrypted format. You can set the General and Administrator password to avoid unauthorized access to your question paper. Every time you start the examination, the program shuffles all the questions and selects them randomly from the database, which reduces the chances of memorizing the questions.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...ssuser7dcef0
Power plants release a large amount of water vapor into the
atmosphere through the stack. The flue gas can be a potential
source for obtaining much needed cooling water for a power
plant. If a power plant could recover and reuse a portion of this
moisture, it could reduce its total cooling water intake
requirement. One of the most practical way to recover water
from flue gas is to use a condensing heat exchanger. The power
plant could also recover latent heat due to condensation as well
as sensible heat due to lowering the flue gas exit temperature.
Additionally, harmful acids released from the stack can be
reduced in a condensing heat exchanger by acid condensation. reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated
phenomenon since heat and mass transfer of water vapor and
various acids simultaneously occur in the presence of noncondensable
gases such as nitrogen and oxygen. Design of a
condenser depends on the knowledge and understanding of the
heat and mass transfer processes. A computer program for
numerical simulations of water (H2O) and sulfuric acid (H2SO4)
condensation in a flue gas condensing heat exchanger was
developed using MATLAB. Governing equations based on
mass and energy balances for the system were derived to
predict variables such as flue gas exit temperature, cooling
water outlet temperature, mole fraction and condensation rates
of water and sulfuric acid vapors. The equations were solved
using an iterative solution technique with calculations of heat
and mass transfer coefficients and physical properties.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Water billing management system project report.pdfKamal Acharya
Our project entitled “Water Billing Management System” aims is to generate Water bill with all the charges and penalty. Manual system that is employed is extremely laborious and quite inadequate. It only makes the process more difficult and hard.
The aim of our project is to develop a system that is meant to partially computerize the work performed in the Water Board like generating monthly Water bill, record of consuming unit of water, store record of the customer and previous unpaid record.
We used HTML/PHP as front end and MYSQL as back end for developing our project. HTML is primarily a visual design environment. We can create a android application by designing the form and that make up the user interface. Adding android application code to the form and the objects such as buttons and text boxes on them and adding any required support code in additional modular.
MySQL is free open source database that facilitates the effective management of the databases by connecting them to the software. It is a stable ,reliable and the powerful solution with the advanced features and advantages which are as follows: Data Security.MySQL is free open source database that facilitates the effective management of the databases by connecting them to the software.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
7. ● Stop guessing your capacity needs
○ Eliminate guessing about your infrastructure capacity needs.
○ You can use as much or as little capacity as you need, and scale up and down automatically.
○ With cloud computing, these problems go away.
● Test systems at production scale
○ In the cloud, you can create a production-scale test environment on demand and complete your testing.
○ Pay only for the test environment while it is running.
● Automate to make architectural experimentation easier
○ Automation allows you to create and replicate your systems at low cost and avoid the expense of manual effort.
○ You can track changes to your automation, audit the impact, and revert to previous parameters when necessary.
General Design Principles
7
8. ● Allow for evolutionary architectures:
○ In a traditional environment, architectural decisions are often implemented as static, one-time
events, with a few major versions of a system during its lifetime.
○ As a business and its context continue to change, these initial decisions might hinder the
system’s ability to deliver changing business requirements.
○ In the cloud, the capability to automate and test on demand lowers the risk of impact from
design changes. This allows systems to evolve over time so that businesses can take
advantage of innovations as a standard practice.
General Design Principles
8
9. ● Drive architectures using data:
○ In the cloud you can collect data on how your architectural choices affect the behavior of your
workload.
○ This lets you make fact-based decisions on how to improve your workload.
○ Your cloud infrastructure is code, so you can use that data to inform your architecture
choices and improvements over time.
General Design Principles
9
10. ● Improve through game days:
○ Test how your architecture and processes perform by regularly scheduling game days to
simulate events in production.
○ This will help you understand where improvements can be made and can help develop
organizational experience in dealing with events.
General Design Principles
Notes:
1. Onboarding training for new members of the operations team
2. CI / CD / DevOps
10
16. Operational Excellence
includes the ability to run and monitor systems to deliver
business value and to continually improve supporting
processes and procedures.
16
17. Design Principles
17
Perform operations as code:
● In the cloud, you can apply the same engineering discipline that you use for application code to
your entire environment.
● You can define your entire workload (applications, infrastructure, etc.) as code and update it with
code.
● You can script your operations procedures and automate their execution by triggering them in
response to events.
● By performing operations as code, you limit human error and enable consistent responses to
events.
Notes:
1. `as Code` => apply an engineering approach
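The "perform operations as code" principle above can be sketched as a runbook procedure expressed as a function that is triggered by a monitoring event instead of being executed by hand. This is a minimal illustration only: the event shape and the remediation names are assumptions, not any real AWS API.

```python
# Hypothetical "operations as code" sketch: scripted remediation steps keyed
# by alarm name, selected automatically when an event arrives.
RUNBOOK = {
    "HighCpuAlarm": "scale_out",
    "DiskFullAlarm": "rotate_logs",
    "HealthCheckFailed": "replace_instance",
}

def handle_alarm(event: dict) -> str:
    """Map an incoming alarm event to a scripted remediation step."""
    alarm = event.get("alarm_name")
    # Unknown alarms fall through to a human escalation step.
    return RUNBOOK.get(alarm, "escalate_to_oncall")
```

In practice this pattern is typically implemented with managed services (for example, a Lambda function triggered by a CloudWatch alarm); the sketch only shows the control flow that limits human error and keeps responses consistent.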
18. Annotate documentation:
● In an on-premises environment, documentation is created by hand, used by people, and
hard to keep in sync with the pace of change.
● In the cloud, you can automate the creation of documentation after every build (or automatically
annotate hand-crafted documentation).
● Annotated documentation can be used by people and systems.
● Use annotations as an input to your operations code.
Design Principles
18
19. Make frequent, small, reversible changes:
● Design workloads to allow components to be updated regularly.
● Make changes in small increments that can be reversed if they fail (without affecting customers
when possible).
Design Principles
19
Introduction to DevOps on AWS
21. Refine operations procedures frequently:
● As you use operations procedures, look for opportunities to improve them.
● As you evolve your workload, evolve your procedures appropriately.
● Set up regular game days to review and validate that all procedures are effective and that teams are
familiar with them.
Design Principles
21
22. Anticipate failure
● Perform “pre-mortem” exercises to identify potential sources of failure so that they can be removed
or mitigated.
● Test your failure scenarios and validate your understanding of their impact.
● Test your response procedures to ensure that they are effective and that teams are familiar with
their execution.
● Set up regular game days to test workloads and team responses to simulated events.
Design Principles
22
Notes:
1. Design for failure
2. SRE CH13: Things break; that’s life.
23. Chaos Engineering
● Chaos: chaos engineering
● A concept introduced by Netflix: Chaos Monkey
● Randomly break infrastructure to verify that the system can recover automatically
● Resilience as a Service (the capability to recover)
23
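In the spirit of Netflix's Chaos Monkey mentioned on this slide, a chaos experiment can be reduced to two steps: randomly pick a victim from the fleet, then verify the fleet still meets its minimum healthy count. This is a toy sketch; the instance IDs and the healthy-count rule are made up for illustration.

```python
import random

def pick_victim(instances: list, rng: random.Random) -> str:
    """Choose a random instance to kill during a chaos experiment."""
    return rng.choice(instances)

def fleet_survives(instances: list, victim: str, min_healthy: int) -> bool:
    """The experiment passes if the remaining fleet stays above the floor."""
    remaining = [i for i in instances if i != victim]
    return len(remaining) >= min_healthy

fleet = ["i-aaa", "i-bbb", "i-ccc"]   # hypothetical instance IDs
victim = pick_victim(fleet, random.Random(0))
```

A real chaos tool terminates actual infrastructure and observes automated recovery; the value of the exercise is in the second check, not the first.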
24. Recommended reading: Site Reliability Engineering
1. SRE CH13 - Emergency Response
2. SRE CH14 - Managing Incidents
3. SRE CH15 - Learning from Failure
4. Emergency incidents, by Rick
Learn from all operational failures:
● Drive improvement through lessons learned from all operational events and failures.
● Share what is learned across teams and through the entire organization
Design Principles
24
25. Best Practices
● OPS 1: What factors drive your operational priorities?
● OPS 2: How do you design your workload to enable operability?
● OPS 3: How do you know that you are ready to support a workload?
● OPS 4: What factors drive your understanding of operational health?
● OPS 5: How do you manage operational events?
● OPS 6: How do you evolve operations?
25
27. Security (it is a verb; it is about attacks)
includes the ability to protect information, systems, and assets while delivering
business value through risk assessments and mitigation strategies.
27
28. ● Implement a strong identity foundation
● Enable traceability
● Apply security at all layers
● Automate security best practices
● Protect data in transit and at rest
● Prepare for security events
Design Principles
28
29. Notes: AWS IAM
Implement a strong identity foundation:
● Implement the principle of least privilege and enforce separation of duties with appropriate
authorization for each interaction with your AWS resources.
● Centralize privilege management and reduce or even eliminate reliance on long term credentials.
Design Principles
29
30. Enable traceability:
● Monitor, alert, and audit actions and changes to your environment in real time.
● Integrate logs and metrics with systems to automatically respond and take action.
Design Principles
30
Notes: AWS CloudWatch, CloudTrail, VPC Flow Logs
31. Apply security at all layers:
● Rather than just focusing on protecting a single outer layer, apply a defense-in-depth approach with
other security controls.
● Apply to all layers, for example, edge network, virtual private cloud (VPC), subnet, load balancer,
every instance, operating system, and application.
Design Principles
31
32. Automate security best practices:
● Automated software-based security mechanisms improve your ability to securely scale more rapidly
and cost effectively.
● Create secure architectures, including the implementation of controls that are defined and managed
as code in version-controlled templates.
Design Principles
32
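The "controls defined and managed as code" idea above can be sketched as an automated check over a rule set. The simplified rule dictionaries below are an assumption for illustration; real controls would run against CloudFormation or Terraform templates in version control.

```python
# Hypothetical security control as code: flag any rule that opens SSH
# (port 22) to the whole internet (0.0.0.0/0).
def violations(rules: list) -> list:
    """Return the rules that expose port 22 to 0.0.0.0/0."""
    return [
        r for r in rules
        if r.get("port") == 22 and r.get("cidr") == "0.0.0.0/0"
    ]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS: allowed
    {"port": 22, "cidr": "0.0.0.0/0"},    # public SSH: flagged
    {"port": 22, "cidr": "10.0.0.0/8"},   # internal SSH: allowed
]
```

Running such checks in the pipeline, before templates are deployed, is what lets security scale with the rate of change rather than gating it.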
33. Protect data in transit and at rest:
● Classify your data into sensitivity levels and use mechanisms, such as encryption and tokenization
where appropriate.
● Reduce or eliminate direct human access to data to reduce risk of loss or modification.
Design Principles
33
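Tokenization, mentioned above alongside encryption, swaps a sensitive value for an opaque token and confines the mapping to one tightly controlled place. The toy vault below only sketches the idea; a real vault would encrypt and durably persist the mapping and restrict who can call `detokenize`.

```python
import secrets

class TokenVault:
    """Toy tokenization vault (illustrative; not production-grade)."""

    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        # The token carries no information about the original value.
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")   # made-up card number
```

Systems downstream of the vault can work with tokens freely, which is one concrete way to "reduce or eliminate direct human access to data".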
34. Prepare for security events:
● Prepare for an incident by having an incident management process that aligns to your organizational
requirements.
● Run incident response simulations and use tools with automation to increase your speed for
detection, investigation, and recovery.
Design Principles
34
35. ● Identity and Access Management
● Detective Controls
● Infrastructure Protection
● Data Protection
● Incident Response
Definition
35
36. Best Practice
SEC 1: How are you protecting access to and use of the AWS account root user credentials?
SEC 2: How are you defining roles and responsibilities of system users to control human access to the AWS Management
Console and API?
SEC 3: How are you limiting automated access to AWS resources (for example, applications, scripts, and/or third-party
tools or services)?
SEC 4: How are you capturing and analyzing logs?
SEC 5: How are you enforcing network and host-level boundary protection?
SEC 6: How are you leveraging AWS service-level security features?
SEC 7: How are you protecting the integrity of the operating system?
SEC 8: How are you classifying your data?
SEC 9: How are you encrypting and protecting your data at rest?
SEC 10: How are you managing keys?
SEC 11: How are you encrypting and protecting your data in transit?
SEC 12: How do you ensure that you have the appropriate incident response?
36
38. Reliability
includes the ability of a system to recover from infrastructure or service
disruptions, dynamically acquire computing resources to meet
demand, and mitigate disruptions such as misconfigurations or
transient network issues.
38
39. Design Principles
1. Test recovery procedures
2. Automatically recover from failure
3. Use horizontal scalability to increase system availability
4. Automatically add/remove resources as needed to avoid capacity saturation
5. Manage change in automation
39
40. Test Recovery Procedures
1. In an on-premises environment, testing is often conducted to prove the system works in a particular scenario.
2. Testing is not typically used to validate recovery strategies.
3. In the cloud, you can test how your system fails, and you can validate your recovery procedures.
4. You can use automation to simulate different failures or to recreate scenarios that led to failures before.
5. This exposes failure pathways that you can test and rectify before a real failure scenario, reducing the risk of
components failing that have not been tested before.
40
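Validating a recovery procedure, as described above, amounts to injecting failures on purpose and asserting that the system actually recovers. The sketch below fakes a flaky dependency that fails a configurable number of times; the class and function names are illustrative, not from any library.

```python
class FlakyService:
    """Test double that fails N times before succeeding (injected failures)."""

    def __init__(self, failures_before_success: int):
        self.remaining_failures = failures_before_success

    def call(self) -> str:
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            raise ConnectionError("injected failure")
        return "ok"

def call_with_retries(service: FlakyService, max_attempts: int) -> str:
    """The recovery procedure under test: retry until success or give up."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return service.call()
        except ConnectionError as err:
            last_error = err  # in production: back off before retrying
    raise last_error
```

The same pattern scales up from unit tests to game days: recreate the failure, then verify the recovery path end to end.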
41. Automatically recover from failure
● By monitoring a system for key performance indicators (KPIs), you can trigger automation when a
threshold is breached.
● This allows for automatic notification and tracking of failures, and for automated recovery processes
that work around or repair the failure.
● With more sophisticated automation, it’s possible to anticipate and remediate failures before they
occur
41
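The KPI-threshold mechanism above can be sketched as a simple comparison that emits the automated action to take. The threshold values and action names are assumptions for illustration.

```python
# Hypothetical KPI thresholds that gate automated recovery.
THRESHOLDS = {"error_rate": 0.05, "p99_latency_ms": 500}

def recovery_action(kpi_name: str, value: float, thresholds: dict) -> str:
    """Trigger automation only when the measured KPI breaches its threshold."""
    if value > thresholds[kpi_name]:
        return "trigger_automation:" + kpi_name
    return "no_action"
```

In a real system the "trigger" side would notify, track the failure, and launch a repair workflow rather than return a string.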
42. Scale horizontally to increase aggregate system availability
Replace one large resource with multiple small resources to reduce the impact of a single failure on the
overall system.
Distribute requests across multiple, smaller resources to ensure that they don’t share a common point of
failure.
42
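The horizontal-scaling argument above has a simple quantitative form: if each of n independent instances is available with probability p, and any single healthy instance can serve, the aggregate availability is 1 - (1 - p)^n. The figures below are illustrative.

```python
def aggregate_availability(p: float, n: int) -> float:
    """Availability of a pool where one healthy instance is enough to serve."""
    return 1 - (1 - p) ** n

one_big = aggregate_availability(0.99, 1)      # a single large resource
three_small = aggregate_availability(0.99, 3)  # three smaller resources
```

With p = 0.99, three small resources give roughly 0.999999 aggregate availability versus 0.99 for one large one, which is the point of not sharing a single point of failure. (This assumes independent failures; correlated failures reduce the benefit.)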
43. Stop guessing capacity
A common cause of failure in on-premises systems is resource saturation, when the demands placed on a
system exceed the capacity of that system (this is often the objective of denial of service attacks).
In the cloud, you can monitor demand and system utilization, and automate the addition or removal of
resources to maintain the optimal level to satisfy demand without over- or under-provisioning.
43
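Replacing capacity guessing with measurement, as described above, can be sketched as a scaling rule that sizes the fleet from observed demand and a target utilization, clamped to configured bounds. All numbers below are illustrative assumptions.

```python
import math

def desired_instances(observed_rps: float, rps_per_instance: float,
                      target_utilization: float,
                      minimum: int, maximum: int) -> int:
    """Size the fleet from measured demand, leaving utilization headroom."""
    needed = math.ceil(observed_rps / (rps_per_instance * target_utilization))
    # Clamp to configured bounds to avoid over- or under-provisioning.
    return max(minimum, min(maximum, needed))
```

An auto scaling service evaluates a rule like this continuously, which is what keeps capacity tracking demand instead of a forecast.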
44. Manage change in automation
Changes to your infrastructure should be done using automation.
The changes that need to be managed are changes to the automation.
44
46. Best Practice
1. REL 1: How are you managing AWS service limits for your accounts?
2. REL 2: How are you planning your network topology on AWS?
3. REL 3: How does your system adapt to changes in demand?
4. REL 4: How are you monitoring AWS resources?
5. REL 5: How are you executing change?
6. REL 6: How are you backing up your data?
7. REL 7: How does your system withstand component failures?
8. REL 8: How are you testing your resiliency?
9. REL 9: How are you planning for disaster recovery?
46
48. Performance Efficiency
includes the ability to use computing resources efficiently to meet system
requirements and to maintain that efficiency as demand changes and
technologies evolve.
48
49. Design Principles
1. Democratize advanced technologies
2. Go global in minutes
3. Use serverless architectures
4. Experiment more often
5. Try various comparative testing and configurations to find out what performs
better
49
50. Democratize advanced technologies
Technologies that are difficult to implement can become easier to consume by pushing that knowledge
and complexity into the cloud vendor’s domain.
Rather than having your IT team learn how to host and run a new technology, they can simply consume it
as a service.
For example, NoSQL databases, media transcoding, and machine learning are all technologies that
require expertise that is not evenly dispersed across the technical community.
In the cloud, these technologies become services that your team can consume while focusing on product
development rather than resource provisioning and management.
50
51. Go global in minutes
Easily deploy your system in multiple Regions around the world with just a few clicks.
This allows you to provide lower latency and a better experience for your customers at minimal cost.
51
52. Use serverless architectures
In the cloud, serverless architectures remove the need for you to run and maintain servers to carry out
traditional compute activities.
For example, storage services can act as static websites, removing the need for web servers, and event
services can host your code for you.
This not only removes the operational burden of managing these servers, but also can lower transactional
costs because these managed services operate at cloud scale.
52
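The "event services can host your code" idea above can be sketched as a minimal handler in the style of an AWS Lambda function: there is no server to manage, only code invoked per event. The event shape below is an assumption for illustration, not a real service payload.

```python
def handler(event: dict, context=None) -> dict:
    """Hypothetical thumbnail-request handler invoked per event."""
    key = event.get("object_key", "")
    if not key.endswith((".jpg", ".png")):
        return {"status": 400, "body": "unsupported object type"}
    return {"status": 200, "body": "thumbnail queued for " + key}
```

Because the platform handles provisioning, scaling, and patching, you pay per invocation rather than for idle servers, which is where the lower transactional cost comes from.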
53. Experiment more often
With virtual and automatable resources, you can quickly carry out comparative testing using different
types of instances, storage, or configurations.
53
54. Best Practices
● PERF 1: How do you select the best performing architecture?
● PERF 2: How did you select your compute solution?
● PERF 3: How do you select your storage solution?
● PERF 4: How do you select your database solution?
● PERF 5: How do you configure your networking solution?
● PERF 6: How do you ensure that you continue to have the most appropriate
resource type as new resource types and features are introduced?
● PERF 7: How do you monitor your resources post-launch to ensure they are
performing as expected?
● PERF 8: How do you use tradeoffs to improve performance?
54
57. Design Principles
1. Adopt a consumption model
2. Measure overall efficiency
3. Stop spending money on data center operations
4. Analyze and attribute expenditure
5. Use managed services to reduce cost of ownership
57
58. Adopt a consumption model
Pay only for the computing resources that you consume and increase or decrease usage depending on
business requirements, not by using elaborate forecasting.
For example, development and test environments are typically only used for eight hours a day during the
work week. You can stop these resources when they are not in use for a potential cost savings of 75% (40
hours versus 168 hours).
58
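The savings figure cited on this slide falls out of simple arithmetic: a dev/test environment used 40 hours a week instead of running 24x7 (168 hours) avoids 1 - 40/168 of the cost, about 76 percent, which the slide rounds to 75 percent.

```python
def savings_fraction(hours_used: float, hours_in_week: float = 168.0) -> float:
    """Fraction of weekly cost avoided by stopping resources when unused."""
    return 1 - hours_used / hours_in_week

weekly_savings = savings_fraction(40)   # roughly 0.76
```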
59. Measure overall efficiency
Measure the business output of the system and the costs associated with delivering it.
Use this measure to understand the gains you make from increasing output and reducing costs.
59
60. Stop spending money on data center operations
AWS does the heavy lifting of racking, stacking, and powering servers, so you can focus on your
customers and business projects rather than on IT infrastructure.
60
61. Analyze and attribute expenditure
The cloud makes it easier to accurately identify the usage and cost of systems, which then allows
transparent attribution of IT costs to individual business owners.
This helps measure return on investment (ROI) and gives system owners an opportunity to optimize their
resources and reduce costs
61
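Attributing spend to business owners, as described above, is commonly done by summing billing line items by a resource tag. The records and the "owner" tag key below are made-up illustrative data; real implementations draw from the cloud provider's cost and usage reports.

```python
from collections import defaultdict

def cost_by_owner(line_items: list) -> dict:
    """Sum costs per 'owner' tag; untagged spend is surfaced separately."""
    totals = defaultdict(float)
    for item in line_items:
        owner = item.get("tags", {}).get("owner", "untagged")
        totals[owner] += item["cost"]
    return dict(totals)

items = [
    {"cost": 120.0, "tags": {"owner": "checkout"}},
    {"cost": 80.0, "tags": {"owner": "checkout"}},
    {"cost": 40.0, "tags": {}},
]
```

Surfacing the "untagged" bucket explicitly is deliberate: unattributed spend is usually the first thing a cost-allocation effort has to drive down.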
62. Use managed services to reduce cost of ownership
In the cloud, managed services remove the operational burden of maintaining servers for tasks like
sending email or managing databases.
And because managed services operate at cloud scale, they can offer a lower cost per transaction or
service.
62
64. Best Practices
1. COST 1: Are you considering cost when you select AWS services for your solution?
2. COST 2: Have you sized your resources to meet your cost targets?
3. COST 3: Have you selected the appropriate pricing model to meet your cost targets?
4. COST 4: How do you make sure your capacity matches but does not substantially exceed what you need?
5. COST 5: Do you consider data-transfer charges when designing your architecture?
6. COST 6: How are you monitoring usage and spending?
7. COST 7: Do you decommission resources that you no longer need or stop resources that are temporarily not
needed?
8. COST 8: What access controls and procedures do you have in place to govern AWS usage?
9. COST 9: How do you manage and/or consider the adoption of new services?
64
66. The Review Process
The review of architectures should be
● done in a consistent manner
● a lightweight process (hours, not days)
● a conversation, not an audit
● focused on identifying any critical issues
Teams building an architecture using the Well-Architected Framework should review their architecture continually,
rather than holding a single formal review meeting.
66
67. Items for Review Meeting
● A meeting room with whiteboards
● Print outs of any diagrams or design notes
● Action list of questions that require out-of-band research to answer (for example, did we enable
encryption or not?)
67
68. SRE: Site Reliability Engineering
CH27 - Reliable Product Launches at Scale
Launch Coordination Engineers
68