Docker in Open Science Data Analysis Challenges by Bruce Hoff (Docker, Inc.)
In a typical predictive data analysis challenge, participants are provided a dataset and asked to make predictions. Participants submit, along with their predictions, the scripts/code used to produce them. Challenge administrators validate the winning model by reconstructing and running the source code.
Often data cannot be provided to participants directly, e.g. due to data sensitivity (the data may come from living human subjects) or data size (tens of terabytes). Further, predictions must be reproducible from the code provided by participants. Containerization is an excellent solution to both problems: rather than providing the data to the participants, we ask the participants to provide a Dockerized "trainable" model. We run both the training and validation phases of machine learning ourselves and get reproducibility 'for free'.
We use the Docker tool suite to spin up and run servers in the cloud to process the queue of submitted containers, each essentially a batch job. This fleet can be scaled to match the level of activity in the challenge. We used Docker successfully in our 2015 ALS Stratification Challenge and our 2015 Somatic Mutation Calling Tumour Heterogeneity (SMC-HET) Challenge, and are starting an implementation for our 2016 Digital Mammography Challenge.
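The queue-of-batch-jobs pattern described above can be sketched in a few lines of Python. The queue contents, image names, and runner callable here are hypothetical stand-ins for illustration, not the challenge platform's actual code:

```python
from queue import Queue

def process_submissions(queue, run_container):
    """Drain a queue of submitted container images, treating each as a batch job."""
    results = {}
    while not queue.empty():
        image = queue.get()
        # In the real system this step would invoke `docker run` on a cloud
        # worker; here the runner is an injected callable so the loop is testable.
        results[image] = run_container(image)
    return results

if __name__ == "__main__":
    q = Queue()
    for image in ["team-a/model:v1", "team-b/model:v3"]:  # hypothetical submissions
        q.put(image)
    # A stub runner standing in for training + validation of one submission.
    print(process_submissions(q, lambda img: f"trained-and-validated {img}"))
```

Because each submission is an isolated container, workers of this kind can be added or removed freely to match challenge activity.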
DevNexus 2015
Docker: containerizing a monolithic app into a microservice-based PaaS
Convert a monolithic application into a microservice-based PaaS using Docker and related containerization technologies. This is the third in a series of presentations, begun more than a year ago, evangelizing the benefits of Docker. The content spans from a development environment to a hybrid PaaS, showing how containerization enables architectural choice, innovation, scalability, and polyglot solutions.
The basics of Docker will be examined, including repositories, along with brief discussion of managing and monitoring Docker containers, service discovery, and security. New and emerging technologies will be a constant theme, particularly microservices, along with the ongoing evolution of the market and what the future may bring. Common organizational issues (and tactical solutions) that may impede successful decomposition and migration of legacy monoliths will be discussed, including security, DevOps and refactoring.
Hypothetical architectures will be described for building progressively more robust and complex applications and deployment models. The goal is to highlight the power, flexibility and scalability that containers enable.
Examples will start simple, with a local development environment: a two-container setup that encapsulates a database tier and an application tier. Subsequent discussion covers progressively more complex and robust deployments with features such as service discovery, automatic load balancing, and abstractions that simplify linking containers, including service gateways, stopping at a hybrid PaaS.
DCSF 19: Improving the Human Condition with Docker (Docker, Inc.)
This document discusses how RTI International, a non-profit research institute, uses Docker across several of its software products and tools. It describes projects including CFS Analytics, a crime analysis tool; Crosstab Builder, a statistical analysis tool; and Public Health Microsimulations. For each, it explains how Docker enables scalability, platform independence, security, and reproducibility. Overall, it conveys that Docker helps RTI International build reliable software and facilitates scientific analysis aimed at improving conditions for humanity.
'The History of Metrics According to Me' by Stephen Day (Docker, Inc.)
Metrics and monitoring are a time-honored tradition in every engineering discipline: they are how we ensure the systems we use are working the way we expect. If this is such a time-honored tradition, why is it not built into every piece of software we create, from the ground up? In software engineering, the trick to getting anything adopted is usually to make it easier. By solving the hard parts of application metrics in Docker, we should make it more likely that metrics are part of your services from the start.
DockerCon EU 2015: Official Repos and Project Nautilus (Docker, Inc.)
Presentation by Krish Garimella, Sr. Director of Engineering, Docker, and Mario Ponticello, Product Manager, Docker
Learn more about Official Repositories and the process behind securing and maintaining images in collaboration with upstream partners. We will also introduce Project Nautilus.
Talk at the Boston Cloud Foundry Meetup, June 2015 (Chip Childers)
The document discusses the evolution of modern application architectures and cloud native application platforms. Key points include:
- Cloud native platforms allow applications to have continuous delivery of business value through practices like microservices, containers, and continuous integration/delivery.
- Platforms like Cloud Foundry provide abstraction layers that allow applications to be portable across infrastructure and have their lifecycles fully managed.
- Diego is Cloud Foundry's distributed systems architecture that orchestrates containerized workloads using an abstraction of tasks and long-running processes (LRPs) running in containers managed by the Garden container runtime.
Fully Orchestrating Applications, Microservices and Enterprise Services with ... (Docker, Inc.)
As a multi-national bank, Societe Generale has an IT infrastructure with thousands of apps, almost every kind of technology in deployment, and strict compliance requirements. Our vision is to broadly transform traditional bank IT to be agile and fast. Speed is critical in a digital economy, and at Societe Generale we are building a new execution platform with Docker that provides IT containers, middleware and infrastructure as a service, and orchestration. In this session we will share the technical and organizational steps of our journey: how we defined and architected a PaaS for our entity, with a service catalog, service topologies, ambassadors with Docker Datacenter, continuous integration, and what’s next.
DCSF 19: Modern Orchestrated IT for Enterprise CMS (Docker, Inc.)
Wiley’s Education Services (WES) leverages a mix of CMS platforms across their 50+ student information sites for major universities throughout the world. Traditionally these sites have been housed as part of a multi-site CMS install on a single VM, and eventually across 2 VMs. Failure of either one of these VMs would mean an outage for one or all of the hosted sites. As Wiley’s leadership looked forward, they recognized the risks involved with their current design and identified Docker as a way to mitigate these risks.
WES began their investigation into Docker to address issues of fault tolerance, consistency, and portability. They used this opportunity to modernize their workflows and reduce risk by promoting Docker images through their dev, preview, and production environments using CI/CD. This increased their confidence in deployments and reduced the need for maintenance windows. Early in the process, WES brought in BoxBoat as subject matter experts to accelerate their migration and architect their Docker EE solution. Through the use of well-defined workflows and persistent storage, applications are continually redeployed and restored between environments with zero downtime and no loss of data. Additionally, developers can pull down and run any of the sites independently with configuration that matches production. Join this session to learn about the challenges and triumphs that Wiley faced when orchestrating CMS deployments in Docker!
Building a Platform for the People - IBM's Open Cloud Architecture Summit - A... (Chip Childers)
The document discusses the shift towards cloud platforms and microservices architectures to enable continuous delivery. It argues that platforms are needed to manage the increasing complexity of distributed systems and provide services like deployment, scaling, and monitoring. The Cloud Foundry platform is presented as fulfilling this need by automating operations and allowing developers to focus on building applications instead of infrastructure. The vision is for a ubiquitous, flexible, portable, and interoperable cloud computing environment underpinning a large ecosystem of applications.
Evolving Your Distributed Cache In A Continuous Delivery World: Tyler Vangorder (Redis Labs)
1. The document discusses the evolution of caching strategies at Build.com as their systems and traffic grew rapidly over time. They initially used a Java-based distributed cache and later switched to Redis which proved more effective.
2. As Build.com moved to a continuous delivery model with multiple environments, they needed a "shared" cache that both environments could use. They implemented a unified caching model where each version of code has its own bucket in the cache but objects can be promoted from older versions if they are compatible.
3. The key aspects of the unified caching model are using a serialization checksum to detect changes between versions, using a build number as the cache key so each version is separate, and attempting to promote
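The unified caching model summarized above — per-build cache buckets, a serialization checksum, and promotion of compatible objects from older builds — might be sketched roughly as follows. The class and field names are my illustration under those stated assumptions, not Build.com's actual code:

```python
import hashlib

def class_checksum(fields):
    """Checksum of a type's serialized shape; it changes whenever the fields change."""
    return hashlib.sha256(",".join(sorted(fields)).encode()).hexdigest()

class VersionedCache:
    def __init__(self, build_number, checksum):
        self.build = build_number        # build number keys each version's bucket
        self.checksum = checksum         # checksum of the current code's serialization
        self.store = {}                  # (build, key) -> (checksum, value)

    def put(self, key, value):
        self.store[(self.build, key)] = (self.checksum, value)

    def get(self, key):
        hit = self.store.get((self.build, key))
        if hit:
            return hit[1]
        # Promotion: scan older builds and adopt an entry only if its
        # serialization checksum matches the current code's checksum.
        for build in range(self.build - 1, 0, -1):
            old = self.store.get((build, key))
            if old and old[0] == self.checksum:
                self.put(key, old[1])    # copy forward into this build's bucket
                return old[1]
        return None
```

In this sketch a new deploy with unchanged serialization reuses the previous build's cached objects, while a deploy that changes a class's fields gets cache misses instead of deserialization errors.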
DockerCon EU 2017 - General Session Day 1 (Docker, Inc.)
This document discusses Docker and its container platform. It highlights Docker's momentum in the industry with over 21 million Docker hosts and 24 billion container downloads. The document then summarizes Docker's container platform and how it enables applications across diverse infrastructures and throughout the lifecycle. It also discusses how Docker can help modernize traditional applications and provide portability, agility and security. The remainder of the document focuses on how MetLife leveraged Docker to containerize applications, seeing benefits like a 70% reduction in VMs and 66% reduction in costs. It outlines Docker Enterprise Edition and its value in areas like security, multi-tenancy, policy automation and management capabilities for Swarm and Kubernetes.
Gene Kim gave a presentation on his 15-year journey studying high performing IT organizations and their use of DevOps practices. He discussed how traditional IT operations created conflict between development and operations teams. However, companies like Google, Amazon and Netflix achieved much higher performance through practices like continuous integration, deployment of smaller changes frequently, automated testing, and monitoring production environments. These practices improved flow, feedback and continuous learning.
Server to Cloud: Converting a Legacy Platform to an Open Source PaaS (Todd Fritz)
This session discusses the process of moving legacy applications "into the cloud". It is intended for a diverse audience including developers, architects, and managers. We will discuss techniques, methodologies, and thought processes used to analyze, design, and execute a migration strategy and implementation plan, from planning through rollout and operations.
An important aspect of this is the necessity for technical staff to effectively communicate to mid-level management how these design decisions and strategies translate into cost, complexity and schedule.
Commonly used migration strategies, cloud technologies, architecture options, and low level technologies will be discussed.
The case will be made that investing in strategic refactoring and decomposition during the migration will reap the benefits of a modern, decoupled and simplified system.
The end game is alignment with, and adoption of, current best practices around PaaS, SaaS, SOA, event-driven architectures, and message-oriented middleware, at scale in the cloud, to provide quantifiable business value.
This talk will focus more on the big picture, at times delving into technical architectures and discussion of certain technologies and service providers.
The use of containers (Docker) is evangelized for decoupling and decomposing legacy systems.
Containerization provides benefits like consistent environments, lightweight packages, and efficient resource utilization and isolation. Kubernetes is an open-source platform that provides tools to automate deployment, scaling, and management of containerized applications. It groups containerized applications into logical units called pods and uses labels to identify pods. It provides features like service discovery, load balancing, rolling updates, and self-healing capabilities. Kubernetes aims to provide a platform for automating deployment, scaling and operations of application containers across clusters of hosts.
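The pods, labels, service discovery, and load balancing mentioned above can be made concrete with a minimal manifest. The names and image below are placeholders, not from any deck in this listing:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # scaling: three identical pods
  selector:
    matchLabels:
      app: web                # pods are selected by label, not by name
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web                   # service discovery: resolvable as "web" in-cluster
spec:
  selector:
    app: web                  # load-balances across all pods matching this label
  ports:
  - port: 80
    targetPort: 8080
```

Rolling updates and self-healing then fall out of the Deployment: changing the image triggers a staged replacement of pods, and failed pods are recreated to keep the replica count.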
Serverless architectures are one of the hottest trends in cloud computing this year, and for good reason. There are several technical capabilities and business factors coming together to make this approach compelling from both an application development and deployment cost perspective. The new OpenWhisk project provides an open source platform to enable these cloud-native, event-driven applications.
This talk will lay out the technical and business drivers behind the rise of serverless architectures, provide an introduction to the OpenWhisk open source project (and describe how it differs from other services like AWS Lambda), and give a demonstration showing how to start developing with this new cloud computing model using the OpenWhisk implementation available on IBM Bluemix.
Lightning talk and lab presented by IBM Cloud Software Engineer, Andrew Bodine.
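As a concrete taste of the OpenWhisk programming model described above: a Python action is simply a function named main that takes a parameter dictionary and returns a dictionary. The greeting logic here is a made-up example, not from the talk:

```python
# A minimal OpenWhisk-style action: the platform invokes main() with the
# event's parameters and serializes the returned dict as JSON.
def main(params):
    name = params.get("name", "stranger")
    return {"greeting": f"Hello, {name}!"}

if __name__ == "__main__":
    # Locally we can call the function directly to see what the platform would return.
    print(main({"name": "Bluemix"}))
```

Deployment and invocation would use the wsk CLI, along the lines of `wsk action create hello hello.py` and `wsk action invoke hello --param name Bluemix --result` (exact flags per the OpenWhisk documentation).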
Gluecon: Monitoring Microservices and Containers: A Challenge (Adrian Cockcroft)
This document discusses the challenges of monitoring microservices and containers. It provides six rules for effective monitoring: 1) spend more time on analysis than data collection, 2) reduce latency of key metrics to under 10 seconds, 3) validate measurement accuracy, 4) make monitoring more available than services monitored, 5) optimize for distributed cloud-native applications, 6) fit metrics to models to understand relationships. It also examines models for infrastructure, flow, and ownership and discusses speed, scale, failures, and testing challenges with microservices.
MetLife has adopted a containerization strategy using Docker to modernize its traditional applications. Some key points:
- MetLife aims to embrace containers ubiquitously across its portfolio to improve speed, stability, scalability, security and reduce costs.
- It has seen success with its strategy, such as a 70% reduction in infrastructure costs and millions of dollars avoided in costs.
- MetLife provides training and knowledge sharing programs to help developers and operations teams adopt containers. It also offers services to support customers in piloting, putting early apps into production, and migrating apps at scale to containers.
This document provides an overview of DevOps concepts and practices through examples. It discusses DevOps as a culture and movement emphasizing collaboration between development and operations teams. The document demonstrates infrastructure as code, continuous integration and delivery practices like building and packaging applications, as well as deploying applications and managing dynamic configurations. It also discusses monitoring, troubleshooting and creating a feedback loop in production. The document aims to help attendees grasp DevOps essentials and leave with open questions.
Making Friendly Microservices by Michele Titolo (Docker, Inc.)
Small is the new big, and for good reason. The benefits of microservices and service-oriented architecture have been extolled for a number of years, yet many forge ahead without thinking of the impact on the users of those services. Consuming microservices can be enjoyable as long as the developer experience has been crafted as finely as the service itself. But just like with any other product, there isn’t a single kind of consumer. Together we will walk through some typical kinds of consumers, what their needs are, and how we can create a great developer experience using brains and tools like Docker.
DCSF 19: Developing Apps with Containers, Functions and Cloud Services (Docker, Inc.)
Cloud native applications are composed of containers, serverless functions and managed cloud services.
What is the best set of tools on your desktop to provide a rapid, iterative development experience and package applications using these three components?
This hands-on talk will explain how you can complement Docker Desktop, with its local Docker engine and Kubernetes cluster, with open source tools such as Virtual Kubelet, the Open Service Broker, the Gloo hybrid app gateway, Draft, and others, to build the most productive development inner loop for these types of applications.
It will also cover how you can use the Cloud Native Application Bundle (CNAB) format, and its implementation in the experimental Docker App tool, to package your application and manage it with container supply-chain tooling such as Docker Hub.
Serverless architectures are one of the hottest trends in cloud computing this year, and for good reason. There are several technical capabilities and business factors coming together to make this approach compelling from both an application development and deployment cost perspective. The new OpenWhisk project provides an open source platform to enable these cloud-native, event-driven applications.
This talk will lay out the technical and business drivers behind the rise of serverless architectures, provide an introduction to the OpenWhisk open source project (and describe how it differs from other services like AWS Lambda), and give a demonstration showing how to start developing with this new cloud computing model using the OpenWhisk implementation available on IBM Bluemix.
Presented on October 12, 2016 at the NYC Bluemix meetup
microXchg 2018: "What is a Service Mesh? Do I Need One When Developing 'Cloud... (Daniel Bryant)
While service meshes may be the next "big thing" in microservices, the concept isn't new. Classical SOA attempted to implement similar technology for abstracting and managing all aspects of service-to-service communication, and this was often realized as the much-maligned Enterprise Service Bus (ESB). Several years ago similar technology emerged from the microservice innovators, including Airbnb (SmartStack for service discovery), Netflix (Prana integration sidecars), and Twitter (Finagle for extensible RPC), and these technologies have now converged into the service meshes we are currently seeing being deployed.
In this talk, Daniel Bryant will share with you what service meshes are, why they're well-suited for microservice deployments, and how best to use a service mesh when you're deploying microservices. The presentation begins with a brief history of the development of service meshes and the motivations of the unicorn organisations that developed them. From there, you'll learn about some of the currently available implementations targeting microservice deployments, such as Istio/Envoy, Linkerd, and NGINX Plus.
DevOps: a story about automation, open source & the Cloud (Adrian Todorov)
Presented during the PASS day at Vanier College, 2018 for Computer Science Technology students in order to teach them about DevOps Transformation, monolithic app development lifecycle, architectural changes, Terraform, Kubernetes, Ansible (automation), open source & the Cloud. We also talked about virtualization vs containerization, migration from traditional app to modern app, the container advantage, and the hiring of a DevOps intern.
Faster, Safer and 100% User-Centric Applications at Equifax with Docker (Docker, Inc.)
Equifax faced challenges around software development lifecycles, vulnerability detection, and security. They implemented Docker to improve security, reduce development cycles, and support multiple platforms. Their solution involved Docker Swarm for infrastructure, CI/CD pipelines for builds and deployments, and Dockerized applications including APIs, web apps, and mobile apps. This allowed them to deliver a new product in 3 months with greater transparency, faster deployments, improved security and scaling.
The document is an agenda for a Watson on Bluemix meetup. It includes:
- An overview of Bluemix runtime, services, and DevOps architecture by Animesh Singh.
- A discussion of Watson Cloud and Cognitive Services by Anthony Stevens.
- A demo of a Watson application by Wade Barnes, who will walk through deploying a Node.js app on Bluemix that uses the Watson User Modeling service.
Overseeing Ship's Surveys and Surveyors Globally Using IoT and Docker by Jay ... (Docker, Inc.)
Fugro Chance Inc. oversees ship surveys globally using IoT and Docker. They developed a solution using AWS, Docker, and microservices to support a real-time web application for ship tracking. Key challenges included supporting services that need to run together and efficiently deploying new versions. They addressed this using SupervisorD to run multiple services in a single Docker container. This allows flexible development and deployment of future microservices.
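The SupervisorD arrangement described — several cooperating services inside one Docker container — boils down to a configuration along these lines. The program names and command paths are hypothetical, not Fugro's actual services:

```ini
; supervisord.conf: run multiple services in a single Docker container.
[supervisord]
nodaemon=true            ; keep supervisord in the foreground as the container's PID 1

[program:api]
command=/usr/local/bin/tracking-api     ; hypothetical real-time tracking service
autorestart=true                        ; restart the service if it exits

[program:worker]
command=/usr/local/bin/position-worker  ; hypothetical background worker
autorestart=true
```

The container's Dockerfile would then end with something like `CMD ["supervisord", "-c", "/etc/supervisord.conf"]` so that both programs start and stop together with the container.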
DCSF 19 Modern Orchestrated IT for Enterprise CMSDocker, Inc.
Wiley’s Education Services (WES) leverages a mix of CMS platforms across their 50+ student information sites for major universities throughout the world. Traditionally these sites have been housed as part of a multi-site CMS install on a single VM, and eventually across 2 VMs. Failure of either one of these VMs would mean an outage for one or all of the hosted sites. As Wiley’s leadership looked forward, they recognized the risks involved with their current design and identified Docker as a way to mitigate these risks.
WES began their investigation in to Docker to address issues of fault tolerance, consistency, and portability. They used this opportunity to modernize their workflows and reduce risk by promoting Docker images through their dev, preview, and production environments using CI/CD. This increased their confidence in deployments and reduced the need for maintenance windows. Early in the process, WES brought in BoxBoat as subject matter experts to accelerate their migration, and architect their Docker EE solution. Through the use of well-defined workflows and persistent storage, applications are continually redeployed and restored between environments with zero downtime and no loss of data. Additionally developers can pull down and run any of the sites independently with configuration that matches production. Join this sessions to learn about the challenges and triumphs that Wiley faced when orchestrating CMS deployments in Docker!
Building a Platform for the People - IBM's Open Cloud Architecture Summit - A...Chip Childers
The document discusses the shift towards cloud platforms and microservices architectures to enable continuous delivery. It argues that platforms are needed to manage the increasing complexity of distributed systems and provide services like deployment, scaling, and monitoring. The Cloud Foundry platform is presented as fulfilling this need by automating operations and allowing developers to focus on building applications instead of infrastructure. The vision is for a ubiquitous, flexible, portable, and interoperable cloud computing environment underpinning a large ecosystem of applications.
Evolving Your Distributed Cache In A Continuous Delivery World: Tyler VangorderRedis Labs
1. The document discusses the evolution of caching strategies at Build.com as their systems and traffic grew rapidly over time. They initially used a Java-based distributed cache and later switched to Redis which proved more effective.
2. As Build.com moved to a continuous delivery model with multiple environments, they needed a "shared" cache that both environments could use. They implemented a unified caching model where each version of code has its own bucket in the cache but objects can be promoted from older versions if they are compatible.
3. The key aspects of the unified caching model are using a serialization checksum to detect changes between versions, using a build number as the cache key so each version is separate, and attempting to promote
DockerCon EU 2017 - General Session Day 1Docker, Inc.
This document discusses Docker and its container platform. It highlights Docker's momentum in the industry with over 21 million Docker hosts and 24 billion container downloads. The document then summarizes Docker's container platform and how it enables applications across diverse infrastructures and throughout the lifecycle. It also discusses how Docker can help modernize traditional applications and provide portability, agility and security. The remainder of the document focuses on how MetLife leveraged Docker to containerize applications, seeing benefits like a 70% reduction in VMs and 66% reduction in costs. It outlines Docker Enterprise Edition and its value in areas like security, multi-tenancy, policy automation and management capabilities for Swarm and Kubernetes.
Gene Kim gave a presentation on his 15-year journey studying high performing IT organizations and their use of DevOps practices. He discussed how traditional IT operations created conflict between development and operations teams. However, companies like Google, Amazon and Netflix achieved much higher performance through practices like continuous integration, deployment of smaller changes frequently, automated testing, and monitoring production environments. These practices improved flow, feedback and continuous learning.
server to cloud: converting a legacy platform to an open source paasTodd Fritz
This session discusses the process to move legacy applications "into the cloud". It is intended for a diverse audience including developers, architects, and managers. We will discuss techniques, methodologies, and thought processes used to analyze, design, and execute a migration strategy and implementation plan -- from planning through rollout and operational.
An important aspect of this is the necessity for technical staff to effectively communicate to mid-level management how these design decisions and strategies translate into cost, complexity and schedule.
Commonly used migration strategies, cloud technologies, architecture options, and low level technologies will be discussed.
The case will be made that investing in strategic refactoring and decomposition during the migration will reap the benefits of a modern, decoupled and simplified system.
The end game is alignment with, and adoption of, current best practices around PaaS, SaaS, SOA, event-driven architectures, and message-oriented middleware, at scale in the cloud, to provide quantifiable business value.
This talk will focus more on the big picture, at times delving into technical architectures and discussion of certain technologies and service providers.
Use of Containers (Docker) is evangelized for decoupling and decomposing legacy systems.
Containerization provides benefits like consistent environments, lightweight packages, and efficient resource utilization and isolation. Kubernetes is an open-source platform that provides tools to automate deployment, scaling, and management of containerized applications. It groups containerized applications into logical units called pods and uses labels to identify pods. It provides features like service discovery, load balancing, rolling updates, and self-healing capabilities. Kubernetes aims to provide a platform for automating deployment, scaling and operations of application containers across clusters of hosts.
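As a toy illustration of the label mechanism described above, the following sketch mimics how a Kubernetes Service picks its backing pods by label selector. This is not the Kubernetes API; the pod names, labels, and selector are invented for the example.

```python
# Toy model of Kubernetes label selection (illustrative names only).

def matches(selector: dict, labels: dict) -> bool:
    """A pod matches when every selector key/value appears in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

service_selector = {"app": "web"}  # analogous to a Service's spec.selector
endpoints = [p["name"] for p in pods if matches(service_selector, p["labels"])]
print(endpoints)  # the two "app: web" pods become the service endpoints
```

In real Kubernetes the control plane performs this matching continuously, which is what makes rolling updates and self-healing possible: pods come and go, and the selector re-resolves to whatever currently matches.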
Serverless architectures are one of the hottest trends in cloud computing this year, and for good reason. There are several technical capabilities and business factors coming together to make this approach compelling from both an application development and deployment cost perspective. The new OpenWhisk project provides an open source platform to enable these cloud-native, event-driven applications.
This talk will lay out the technical and business drivers behind the rise of serverless architectures, provide an introduction to the OpenWhisk open source project (and describe how it differs from other services like AWS Lambda), and give a demonstration showing how to start developing with this new cloud computing model using the OpenWhisk implementation available on IBM Bluemix.
Lightning talk and lab presented by IBM Cloud Software Engineer, Andrew Bodine.
GlueCon - Monitoring Microservices and Containers: A Challenge - Adrian Cockcroft
This document discusses the challenges of monitoring microservices and containers. It provides six rules for effective monitoring: 1) spend more time on analysis than data collection, 2) reduce latency of key metrics to under 10 seconds, 3) validate measurement accuracy, 4) make monitoring more available than services monitored, 5) optimize for distributed cloud-native applications, 6) fit metrics to models to understand relationships. It also examines models for infrastructure, flow, and ownership and discusses speed, scale, failures, and testing challenges with microservices.
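Rule 2 above (keep key-metric latency under 10 seconds) can be sketched as a simple budget check over last-seen timestamps. The metric names and timestamps below are invented for illustration.

```python
# Sketch of a metric-latency budget check (illustrative data).

LATENCY_BUDGET_S = 10.0

def late_metrics(samples: dict, now: float) -> list:
    """Return names of metrics whose newest sample is older than the budget."""
    return [name for name, ts in samples.items() if now - ts > LATENCY_BUDGET_S]

# Map of metric name -> timestamp of its most recent sample.
samples = {"cpu.user": 97.0, "req.rate": 99.5, "disk.io": 85.0}
print(late_metrics(samples, now=100.0))  # disk.io is 15s stale
```

A real monitoring system would emit an alert for each stale metric, which is also how rule 4 (monitoring more available than the monitored services) gets exercised.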
MetLife has adopted a containerization strategy using Docker to modernize its traditional applications. Some key points:
- MetLife aims to embrace containers ubiquitously across its portfolio to improve speed, stability, scalability, security and reduce costs.
- It has seen success with its strategy, such as a 70% reduction in infrastructure costs and millions of dollars avoided in costs.
- MetLife provides training and knowledge sharing programs to help developers and operations teams adopt containers. It also offers services to support customers in piloting, putting early apps into production, and migrating apps at scale to containers.
This document provides an overview of DevOps concepts and practices through examples. It discusses DevOps as a culture and movement emphasizing collaboration between development and operations teams. The document demonstrates infrastructure as code, continuous integration and delivery practices like building and packaging applications, as well as deploying applications and managing dynamic configurations. It also discusses monitoring, troubleshooting and creating a feedback loop in production. The document aims to help attendees grasp DevOps essentials and leave with open questions.
Making Friendly Microservices by Michele Titolo - Docker, Inc.
Small is the new big, and for good reason. The benefits of microservices and service-oriented architecture have been extolled for a number of years, yet many forge ahead without thinking of the impact on the users of the services. Consuming microservices can be enjoyable as long as the developer experience has been crafted as finely as the service itself. But just like with any other product, there isn't a single kind of consumer. Together we will walk through some typical kinds of consumers, what their needs are, and how we can create a great developer experience using brains and tools like Docker.
DCSF 19 - Developing Apps with Containers, Functions and Cloud Services - Docker, Inc.
Cloud native applications are composed of containers, serverless functions and managed cloud services.
What is the best set of tools on your desktop to provide a rapid, iterative development experience and package applications using these three components?
This hands-on talk will explain how you can complement Docker Desktop, with its local Docker engine and Kubernetes cluster, with open source tools such as the Virtual Kubelet, Open Service Broker, the Gloo hybrid app gateway, Draft, and others, to build the most productive development inner loop for these types of applications.
It will also cover how you can use the Cloud Native Application Bundle (CNAB) format and its implementation in the Docker app experimental tool to package your application and manage it with container supply chain tooling such as Docker Hub.
Serverless architectures are one of the hottest trends in cloud computing this year, and for good reason. There are several technical capabilities and business factors coming together to make this approach compelling from both an application development and deployment cost perspective. The new OpenWhisk project provides an open source platform to enable these cloud-native, event-driven applications.
This talk will lay out the technical and business drivers behind the rise of serverless architectures, provide an introduction to the OpenWhisk open source project (and describe how it differs from other services like AWS Lambda), and give a demonstration showing how to start developing with this new cloud computing model using the OpenWhisk implementation available on IBM Bluemix.
Presented on October 12, 2016 at the NYC Bluemix meetup
microXchg 2018: "What is a Service Mesh? Do I Need One When Developing 'Cloud... - Daniel Bryant
While service meshes may be the next "big thing" in microservices, the concept isn't new. Classical SOA attempted to implement similar technology for abstracting and managing all aspects of service-to-service communication, and this was often realized as the much-maligned Enterprise Service Bus (ESB). Several years ago similar technology emerged from the microservice innovators, including Airbnb (SmartStack for service discovery), Netflix (Prana integration sidecars), and Twitter (Finagle for extensible RPC), and these technologies have now converged into the service meshes we are currently seeing being deployed.
In this talk, Daniel Bryant will share with you what service meshes are, why they're well-suited for microservice deployments, and how best to use a service mesh when you're deploying microservices. This presentation begins with a brief history of the development of service meshes, and the motivations of the unicorn organisations that developed them. From there, you'll learn about some of the currently available implementations that are targeting microservice deployments, such as Istio/Envoy, Linkerd, and NGINX Plus.
DevOps: a story about automation, open source & the Cloud - Adrian Todorov
Presented during the PASS day at Vanier College, 2018 for Computer Science Technology students in order to teach them about DevOps Transformation, monolithic app development lifecycle, architectural changes, Terraform, Kubernetes, Ansible (automation), open source & the Cloud. We also talked about virtualization vs containerization, migration from traditional app to modern app, the container advantage, and the hiring of a DevOps intern.
Faster, Safer, and 100% User-Centric Applications at Equifax with Docker - Docker, Inc.
Equifax faced challenges around software development lifecycles, vulnerability detection, and security. They implemented Docker to improve security, reduce development cycles, and support multiple platforms. Their solution involved Docker Swarm for infrastructure, CI/CD pipelines for builds and deployments, and Dockerized applications including APIs, web apps, and mobile apps. This allowed them to deliver a new product in 3 months with greater transparency, faster deployments, improved security and scaling.
The document is an agenda for a Watson on Bluemix meetup. It includes:
- An overview of Bluemix runtime, services, and DevOps architecture by Animesh Singh.
- A discussion of Watson Cloud and Cognitive Services by Anthony Stevens.
- A demo of a Watson application by Wade Barnes, who will walk through deploying a Node.js app on Bluemix that uses the Watson User Modeling service.
Overseeing Ship's Surveys and Surveyors Globally Using IoT and Docker by Jay ... - Docker, Inc.
Fugro Chance Inc. oversees ship surveys globally using IoT and Docker. They developed a solution using AWS, Docker, and microservices to support a real-time web application for ship tracking. Key challenges included supporting services that need to run together and efficiently deploying new versions. They addressed this using SupervisorD to run multiple services in a single Docker container. This allows flexible development and deployment of future microservices.
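The SupervisorD approach described above is typically driven by a small configuration file along the following lines. This is a minimal sketch; the program names and paths are hypothetical, not Fugro's actual services.

```ini
; Minimal supervisord.conf sketch for running two services in one container.
; Program names and command paths are illustrative assumptions.
[supervisord]
nodaemon=true            ; keep supervisord in the foreground as the container's PID 1

[program:api]
command=/usr/local/bin/api-server
autorestart=true         ; restart the service if it exits

[program:tracker]
command=/usr/local/bin/ship-tracker
autorestart=true
```

The container's entrypoint then launches `supervisord`, which starts and supervises both programs, giving a single deployable image that runs services which must live together.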
Software rotting - 28 Apr - DeveloperWeek Europe 2022 - Giulio Vian
"Software rotting or why you need to change your approach to security"
28 April 2022
DeveloperWeek Europe 2022
https://www.developerweek.com/europe/conference/conference-tracks/devops-security/
A new phenomenon stands out in recent years: security must pervade the entire software development lifecycle.
Except it isn't. The current generation of processes and tools lacks crucial features for properly managing modern security risks.
Think of the Log4J event. Were you able to identify all affected components? Were they internally developed, or did you need vendor support? How fast were you able to deliver a fix?
In this talk we'll explore the challenges, what you can do with current tools, and which gaps should be addressed by communities through better practices and new tools.
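The "could you identify all affected components?" question above boils down to scanning a dependency inventory for vulnerable versions. The sketch below illustrates the idea over a flat inventory; the component names, versions, and fixed-version threshold are invented for the example, not an authoritative advisory.

```python
# Hedged sketch: flag components carrying a vulnerable dependency version.
# Inventory data and the fixed-in version are illustrative assumptions.

def parse_version(v: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) so tuples compare numerically."""
    return tuple(int(x) for x in v.split("."))

def affected(inventory, package, fixed_in):
    """Components carrying `package` at a version older than the fix."""
    return [comp for comp, pkg, ver in inventory
            if pkg == package and parse_version(ver) < parse_version(fixed_in)]

inventory = [  # (component, dependency, version) -- invented data
    ("billing-service", "log4j-core", "2.14.1"),
    ("report-service",  "log4j-core", "2.17.1"),
    ("web-frontend",    "slf4j-api",  "1.7.30"),
]
print(affected(inventory, "log4j-core", "2.17.1"))  # ['billing-service']
```

The gap the talk points at is that many organizations had no such inventory to scan in the first place, which is where SBOM tooling and better practices come in.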
The term software architecture is overloaded and means different things to different people. In this talk we will define various architecture-related concepts to gain a better understanding of it. We will also define what agile architecture means, and what it does not mean. We then look at the monolith architecture, the traditional architecture most people use today. The problem is that today's requirements exceed what this architecture can handle, so people have been looking at other approaches, such as lightweight Service Oriented Architecture, and at how to build software as services, micro-apps, or microservices.
We also look at layering, the oldest trick in the book, built on the divide-and-conquer approach.
OSSF 2018 - Brandon Jung of GitLab - Is Your DevOps 'Tool Tax' Weighing You D... - FINOS
The document discusses how a single application that handles the entire software development lifecycle can help alleviate the "DevOps tool tax" caused by managing and integrating multiple point solutions. It provides an overview of GitLab's Auto DevOps feature which automates the build, test, security, deployment, and monitoring pipelines in a single system. By consolidating tools and processes, Auto DevOps helps reduce integration complexity and accelerate development cycles.
Is Technical Debt the right metaphor for Continuous Update? - Giulio Vian
Conf42 DevSecOps 2022 - December 1st 2022
The environmental pressure on software, mainly security, has dramatically changed in a few years. Sticking to the Technical Debt category will crush IT, and the business. So let's introduce a new term, Technical Inflation, and change how we plan, budget, manage changes, and implement automation.
The Anatomy of Continuous Deployment at Scale - 100 deploys a week at Envato ... - John Viner
The Envato market development team runs a two sided marketplace platform that powers sites such as themeforest.net and graphicriver.net. This presentation describes how they deploy the application up to 25 times a day while serving up to 200 million requests a week.
Agile and continuous delivery – How IBM Watson Workspace is built - Vincent Burckhardt
Journey and transformations that we have been taking at IBM to implement Cloud Native application. Covers culture, architecture and pipeline changes. This presentation was given at IBM Connect 2017 in San Francisco in Feb 2017.
Secure Your DevOps Pipeline - Best Practices Meetup 08022024 - Lior Mazor
Our technology, work processes, and activities all depend on whether we trust our software to be developed in a safe and secure manner. Join us virtually for our upcoming "Secure Your DevOps Pipeline: Best Practices" Meetup to learn how to integrate security into the development process, advanced DevSecOps methods, how to implement secure-coding analysis, and how to manage software security risks.
Today, it is critical that IT teams are able to easily, consistently deploy to production. Running Docker containers on Amazon Web Services makes it possible to engineer a compliant and DevOps-friendly environment from the ground up. Spring Venture Group successfully migrated to AWS with Docker containers and leveraged Logicworks to migrate to AWS and automate infrastructure build-out and deployment. Join our webinar to learn how Spring Venture Group, an innovative insurance brokerage, reduced risk and improved deployment velocity with Logicworks, AWS, and Docker.
See how IT risks impact your business. CAST helps you check software performance, stability, maintainability, and security vulnerabilities, areas in which CAST excels and successfully differentiates itself from code analyzers. CAST's Application Intelligence Platform and Rapid Portfolio Analysis solutions can help you avoid these types of "software glitches" or "software risks" by allowing you to gain greater visibility through automated code review that identifies the root causes of risks before they become production problems, while expediting time-to-market with shorter release timelines and improved business agility.
Cloud continuous integration - A distributed approach using distinct services - André Agostinho
In cloud computing services, the ability to share and deliver services, scale computing resources, and distribute data storage and files requires a deployment process aligned with agility and scalability. Continuous integration can automate processes, reducing operational effort, improving code quality, and reducing time to market. This presentation shows a proposal for distributed continuous integration using different cloud computing services, from planning to the execution of scenarios.
DevOps and Safety Critical Systems discusses applying DevOps practices like continuous deployment to safety critical systems. It proposes "partial continuous deployment" which involves:
1. Identifying and isolating safety critical portions of a system's architecture.
2. Applying continuous deployment practices to non-safety critical portions.
3. Continuing traditional testing methods for safety critical portions.
It discusses past efforts in smart grid security controls and hardening deployment pipelines that provide foundations for this approach. Key steps include explicitly defining safety requirements, analyzing architectures to identify minimum required safe components, and refactoring to separate safe and non-safe concerns. Regulatory approval is viewed as a major gate to implementing partial continuous deployment for real safety
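The three steps above amount to a routing decision per component: safety-critical parts keep their traditional verification cycle, everything else flows through continuous deployment. The sketch below illustrates that policy; the component names and the two-route policy are assumptions for illustration, not the paper's actual system.

```python
# Sketch of "partial continuous deployment": route components by
# safety classification. Names are illustrative assumptions.

SAFETY_CRITICAL = {"relay-controller", "grid-protection"}  # step 1: isolate

def deployment_route(component: str) -> str:
    if component in SAFETY_CRITICAL:
        return "traditional-testing"   # step 3: full traditional verification
    return "continuous-deployment"     # step 2: automated CD pipeline

for c in ["web-dashboard", "relay-controller", "billing"]:
    print(c, "->", deployment_route(c))
```

The hard work, of course, is step 1: refactoring the architecture so that the classification boundary is real, with the safety-critical set kept as small as the analysis allows.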
3784_Streamlining_the_development_process_with_feature_flighting_and_Azure_cl... - Crystal Thomas
A large organization within Microsoft IT successfully streamlined its development processes by adopting feature flagging and using Azure cloud services. This allowed them to deliver smaller changes more frequently, reduce risk, and improve the customer experience. Feature flagging enabled isolating new code and exposing it gradually to specific user segments for testing before broader release. Telemetry data from Azure Application Insights provided feedback to evaluate changes. Using Azure virtual machines provided fast, flexible development environments with reduced overhead. The new approach eliminated constraints of traditional waterfall models like multiple code branches and environments.
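The gradual-exposure mechanism described above is commonly implemented by hashing the user into a stable bucket and comparing against the rollout percentage, so a given user sees a consistent experience across requests. The sketch below is an illustrative version of that pattern, not Microsoft's implementation; the flag name and user ids are invented.

```python
# Hedged sketch of percentage-based feature flighting (illustrative names).
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Hash (flag, user) into a stable bucket 0-99; enable if inside the slice."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Deterministic per user: the same user always lands in the same bucket.
enabled = sum(flag_enabled("new-checkout", f"user-{i}", 20) for i in range(1000))
print(f"{enabled} of 1000 users in the 20% flight")  # roughly 200
```

Hashing the flag name together with the user id keeps buckets independent across flags, so ramping one feature does not always hit the same cohort of users.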
You are already the Duke of DevOps: you have mastered CI/CD, some feature teams include ops skills, your TTM rocks! But you have some difficulty scaling it. You have some quality issues, QoS at risk. You are quick to adopt practices that increase flexibility of development and velocity of deployment. An urgent question follows on the heels of these benefits: how much confidence can we have in the complex systems that we put into production? Let's talk about the next hype of DevOps: SRE, error budgets, continuous quality, observability, Chaos Engineering.
The document discusses the origins of software engineering as a discipline. It summarizes discussions from a conference in 1968 where the term "software engineering" was first used. Key points discussed included that testing is best done iteratively during design rather than after, that small groups tend to be more successful than large groups on software projects, and that an organizational structure is needed for communication and decision making in large groups. The document also discusses criticisms of the "waterfall" development model and advocates for an iterative approach.
This document discusses improvements to agile methodology through continuous integration using dynamic regression, code bisection, and code quality. It proposes mapping source code to test suites and running only relevant tests after code changes to speed up testing. When failures occur, code bisection is used to quickly identify responsible code changes. Code quality is also assessed continuously using tools like Sonar to monitor for issues. The approaches aim to improve agility, reduce bug fixing time, and ensure high code quality.
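The code-bisection step above is a binary search over an ordered list of changes for the first one that makes the tests fail, the same idea as `git bisect`. In this sketch the `is_bad` predicate stands in for "check out this revision, build it, run the tests"; the commit data is invented.

```python
# Sketch of code bisection over an ordered change history.

def first_bad(changes, is_bad):
    """Index of the first failing change, assuming a single good->bad
    transition (the same assumption git bisect makes)."""
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(changes[mid]):
            hi = mid          # culprit is at mid or earlier
        else:
            lo = mid + 1      # culprit is strictly after mid
    return lo

changes = list(range(1, 101))                      # 100 commits, by number
culprit = first_bad(changes, lambda c: c >= 73)    # pretend commit 73 broke it
print(changes[culprit])                            # prints 73
```

With n changes this needs only about log2(n) test runs (7 for 100 commits), which is why bisection pairs so well with the fast, targeted test selection the document proposes.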
Using security to drive chaos engineering - April 2018 - Dinis Cruz
Presentation I delivered at ISSA UK "Application Security - London Chapter Meeting" https://www.eventbrite.co.uk/e/application-security-london-chapter-meeting-tickets-42284085839
Similar to L'impatto della sicurezza su DevOps (20)
How to Implement Governance in Your Platform and Work Happily... - Giulio Vian
DevOps Conf 2024 - Roma - 10 mag 2024
https://devopsconf.dotnetdev.it
The tools we use for development and release are essential for controlling the processes in use and ensuring they meet business, legal, and regulatory requirements.
In this session I will show how to go from abstract policies to implementations of them on platforms such as Azure DevOps or GitHub, so that you can prevent problems beforehand and then verify that operations were carried out correctly. And become friends with the Risk and Audit director.
Is Technical Debt the right metaphor for Continuous Update - AllDayDevOps 2022 - Giulio Vian
The environmental pressure on software has dramatically changed in a few years, both in quality and in quantity. Security is the main force, but other dynamics can be seen, including the adoption of agile, shorter product cycles, and more. As a consequence, software is no longer written once and run many times: it must be updated continuously. If we, as an industry, continue to use the classic category of Technical Debt, IT will be crushed by the forces at hand, pulling the business side along. I propose to introduce a new term for this phenomenon: Technical Inflation. It is not simply to mark the difference but to help discuss and explain to other stakeholders what is happening on the technical side and its effect on the entire business. The new perspective impacts how we plan and budget, how we manage changes and automation, and the need to excel in engineering to save the bottom line.
A map for DevOps on Microsoft Stack - MS DevSummit - Giulio Vian
This document provides an overview of DevOps on the Microsoft stack. It discusses three ways of implementing DevOps: 1) Flowing work from idea to production using tools like GitHub, Azure Boards, Azure DevOps Server, and infrastructure as code. 2) Gathering feedback using observability tools like Application Insights and alerting. 3) Fostering communication, documentation, learning and fun through tools like GitHub Pages, Teams, LinkedIn Learning and DevTest Labs. The document recommends resources for learning more about DevOps and the Microsoft stack.
Pipeline your Pipelines - 2020 All Day DevOps - Giulio Vian
Giulio Vian discusses automating build infrastructure by treating it as code that can be versioned, backed up, and rebuilt. This allows building environments to be rebuilt if lost, fixes to be deployed to production, and old versions to be rebuilt. Infrastructure as code uses version control, secrets stores, and pipelines to build runtime, CI/CD, and application infrastructure in a fractal manner.
How to write cloud-agnostic Terraform code - Incontro DevOps Italia 2020 - Giulio Vian
The document discusses how to write Terraform code that is cloud-agnostic and not specific to a single provider. It recommends abstracting common services like networking and computing blocks, and using variables and modules to deploy resources for multiple platforms. Examples are given using count and conditional deployment based on variables, as well as referencing subnets and regions in a provider-independent way. The document aims to help make Terraform configurations reusable across different cloud providers.
The document lists the top 10 pipeline mistakes, including unsafe secrets, untraceable artifacts, environment-specific deploy packages, lack of testing, use of bleeding edge technology, overly complex builds, flaky builds, overuse of versioning, implicit assumptions, and reliance on dubious plugins. The author provides recommendations to address each mistake, such as using secret stores, adding versioning and links to artifacts, deploying the same packages to all environments, including quality checks, ensuring deployable technology and available agents, splitting processes, enabling reproducible builds, adding version specifications, checking tool requirements, and using autonomous pipelines.
Introduction to Terraform with Azure flavor - Giulio Vian
Terraform is a tool for provisioning and managing infrastructure as code. It allows defining and deploying infrastructure through configuration files rather than interactive console tools. The configuration files describe the components needed for an application and their relationships, and Terraform uses this information to provision and update infrastructure safely and efficiently. Terraform works by defining resources such as compute instances, storage, and networking components using a high-level configuration language, and then generates and executes the plans to build, change, and version those resources. It supports a variety of cloud platforms including Azure.
How collaboration works between Dev and Ops - DevOps Agile Testing and Test S... - Giulio Vian
This document summarizes tools and techniques for collaboration between Dev and Ops teams, including:
- Shared version control of infrastructure as code, secrets stores, and documentation to provide transparency.
- The use of dashboards, chat, wikis, and monitoring and logging tools to share information across teams.
- Having Dev and Ops use the same environment names and classifications to facilitate coordination between pipelines, dashboards, and other systems.
Using SQL Server for Linux and Docker to Simplify Testing Processes - ... - Giulio Vian
DevOps@Work 2020
Roma, 16 January 2020
https://www.domusdotnet.org/events/
SQL Server for Linux opens a new world of possibilities for testing SQL code in ways that were previously unthinkable.
Let's explore some options, such as:
- Restoring the database to a known state between tests
- Trying multiple configuration variants
- Running integration tests in the CI pipeline
- Testing schema migrations
- Attaching large databases by running the containers in the cloud
The document discusses automating build and deployment pipelines using infrastructure as code. It recommends:
1. Treating development environments like production by making them automated, disposable, and recreated from code.
2. Not sharing secrets between environments and making credentials, keys, and other sensitive data unique to each automated environment.
3. Automating the creation of all infrastructure components including VMs, containers, Kubernetes clusters from configuration files to ensure they can be recreated identically on any cloud provider.
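Recommendation 2 above (no shared secrets across environments) can be as simple as minting an independent random credential per environment at provisioning time. This is a minimal sketch; the environment names are illustrative, and in practice the generated values would go straight into a secrets store, never into code or version control.

```python
# Sketch: one independent random credential per environment.
import secrets

def provision_secrets(environments):
    """Generate a fresh, unrelated credential for each environment."""
    return {env: secrets.token_urlsafe(32) for env in environments}

creds = provision_secrets(["dev", "test", "prod"])
assert len(set(creds.values())) == 3   # no environment shares a secret
```

Because each value is generated independently, leaking a dev credential tells an attacker nothing about prod, which is the point of the recommendation.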
Why is DevOps vital for my company's business - Giulio Vian
The document discusses why DevOps is vital for companies in the modern business landscape. It notes that software is now central to many businesses and products, like cars which contain over 150 million lines of code. DevOps applies lean principles to streamline the process of delivering software by reducing waste and improving feedback loops between development and operations teams. Implementing DevOps through systems thinking, amplifying feedback, and continuous experimentation can lead to benefits like less risk, faster feedback, and increased value delivery and organizational efficiency.
GLV OnAir October 2019
In this introduction to GitHub Actions we will see the basic elements, what you can do, what turns out to be complicated or impossible to do, and how to find information and examples.
Terraform for Azure: the good, the bad and the ugly - Giulio Vian
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. The presenter discusses the good, bad, and ugly aspects of using Terraform with Azure. The good includes its simple configuration language and ability to integrate with Azure and automate deployments. The bad includes limitations in its language and some errors being difficult to debug. The ugly involves challenges around managing state files and keeping infrastructure definitions well organized. Overall, Terraform provides benefits but also requires understanding its quirks and handling state carefully.
How we moved our environments to the cloud - Giulio Vian
Šibenik, 4 April 2019
http://windays.hr/
In this talk, you will hear about the DevOps journey in our company, from the initial brown-field, all-manual state to our current situation, where we migrated (almost) everything to the cloud using automation in a few months. Not a migration but a rebuilding of the environment using Infrastructure-as-Code tools: Terraform, PowerShell, Ansible, TFS/Azure DevOps. Balancing a high-level view with useful practical tips, we will touch on what informed our decisions in terms of priorities and technologies, some lessons learned, and how the legacy constraints helped or hindered us.
Customize Azure DevOps using Aggregator - Giulio Vian
Šibenik, 4 April 2019
http://windays.hr/
We will see how to customize Azure DevOps (ex Visual Studio Team Services, ex Team Foundation Server) using a powerful tool like Aggregator.
Version 2 made adding rules to TFS on-premises a simple task; now version 3 offers full support for Azure DevOps. Furthermore, rules are more powerful, no longer limited to Boards (work item) events but extended to new event types, such as Git events.
You can please your Project Manager/Scrum Master by automating task creation or roll-ups, or automatically inject a set of reviewers into a Pull Request.
Even if you will never use Aggregator, you can learn something from its use of Azure and Azure DevOps API and build your own tooling.
Moving a Windows environment to the cloud - Giulio Vian
Incontro DevOps Italia 2019
Bologna, 8 March 2019
https://2019.incontrodevops.it/
About the DevOps journey in our company, from the initial brown-field, all-manual state to our current situation, where we migrated (almost) everything to the cloud using automation in a few months. Not a migration but a rebuilding of the environment using Infrastructure-as-Code tools: Terraform, PowerShell, Ansible, TFS/Azure DevOps. Balancing a high-level view with useful practical tips, we will touch on what informed our decisions in terms of priorities and technologies, some lessons learned, and how the legacy constraints helped or hindered us.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling Extensions - Peter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended to your needs. This session will showcase various tooling extensions which can significantly boost your development experience, so that you can really work offline, transpile the code in your project to use even newer versions of ECMAScript (than 2022, which is what the UI5 tooling supports right now), consume any npm package of your choice in your project, use different kinds of proxies, and even stitch UI5 projects together during development to mimic your target environment.
Transform Your Communication with Cloud-Based IVR Solutions - TheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Most important new features of Oracle 23c for DBAs and Developers. You can get more details from my YouTube channel video at https://youtu.be/XvL5WtaC20A
SMS API Integration in Saudi Arabia | Best SMS API Service - Yara Milbes
Discover the benefits and implementation of SMS API integration in the UAE and Middle East. This comprehensive guide covers the importance of SMS messaging APIs, the advantages of bulk SMS APIs, and real-world case studies. Learn how CEQUENS, a leader in communication solutions, can help your business enhance customer engagement and streamline operations with innovative CPaaS, reliable SMS APIs, and omnichannel solutions, including WhatsApp Business. Perfect for businesses seeking to optimize their communication strategies in the digital age.
Artificial Intelligence and XPath Extension Functions - Octavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
Microservice Teams - How the cloud changes the way we work - Sven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
Top 9 Trends in Cybersecurity for 2024 - devvsandy
Security and risk management (SRM) leaders face disruptions on technological, organizational, and human fronts. Preparation and pragmatic execution are key for dealing with these disruptions and providing the right cybersecurity program.
3. What it's all about
The environmental pressure on software has changed dramatically in a few years, in quality and in quantity, mainly over security concerns.
4. Pressure impact
How we automate. How we plan and budget.
I suggest introducing a new term: Technical Inflation. Inflation differs from Technical Debt: software value decreases (even drops) over time without intervention.
10. What is DevOps?
«The result of applying Lean principles to the technology value stream»
The DevOps Handbook, Gene Kim et al., 2016
The Three Ways: The Principles Underpinning DevOps
14. Finding code
Which code matches production? master, main, release/*, v* tags
Multiple production branches: release/* and hotfix/*
Untagged releases
SCA tools pipeline-bound
Rarely built code
Pipeline does not work anymore
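The search this slide describes can be sketched as a tiny resolver: given the version string that production reports, check the naming conventions listed above (v* tags, release/* and hotfix/* branches) for a match. This is only an illustration; all ref names are hypothetical examples, not an actual tool.

```python
# Sketch: resolve which git ref most likely matches the version running in
# production, given the naming conventions listed on the slide.
# All ref names below are hypothetical examples.

def find_production_refs(prod_version, refs):
    """Return refs that plausibly match a production version string.

    Considers SemVer-style tags (v1.2.3) and release/* or hotfix/*
    branches carrying the same version; tags are listed first because
    they are the strongest signal.
    """
    candidates = []
    for ref in refs:
        if ref.startswith(("v", "release/", "hotfix/")) and prod_version in ref:
            candidates.append(ref)
    return sorted(candidates, key=lambda r: 0 if r.startswith("v") else 1)

refs = ["main", "v1.4.2", "release/1.4", "release/1.4.2", "hotfix/1.3.9"]
print(find_production_refs("1.4.2", refs))  # ['v1.4.2', 'release/1.4.2']
```

In a real repository the `refs` list would come from `git for-each-ref`, and untagged releases would still need the CI/CD tool to trace backwards, as the slide warns.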
15. Vulnerability may affect
Application stack: container images, virtual machine images, the application itself
Application code: libraries (internal, 3rd party)
Self-contained run-time
Layer diagram: Application / Run-time / OS libraries / Docker base image / Self-contained
19. Everyone else
Many teams, many repos. My company has 3,000 repos across 100 teams, storing over 13 million lines of code, and using 2,800 pipelines.
A single vulnerability may affect tens of teams and hundreds of repos.
Image: The Crowd For DMB 1 by Moses
20. Redeploy. Every. Day.
Simplest pattern, once automated patching is in place
Zero-downtime deploy in place
Consider pipeline resources
Image: the gerbil wheel pose by dbgg1979
21. Set up a Code Metabase
Reverse indexes:
Library → Binaries [SCA tool]
O.S. API → Binaries [SAST tool]
Binary → Pipelines [artifact store]
Pipeline → Repo(s) [pipeline tool]
Diagram nodes: Library, Repo, Pipeline, Binaries, Production
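The reverse indexes above can be sketched as plain lookup tables: given a vulnerable library, walk library → binaries → pipelines → repos. The data below is invented; in practice each table would be exported from the SCA tool, the artifact store, and the pipeline tool respectively.

```python
# Sketch of the "Code Metabase": chained reverse indexes that resolve
# a vulnerable library to every repository that ships it.
# All names below are made-up examples.

library_to_binaries = {"log4j-2.14": ["billing.jar", "reports.jar"]}
binary_to_pipelines = {"billing.jar": ["billing-ci"], "reports.jar": ["reports-ci"]}
pipeline_to_repos = {"billing-ci": ["git/billing"], "reports-ci": ["git/reports"]}

def repos_affected_by(library):
    """Resolve every repository that ships the given library."""
    repos = set()
    for binary in library_to_binaries.get(library, []):
        for pipeline in binary_to_pipelines.get(binary, []):
            repos.update(pipeline_to_repos.get(pipeline, []))
    return sorted(repos)

print(repos_affected_by("log4j-2.14"))  # ['git/billing', 'git/reports']
```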
22. Expedite pipelines
Separation of Duties: a regulation/audit requirement that slows 0-day patching
Tightly controlled usage
Automated checks
Single commit with limited churn
Additional approvers for quick turnaround
Image courtesy of SpaceX
23. Breadth of change
Fix impacting many systems at once
Hundreds of concurrent pipelines
Can your build & deploy tool auto-scale?
Can your approval process scale?
How fast can you rebuild a substantial portion of your IT systems?
29. App Platform shift
Chrome: 1 month support; patched after 14 days
Node.JS: 30 months (LTS), 6 months (Current); patched every 25 days
Go: 6 months (two major releases supported); patched every 26 days
MongoDB: 30 months; patched every 5 weeks
.NET: 3 years (LTS), 18 months (Current); patched every 6 weeks
Java: 3 years (LTS), 6 months (non-LTS); patched every 12 weeks
30. Base images
vmdk, VHD, VDI, OVA, …
AMI, VHD
Docker, OCI, ACI, …
Layer diagram: Application / Run-time / OS libraries / Base image
31. Security SLA
Mean Time to Patch:
a single component
multiple components at once!
in Production
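Mean Time to Patch can be computed as the average gap between an upstream fix being published and the patch reaching production. A minimal sketch with invented dates and component names:

```python
# Sketch: Mean Time to Patch as the average number of days between a
# fix being published upstream and it being deployed to production.
# The events below are invented examples.

from datetime import date

patch_events = [
    # (component, upstream fix published, patched in production)
    ("log4j",   date(2021, 12, 10), date(2021, 12, 13)),
    ("openssl", date(2022, 3, 15),  date(2022, 3, 29)),
]

def mean_time_to_patch(events):
    gaps = [(deployed - published).days for _, published, deployed in events]
    return sum(gaps) / len(gaps)

print(mean_time_to_patch(patch_events))  # 8.5
```

This is the single-component metric; the slide's point is that the SLA must also hold when many components need patching at once.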
33. Technical Debt
«describes the consequences of software development actions that intentionally or unintentionally prioritize client value and/or project constraints, such as delivery deadlines, over more technical implementation and design considerations.»
Holvitie J., Licorish S.A., et al. - Technical debt and agile software development practices and processes - Information and Software Technology, iss. 96 (2018) p. 142
Image by ThoBel-0043
34. Technical Inflation
Unintended reduction in the value of a software product over time, independent of source code changes.
Depreciation does not capture two elements: unintentionality, and that value can be restored.
Image source: Max Pixel
35. 1974 - Continuing Change law
«A[n E-type] system must be continually adapted or it becomes progressively less satisfactory.»
Image source: WikiMedia
36. Restoring Value
At most two platform versions
Zero-(security-)issues policy
Expedite pipelines
Image by Marek Ślusarczyk
43. References (2/4)
https://heartbleed.com/
Why Every Business Is a Software Business - Watts S. Humphrey, InformIT, Feb 22, 2002: http://www.informit.com/articles/article.aspx?p=25491
https://en.wikipedia.org/wiki/Watts_Humphrey
https://www.sonatype.com/resources/state-of-the-software-supply-chain-2021
https://www.shopify.com/enterprise/global-ecommerce-statistics
https://blog.cloudflare.com/popular-domains-year-in-review-2021/
https://radar.cloudflare.com/year-in-review-2021
https://snyk.io/blog/net-open-source-security-insights/
https://www.contrastsecurity.com/the-state-of-the-oss-report-2021
https://octoverse.github.com/static/github-octoverse-2020-security-report.pdf
Good evening everyone, I am GV,
and I will talk to you about the impact of security on DevOps.
First of all, let us thank the sponsors of this event for their contributions.
The theme I want to explore with you this evening can be summed up briefly: the pressure on IT and on software development has increased dramatically in just a few years,
both in breadth and in depth, especially around security.
This pressure will force us, if it has not already, to change several processes, not least the way we handle technical planning and financial planning, that is, the budget.
To communicate better, both with managers and technology leadership and with the business divisions that rely on IT more every day, I suggest introducing a new expression: let us start talking about Technical Inflation, distinct from the now classic Technical Debt.
What is new is the loss of value, the depreciation, that happens to software automatically, regardless of any evolution or maintenance work.
Focusing on Technical Inflation, I will necessarily leave out many other topics, both technical and managerial, concerning the relationship between security and DevOps.
So I will only mention a few types of tools useful for containing Technical Inflation: SCA and SAST, acronyms we will look at shortly.
Static Application Security Testing
Dynamic Application Security Testing
Interactive Application Security Testing
IAST places an agent within an application and performs all its analysis in the app in real time, anywhere in the development process: the IDE, the continuous integration environment, QA, or even production.
After introducing tonight's theme, a brief biographical note to frame my professional experience.
I work as a Principal Engineer in the Irish offices of Unum, where we are about 200 people, all in the IT organization. Unum is a US insurance company, a Fortune 500, with $12 billion in revenue and 10,000+ employees (1,000+ in IT).
Previously I worked for several companies in Italy and abroad, both large and mid-sized. Some of you may remember me from my long years in Microsoft Italy consulting, or as a Microsoft MVP.
To reach me about this and other DevOps topics, feel free to use Twitter (giulio_vian) or email me directly.
Tonight's presentation is organized in three parts:
how security intersects DevOps processes, Continuous Delivery in particular;
how much the pressure has grown and why it is crucial to address it properly;
how to explain the phenomenon and change the planning phase.
Often the relationship between DevOps and security is not the best:
one entrance gets blocked while all the others are neglected.
Before getting to the heart of the matter, we should agree on a definition of DevOps.
For many it translates to Continuous Integration and Continuous Delivery,
but, in my opinion and that of many others, this is a reductive view,
because it solves the developers' problems while leaving the headaches to everyone else.
Source: https://devops.com/12-factor-app-build-release-run/
So now you get the methodology sermon, but I will try to keep it short.
DevOps means applying the same Lean principles adopted by manufacturing (the automotive industry in particular), but applied to the technology value stream, which includes assembling hardware and software into a product or service.
In this view, automation is only a means of streamlining IT processes; the goal is to transform the whole organization so that it focuses on the value chain in an organic and complete way, leaving aside narrow, silo-minded views.
This unfolds along three dimensions:
Flow: mainly automation, but also removing every source of slowdown.
Feedback: continuously verifying the return on investment, from monitoring technical performance, to retiring useless features, to improving the user experience.
Continual learning and experimentation: every link in the chain spends energy improving its direct and indirect contribution, knowing full well that standing still means falling behind.
Now that we have clarified the DevOps vision, let us move on to security.
Think of an ideal case…
…Mr Guilio (the way people spell my name 80% of the time, fully convinced of it).
As we were saying, Mr Guilio has reached the highest peak: his application, SparagnaSchei, has no known bugs of any kind left.
The sysadmin, Giuseppi, has done an excellent job: the infrastructure he built withstands every known attack.
Fantastic, the company wants to reward these exceptional workers…
…too bad that the very next day a new vulnerability appears.
There are vulnerabilities of every kind.
BLAH
But Guilio cares mostly about the first kind.
How did he find out about it?
Doctor Georgia from the Security team received the notice by mail and forwarded it to Guilio.
Loosely related to Security Orchestration, Automation and Response (SOAR). How do we run the process today?
Publication of a CVE triggers the Security team in the organization; the Security team instructs the Dev teams to
fix application code as needed;
the code must be deployed to Production under the supervision of the Release Management team.
A Release Management role may be required by SOX, Basel, and similar regulations.
Deploy where? Production! We don't care about the rest (although…), so we need to…
There is a huge amount of code under version control, with thousands upon thousands of branches; how does Guilio find the sources to modify?
There are different conventions!
Some use master to represent the version in production, some call it main or mainline, others create a release branch, others mark it with a tag matching the SemVer version of the binaries, and there are yet more variants.
There are even those who do nothing and rely on the CI/CD tool to trace the production version backwards!
How will Guilio find his way in this chaos?
The build pipelines offer some help, because they use an SCA tool (we will come back to this), but unfortunately there are limits.
The CalendarioPerpetuo application has not been updated in three years! Guilio tries to launch the build, but the new SDK version throws errors and the code will not even recompile! And meanwhile the sand keeps flowing… tick tock, tick tock…
Here we discuss how to identify:
1. the code that needs to be patched,
2. the pipeline that releases that code to Production,
and some issues one may face:
If more than one branch can reach production, which one do you choose?
How do you match the exact version of the code?
Does Software Composition Analysis kick in only through pipelines? Is it triggered by the deploy pipeline?
The deploy pipeline hasn't been used in months and no longer works (e.g. a token expired, or a suitable agent is no longer available).
Let us open a parenthesis to explain what on earth an SCA tool is.
Unfortunately this requires a second sermon; at least it is technical material rather than methodology.
Take SparagnaSchei.ReteMondiałe (the web version). Like all applications of good family, it is certainly not written in x86 assembly language (it would also have problems on the new Macs); no, it is written in Java, which is so portable.
So we have the application code, which uses libraries (including the infamous Log4J) and requires a JRE (Java Runtime Environment). The same scheme applies to SparagnaSchei.Pomo (the iOS version), which instead uses Xamarin and .NET Core (.NET 5 onwards), and to SparagnaSchei.Mòbiłe, the Android version.
Uuuuh.
ReteMondiałe runs in a Docker container, so it must be rebuilt every 3 months to refresh the JRE inside the Docker image. Pomo and Mòbiłe are self-contained and must be updated every 40 days with each new .NET release.
You stop and think: what is affected by these vulnerabilities? Which portion am I responsible for?
Thus, you analyse and find three (four) layers. BLAH BLAH
…and the next question is…
But Guilio does not lose heart; he knows how to find vulnerabilities in the code and in the libraries it uses: SAST and SCA!
Static Application Security Testing (SAST) analyzes the sources for errors such as missing input validation or SQL injection.
Software Composition Analysis (SCA) analyzes binaries or sources to identify the library versions in use and checks a continuously updated database for known vulnerabilities. The SCA tools that validate binaries can also identify runtime or operating system components with known vulnerabilities.
Guilio has no budget, so he will use an open source or freemium version for his search.
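What an SCA check boils down to can be sketched in a few lines: compare the declared dependencies of a project against a database of known-vulnerable versions. Both data sets below are invented examples (the CVE identifiers are real, the pairing is illustrative); real tools query a continuously updated feed.

```python
# Minimal sketch of what an SCA tool does: match declared dependency
# versions against a database of known-vulnerable versions.
# The data below is an invented example.

known_vulnerable = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",
    ("commons-text", "1.9"): "CVE-2022-42889",
}

def scan(dependencies):
    """Return (name, version, CVE) for every vulnerable dependency."""
    findings = []
    for name, version in dependencies:
        cve = known_vulnerable.get((name, version))
        if cve:
            findings.append((name, version, cve))
    return findings

deps = [("log4j-core", "2.14.1"), ("guava", "31.0")]
print(scan(deps))  # [('log4j-core', '2.14.1', 'CVE-2021-44228')]
```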
And let us close the parenthesis.
…are there tools to support me and detect vulnerabilities in the code I deliver? Yes, there are. BLAH
Indeed Guilio, greatly motivated by the galactic mega-director's kind words about his personal future, manages to identify all the applications and components to update.
Some cases are complicated: parent POM files, Directory.Build.props, Directory.Build.targets, but It. Can. Be. Done!
The job is very laborious: even when using the same platform, teams use different conventions to organize their code. Some drop the build script at the top, some insist on having an src folder. The same team is not consistent over time and does not bother re-harmonizing the code. Automating the changes is a thankless task; let it go, Guilio thinks, there will never be another Log4J anyway.
What do you say, will he be right?
The vulnerability could be a bad code pattern, use of an API, or a vulnerable dependency; in any case we need to find the impacted code.
We must scan all repositories that contain production code. Non-production repositories should be included in the search but listed separately to reduce noise.
Some patching can easily be automated, in particular for library dependencies listed in a project file (e.g. package.json, pom.xml, .csproj, …).
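Such an automated patch can be sketched as a version bump in a package.json-style manifest. This is only an illustration; real tools (Dependabot and Renovate, for instance) handle ranges, lockfiles, and tests far more carefully, and the package name used here is an example.

```python
# Sketch: bump a pinned dependency in a package.json-style manifest.
# The manifest content and package name are invented examples.

import json

def bump_dependency(manifest_text, name, fixed_version):
    """Rewrite one dependency pin and return the new manifest text."""
    manifest = json.loads(manifest_text)
    deps = manifest.get("dependencies", {})
    if name in deps:
        deps[name] = fixed_version
    return json.dumps(manifest, indent=2)

before = '{"dependencies": {"log4js": "6.3.0"}}'
print(bump_dependency(before, "log4js", "6.4.0"))
```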
It is worth mentioning that Guilio's situation does not apply to everyone: if you do not have many repos, many apps, and many pipelines, as some lucky people do not, it is easy to handle the situation with a manual approach. If you have an uber-pipeline that deploys everything, you do not need anything fancy.
Sadly, this is a rare scenario in the modern landscape: your organization may have a lot of legacy, or be a big IT shop with dozens or hundreds of teams, or run hundreds or thousands of micro-services.
Guilio is still lucky; if you try to put yourself in my shoes, you will understand how hard it is to handle Guilio's scenario with completely manual management.
Let us look at some ideas for managing at scale.
Current tooling may offer some information, but a well-rounded process needs a lot of cross-reference data.
Dependency management is a weak spot in general; SCA (Software Composition Analysis) can identify vulnerabilities in libraries.
Use of an API may be caught by security scans.
An artifact management tool can track the source (build) of binaries, if properly used.
Pipelines know which repositories they use; what we need here is the ability to call a REST API that tells us the dependency.
If you can use such tools, great. Maybe you need to follow a few conventions and write some query tools.
In the worst scenario, you have to build and maintain your own database.
A Release Management role may be required by SOX, Basel, and similar regulations.
But you need speed when it is a 0-day exploit.
For example, you must be able to deploy a patch within hours of its release by a 3rd party (an OSS project or a vendor).
Fast-track (expedite) pipelines are not for normal usage: there should be some kind of trigger, like a new CVE, or a communication from the Security team or upper management.
As mentioned, on a small scale it is easy. Problems arise when you need to manage at scale: more than a few teams, repos, or pipelines.
Consider the scenario where a single vulnerability impacts most of your applications (which is probable when the majority of your code uses the same platform, e.g. Log4J impacting all Java-based applications).
You need to patch lots of repositories and deploy lots of components, each through a separate pipeline.
In such a scenario, you need new capabilities:
a global editing tool;
launching most pipelines in parallel (consider batching);
auto-scaling build resources to sustain the spike;
single approval for the whole set of pipeline runs.
These are not offered by current systems.
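The "launch most pipelines in parallel (consider batching)" capability can be sketched as a batched trigger loop. Here `trigger()` is a hypothetical stand-in for a CI server's run-pipeline REST call, and the batch size is an arbitrary example.

```python
# Sketch: trigger a large set of pipelines in fixed-size batches so the
# build infrastructure is not flooded all at once.
# trigger() is a placeholder for a real CI/CD REST call.

from concurrent.futures import ThreadPoolExecutor

def trigger(pipeline):
    # Placeholder for e.g. a POST to the CI server's run endpoint.
    return f"{pipeline}: queued"

def run_in_batches(pipelines, batch_size=50):
    results = []
    for start in range(0, len(pipelines), batch_size):
        batch = pipelines[start:start + batch_size]
        # map() preserves input order, so results line up with pipelines.
        with ThreadPoolExecutor(max_workers=len(batch)) as pool:
            results.extend(pool.map(trigger, batch))
    return results

print(run_in_batches([f"pipeline-{i}" for i in range(120)], batch_size=50)[:2])
```

Auto-scaling the build agents behind these triggers, and approving the whole set in one step, remain the parts current systems do not give you for free.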
What is the way to solve this burning problem?
…they are not decreasing, quite the opposite.
Increasing more than linearly!
…display the same pattern, even more.
Why?
.NET Core 3.1
3.1.0 December 3, 2019
3.1.22 December 14, 2021
got 22 patch releases in 3 years i.e. every 45 days/6 weeks
Node v14 (Fermium)
Active LTS start 2020-10-27 v14.15.0
2022-02-01, Version 14.19.0
total 19 releases in 463 days or 66 weeks i.e. every 24.4 days
JDK 11
Java SE 11 (LTS)September 25, 2018
11.0.13+8 (GA), October 19th 2021
total 13 releases(updates) in 1121 days i.e. every 12.3 weeks or 86.2 days
Go 1.16 released 2021-02-16
go1.16.14 (released 2022-02-10)total 14 updates in 360 days i.e. 26 days
go1 (released 2012-03-28) -> go1.17 (released 2021-08-16)
17 major releases in 3429 days or 490 weeks
MongoDB 5.0
5.0.0 - Jul 13, 2021
5.0.6 - January 31, 2022
total 6 releases in 203 days or 29 weeks i.e. every 4.8 weeks
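The cadence arithmetic in these notes (the inclusive day count between first and last release, divided by the number of releases) can be reproduced directly; the Node v14 figures are taken from the notes above.

```python
# Sketch reproducing the release-cadence arithmetic used in these notes:
# inclusive days between first and last release / number of releases.

from datetime import date

def patch_cadence_days(first, last, releases):
    # +1 counts both endpoint days, matching the notes' inclusive figures.
    return ((last - first).days + 1) / releases

# Node v14: 19 releases between 2020-10-27 and 2022-02-01 -> ~24.4 days
print(round(patch_cadence_days(date(2020, 10, 27), date(2022, 2, 1), 19), 1))  # 24.4
```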
A crucial metric that IT can discuss with the Business and translate into Cost and Risk.
What is the way to solve this burning problem?
Johannes Holvitie, Sherlock A. Licorish, Rodrigo O. Spínola, et al. - Technical debt and agile software development practices and processes: An industry practitioner survey - Information and Software Technology, issue 96 (2018) p.142
The consequences of actions that, with or without intent, give priority
«An E-program is written to perform some real-world activity; how it should behave is strongly linked to the environment in which it runs, and such a program needs to adapt to varying requirements and circumstances in that environment»
“On understanding laws, evolution, and conservation in the large-program life cycle” Lehman M.M. - Journal of Systems and Software Vol. 1, 1979–1980, pp. 213-221
Do not be passive like Guilio. Prepare: start adopting SAST and SCA tools, and include emergency and large-scale scenarios in your processes and automations. Adopt SAST and SCA
Free tier
No issues allowed!
Break the build!
Design an expedite process
Today
Bring these discussions to the level above; have the risks of neglecting security issues reconsidered.
Change how the budget is distributed.
Change budget allocation