Developing cloud native applications brings a lot of complexity for developers. Without tools to compensate for this complexity, you will not be very efficient. In addition, cloud developers often grow frustrated fighting these problems.
Before I push my code into Git, I want to test different things in my cloud environment, so a fast and easy round trip is essential. A classic round trip starts with writing or generating code, then creating a Docker image, deploying it into Kubernetes, and testing or remote debugging the application in Docker or in Kubernetes. Without some elementary tools, this round trip will be neither fast nor simple, and therefore error prone.
This lab will show you some open source tools that make your life as a developer easier. Short demos will demonstrate the simple handling of these tools. The starting point is the generation of a MicroProfile and a Spring Boot application. By using the different tools (e.g. Helm, shell completion, kubectl cp, Ksync, Stern, Kubefwd, Telepresence, …) on these applications, the complete round trip will be shown. Most of these tools can also be used with other programming languages. Every tool works on its own, which makes it easy to switch between them.
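To make the round trip concrete, here is a sketch of how the tools mentioned above might be chained together. All image, chart, namespace and pod names below are made up for illustration; each command stands alone, so you can adopt tools one at a time:

```shell
# Build and publish the image (names are illustrative)
docker build -t registry.example.com/demo/hello:dev .
docker push registry.example.com/demo/hello:dev

# Deploy to Kubernetes via a Helm chart
helm upgrade --install hello ./chart

# Copy a rebuilt artifact straight into a running pod instead of rebuilding
kubectl cp target/app.jar hello-pod:/app/app.jar

# Tail the logs of all pods whose name matches "hello"
stern hello

# Forward all services in the namespace to localhost for local testing
kubefwd svc -n demo

# Swap the in-cluster deployment for a locally running, debuggable process
telepresence --swap-deployment hello
```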
Finally, you will get an evaluation of these tools and an outlook on tools that are more focused on larger developer teams.
Your developers just walked into your cube and said: "Here's the new app, I built it with Docker, and it's ready to go live." What do you do next? In this session, we'll talk about what containers are and what they are not. And we'll step through a series of considerations that need to be examined when deploying containerized workloads - VMs or Container? Bare Metal or Cloud? What about capacity planning? Security? Disaster Recovery? How do I even get started?
Docker Online Meetup: Announcing Docker CE + EE - Docker, Inc.
Docker Community Edition (CE) and Enterprise Edition (EE) are the best expressions of the Docker Platform to date. Whether you’re a developer, an ops team member or an enterprise IT team member, and no matter the infrastructure, Docker CE and EE give you a way to install, upgrade and maintain Docker with the support and assurances required for your particular workload.
Both Docker CE and EE are available on a wide range of popular operating systems (including Windows Server 2016) and cloud infrastructure. Developers and DevOps teams have the freedom to run Docker on their favorite infrastructure without risk of lock-in.
Michael Friis will give an overview of both editions and highlight the big enhancements to the lifecycle, maintainability and upgradability of Docker.
Production sec ops with Kubernetes in Docker - Docker, Inc.
In this talk, Scott Coulton will walk through how to build a container-as-a-service platform with Docker EE. Starting from scratch, he will help you figure out which orchestrator to choose by deep diving into the technical differences between Swarm and Kubernetes on the EE platform, and cover some of the practical considerations that could influence your decision. He will also share various automation solutions to deploy your cluster into production. Once the cluster is up and running, Scott will delve into sec ops and discuss security best practices, including signing images in DTR (Docker Trusted Registry) and CVE scanning to provide a secure supply chain into production. You’ll leave this talk with the knowledge needed to build your own container platform in production. And did I mention it will all be done live, step-by-step?
Considerations for operating Docker at scale - Docker, Inc.
"Scale" happens along 3 different aspects: (1) applications and their services scale up and down leading to (2) the infrastructure scaling up to meet the needs of the applications, and finally (3) sites scale across multiple locations, including movement to public cloud. In this session, we will talk about how Docker EE scales along all three of these dimensions to give you a consistent platform for running your applications:
1. At the application level: how do you manage application state & health along with resource and security constraints to scale containers up and down in a controlled fashion?
2. The infrastructure level: as your application estate grows on the Docker EE platform, you will need to scale across more nodes. How do you automate the provisioning of these new nodes, and how do you integrate the Docker EE platform layer with your existing infrastructure systems and tools?
3. Finally, we'll talk about distributed scale: how do you take what works for applications in one data center and spread it across multiple sites, in an integrated fashion so you can operate seamlessly?
Efficient Parallel Testing with Docker by Laura Frank - Docker, Inc.
Fast and efficient software testing is easy with Docker. We often use containers to maintain parity across development, testing, and production environments, but we can also use containerization to significantly reduce the time needed for testing by spinning up multiple instances of fully isolated testing environments and executing tests in parallel. This strategy also helps you maximize the utilization of infrastructure resources. The enhanced toolset provided by Docker makes this process simple and unobtrusive, and you’ll see how Docker Engine, Registry, Machine, and Compose can work together to make your tests fast.
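One way to get those fully isolated environments is Compose project names: each project name gets its own networks and containers, so several copies of the same environment can run side by side. Below is a minimal sketch; the service names, test image and test script are assumptions for illustration:

```yaml
# docker-compose.yml - one isolated test environment per Compose project.
# "sut" (system under test) runs the test suite against its own private db.
version: "2"
services:
  db:
    image: postgres:9.6
  sut:
    build: .
    command: ./run-tests.sh   # hypothetical test entrypoint
    depends_on:
      - db
```

Each shard then runs under its own project name, e.g. `docker-compose -p shard1 run sut` and `docker-compose -p shard2 run sut` in parallel, without the two databases ever seeing each other.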
DCEU 18: Building Your Development Pipeline - Docker, Inc.
Oliver Pomeroy - Solution Engineer, Docker
Laura Frank Tacho - Director of Engineering, CloudBees
Enterprises often want to provide automation and standardisation on top of their container platform, using a pipeline to build and deploy their containerized applications. However, this opens up new challenges: Do I have to build a new CI/CD stack? Can I build my CI/CD pipeline with Kubernetes orchestration? What should my build agents look like? How do I integrate my pipeline into my enterprise container registry? In this session full of examples and how-tos, Olly and Laura will guide you through common situations and decisions related to your pipelines. We’ll cover building minimal images, scanning and signing images, and give examples of how to enforce compliance standards and best practices across your teams.
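The usual technique for the "minimal images" part is a multi-stage build, where the toolchain lives only in the first stage. A sketch, assuming a Go application and illustrative paths (multi-stage builds require Docker 17.05 or later):

```dockerfile
# Stage 1: build with the full toolchain (never shipped).
FROM golang:1.11 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: ship only the static binary on a small base.
FROM alpine:3.8
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```

The final image contains the binary and the Alpine base, which keeps the attack surface small and the pulls fast; it is also the image you would scan and sign before promotion.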
DCEU 18: Docker Containers in a Serverless World - Docker, Inc.
Jules Testard - Software Engineer, Docker Inc
Since the advent of AWS Lambda in 2014, the Function as a Service (FaaS) programming paradigm has gained a lot of traction in the cloud community. Since then, interest has grown among developers and enterprises in building their own open source solutions on top of Kubernetes, and a number of competing frameworks have been developed in this space. In this talk, we will look at three specific frameworks (OpenFaaS, Nuclio and Fn) and for each framework we will: show how to create, deploy, and invoke a function using that framework; show how Docker images and containers are used by each framework under the hood; and investigate how the frameworks leverage Knative to build, ship and run applications on Kubernetes.
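To give a feel for the "create a function" step, here is a minimal function body in the style of OpenFaaS's classic Python template, which wraps a module exposing a `handle` function and passes it the request body (the greeting itself is made up):

```python
# handler.py - minimal OpenFaaS-style Python function.
# The template's entrypoint calls handle() with the raw request body
# and returns whatever string we produce as the HTTP response.
def handle(req):
    """Echo the request back with a greeting."""
    return "Hello, " + req + "!"
```

Scaffolding, building the Docker image and deploying would then typically go through the framework's CLI (for OpenFaaS, `faas-cli new`, `faas-cli build` and `faas-cli deploy`); the image built under the hood is exactly the kind of artifact the talk inspects.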
Docker Platform Internals: Taking runtimes and image creation to the next lev... - Docker, Inc.
In this session, we'll go into details about the latest developments around some of the components behind the core features of the Docker Platform. We'll cover the containerd runtime that was built to serve as an underlying daemon for Docker and Kubernetes, and BuildKit, a toolkit that builds on containerd to provide next-generation capabilities for building software with the help of containers. You will learn about the architecture and design choices of these projects, for example, the power of containerd's rich client library and BuildKit's frontend model that allows introducing new build languages or Dockerfile features. You can discover how you can use these projects directly and how they are being integrated into the Docker Platform.
Kubernetes has been a key component for many companies to reduce technical debt in infrastructure by:
• Fostering the Adoption of Docker
• Simplifying Container Management
• Onboarding Developers On Infrastructure
• Unlocking Continuous Integration and Delivery
During this meetup we are going to discuss the following topics and share some best practices:
• What's new with Kubernetes 1.3
• Generate Cluster Configuration using CloudFormation
• Deploy Kubernetes Clusters on AWS
• Scaling the Cluster
• Integrating Ingress with Elastic Load Balancer
• Using Internal ELBs as Kubernetes Services
• Using EBS for persistent volumes
• Integrating Route53
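The "internal ELB" bullet above comes down to a single annotation on a `LoadBalancer` Service that the AWS cloud provider understands. A sketch with illustrative names (note the annotation value has varied across Kubernetes versions; around 1.3 it was a CIDR like `0.0.0.0/0`, later versions accept `"true"`):

```yaml
# Service fronted by an internal (not internet-facing) AWS ELB.
apiVersion: v1
kind: Service
metadata:
  name: backend            # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer       # cloud provider provisions the ELB
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```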
Containerizing Hardware Accelerated Applications - Docker, Inc.
Many applications allow you to use hardware such as GPUs and FPGAs for acceleration. Common examples include media processing and offloading highly parallel work to a GPU. Applications that use accelerators are resource heavy and have stacks spanning kernel and user space; accelerators often have their own requirements for operating system support and kernel versions. While it may not seem intuitive to containerize this type of application, the use of containers provides benefits such as reduced setup time from container reuse, reduction in dependency conflicts and dependency on a specific operating system, and easier updates.
In this session I show a media processing stack, making use of containers alongside a GPU. Specifically, I explain the kernel and user space divide of a hardware-accelerated transcode application using a device exposed to the container. This specific stack is an interesting case because of its dependency on hardware, use of a custom kernel and libraries, and operating system requirements. Our investigations have shown the use of containers has minimal performance overhead compared to running natively. Furthermore, we can quickly deploy on other machines with reduced configuration effort. There are some aspects of the application not suited to containerization, however. Since the application relies on a custom kernel, the use of containers does not necessarily increase portability. Improvement in this area would require rethinking how the applications are developed and distributed. Other areas of innovation include things such as Docker plugins to check for compatibility between the container software and host kernel.
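The "device exposed to the container" step can be done with plain Docker's `--device` flag; the device path, image name and transcode command below are illustrative (NVIDIA GPUs usually go through a dedicated runtime instead of a raw device passthrough):

```shell
# Pass the host's GPU render node through to the container so the
# hardware-accelerated transcoder inside can open it directly.
docker run --rm \
  --device /dev/dri/renderD128 \
  example/transcoder:latest ./transcode input.mp4 output.mp4
```

The container still needs user-space libraries compatible with the host's kernel driver, which is exactly the kernel/user-space divide the session examines.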
How to build your containerization strategy - Docker, Inc.
The Docker Enterprise Edition platform helps customers deploy and manage applications faster and it secures the application pipeline at a lower cost than traditional application delivery models. But it takes more than just great technology to achieve the desired results. The organization and culture of your enterprise directly impacts what you transform, how it’s done, and who does it. Success requires a strategy for how you will govern the Docker EE container platform, how to assess your application estate, what your delivery pipeline will look like, and how to ensure developers, operators, security teams and others play nicely together.
In this talk I will cover topics such as different types of workloads (legacy, microservices, FaaS, big data, ...), how your org chart can influence whether you deploy a CaaS (Containers as a Service) vs CLaaS (Clusters as a Service), how "shifting left" can determine if you can outsource, centralized vs distributed CI/CD and how containers play a role, transforming your pets into cattle, how giant whale balloons are used for onboarding, and a prescriptive and comprehensive methodology for successfully deploying Docker in your enterprise.
2016 - Continuously Delivering Microservices in Kubernetes using Jenkins - devopsdaysaustin
Presentation by Sandeep Parikh
In this talk, we will cover the basics of Kubernetes and show how to set up continuous delivery pipelines using Jenkins and Jenkins Workflow to go from code to deployment, without developers having to interact with the production deployment infrastructure. The goal is an end-to-end set of steps to automate deployment and delivery of an application composed of several microservices.
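Such a pipeline might look like the following scripted (Jenkins Workflow style) sketch; the registry, image and Deployment names are invented for illustration, and the real pipeline would repeat the pattern per microservice:

```groovy
// Jenkinsfile - build, push, then roll the new image out to Kubernetes.
node {
  stage('Build') {
    checkout scm
    sh "docker build -t registry.example.com/shop/cart:${env.BUILD_NUMBER} ."
  }
  stage('Push') {
    sh "docker push registry.example.com/shop/cart:${env.BUILD_NUMBER}"
  }
  stage('Deploy') {
    // Pointing the Deployment at the new tag triggers a rolling update,
    // so developers never touch the production infrastructure directly.
    sh "kubectl set image deployment/cart cart=registry.example.com/shop/cart:${env.BUILD_NUMBER}"
  }
}
```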
Singularity happened. Machines have risen. Skynet, the massively distributed AI, will soon expand its grip, and its army of robots will spread across the whole planet. Fortunately, a great ape army is standing in their way to put an end to this supremacy of steel.
The battle will be epic!
On one side we have Skynet, an automated, self-healing, hybrid Docker platform and Skynet application that has become indestructible. Playing the army of apes is a dedicated platform hosting Netflix's Simian Army and other flavors.
Will Skynet resist the relentless assaults of the great ape army?
Through this fantasy, we'll first cover all the technologies concretely used to set up the platforms and run the battle (LinuxKit, InfraKit, Swarm mode, and even Raspberry Pi devices, among others), then step back in the second part to address the underlying architectural stakes: reliability, scalability, edge computing, immutability, microservices, hybridization, and distributed storage. Most of all, you'll understand the importance of the synergy between the platforms and the app's design in achieving such a result.
Docker for .NET Developers - Michele Leroux Bustamante, Solliance - Docker, Inc.
Millions of developers use .NET to build high performance apps, from enterprises to hobbyists. Docker enables .NET developers to build containerized applications that can be deployed natively to Windows or Linux. Windows containers support applications that leverage the full .NET Framework, and with AspNetCore on Linux, developers can target either Linux-based Docker containers or Windows containers. In both cases you can develop your applications on Windows using your favorite .NET developer tools, then build Docker images and run them as containers on Windows Server or Linux machines. In this session, you will learn how to build or migrate full .NET Framework applications and deploy them as Windows containers. Then you will learn to build AspNetCore applications that can target either Windows or Linux containers, without any changes to your code. Topics covered include:
• Common considerations as you work locally
• Running local Docker containers and preserving environment settings
• Unit testing
• Choosing the right base image
• Working with IIS or Kestrel
• Composing multiple containers
• Working with a Docker Registry
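The "same code, either OS" claim rests on multi-stage Dockerfiles built from Microsoft's multi-platform base images. A sketch from the .NET Core 2.1 era; the project name `MyApp.dll` and tags are illustrative, and whether you get a Linux or Windows image depends on the daemon's mode:

```dockerfile
# Stage 1: restore and publish using the full SDK image.
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /out

# Stage 2: run on the lean ASP.NET Core runtime image.
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```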
In this talk, Phil and Michael will discuss how Docker was extended from x86 Linux to Windows, ARM and IBM’s z Systems mainframe and Power platforms. They will cover the work and architecture that make it possible to run Docker on different CPU architectures and operating systems: how porting Docker to a new OS differs from porting it to new hardware; what it means for a Docker image to be multi-arch (and how multi-arch images are built and maintained); and how Docker correctly deploys and schedules apps on heterogeneous swarms.
Phil and Michael will also demo some of the new features that let Docker Enterprise Edition manage swarms with both x86 Linux and Windows nodes as well as mainframes.
Docker for developers on Mac and Windows - Docker, Inc.
The whole Docker ecosystem exists today because of every single developer who found ways of using Docker to improve how they build software; whether streamlining production deployments, speeding up continuous integration systems or standing up an application on your laptop to hack on. In this talk we want to take a step back and look at where Docker sits today from the software developer's point of view, and then jump ahead and talk about where it might go in the future. In this talk, we’ll discuss:
* Making Docker an everyday part of developing software on the desktop, with Docker for Windows and Docker for Mac
* Docker Compose, and the future of describing applications as code
* How Docker provides the best tools for developing applications destined to run on any Kubernetes cluster
This session should be of interest to anyone who writes software; from people who want to hack on a few personal projects, to polyglot open source programmers and to professional developers working in tightly controlled environments. Everyone deserves a better developer experience.
This is a presentation on the current status of the 'Lean Cloud Starterkit'. It enables a lean cloud development process that can be implemented, including the infrastructure, in less than 3 days.
Pull, push, clone: it is all in your daily workflow. But what if this wasn't your source code or your container, but the state of your whole computer? Push your production database over to another machine? No problem!
This talk shows how you can use Dotmesh with LinuxKit to work with persistent data on your server as simply as you work with git. This workflow helps unleash new ways of working with servers and data. Immutable infrastructure from LinuxKit meets controlled and manageable data storage from Dotmesh. Combining these two open source projects allows new possibilities in how to manage your infrastructure.
DockerCon EU 2015: Placing a container on a train at 200mph - Docker, Inc.
Presented by Casper S. Jensen, Software Engineer, Uber
At Uber, we've been introducing Docker to give service owners more control over their environments. However, everything at Uber moves very fast, so we had to do it in such a way that Docker fit into the existing infrastructure and services could be migrated seamlessly to Docker without any service interruptions. In this talk we will discuss the challenges we faced while doing this, such as handling both non-Docker and Docker builds, image replication, integration with our deployment systems, and other challenges of deploying Docker at scale.
Presented at DockerCon 2018 EU, I go through using Docker and the Swarm orchestrator (a simpler Kubernetes) to stack different tools up from the base OS to a full-featured production server cluster. Also, sci-fi. The video for this deck will be at https://www.bretfisher.com/docker once it is posted.
Since last DockerCon, Kubernetes has been integrated into both the Desktop and Enterprise editions of the Docker Platform. In this deep dive session, we’ll showcase live demos and explore where Kubernetes fits in the architecture of both the Desktop and the Enterprise editions and which community tools make this integration possible. We’ll be covering topics ranging from hypervisor control, storage and networking all the way to the integration of a custom RBAC system, native Compose file support and providing a rich user interface for Kubernetes.
Docker Engine laid the foundation for a paradigm shift in software development with containers. Come and learn about the history of Docker Engine, its current architecture, the evolution of containerd and the future direction of Docker Engine. This talk will explore the following:
• Latest features of Docker Engine, including enhancements around Build
• Relationship between Docker Engine and containerd and the common building blocks across them, with a deep dive into the Engine architecture
• Differences between the Community and Enterprise Engines
• Areas of innovation and future direction
Packaging software for distribution on the edge - Docker, Inc.
At GE Digital, in the Asset Performance Management space, we need to supply an edge solution that covers both on-premise processing and data transmission to the cloud. Our current edge solutions are relatively simplistic, but as our technologies mature along with our customers’ needs, we’re finding that we need to adopt a more fog-computing-based approach where we put more intelligence, more computing power, at the edge. Along with this computing power, we need to better manage these systems remotely: to monitor progress and diagnose problems, we need a technology that lets us containerize, and thereby better manage, our software bundles and deployments.
We found that Docker on Windows seemed to fit the bill -- much of the technology in our edge solutions is Windows OS based (as our customers’ main platforms are Windows OS based). This presentation reviews the approach we took to repackage one of our main APM on-premise solutions using Windows containers. We’ve created a prototype which we’re looking forward to productizing, enabling remote management of thousands of deployments.
The presentation also contains a video demo of the running system. The on-prem APM system demonstrates the use of Docker networking along with Docker volumes and three Docker containers; we will discuss the construction of the images and the nuances of running the containers.
Docker Platform Internals: Taking runtimes and image creation to the next lev...Docker, Inc.
In this session, we'll go into details about the latest developments around some of the components behind the core features of the Docker Platform. We'll cover the containerd runtime that was built to serve as an underlying daemon for Docker and Kubernetes, and BuildKit, a toolkit that builds on containerd to provide next-generation capabilities for building software with the help of containers. You will learn about the architecture and design choices of these projects, for example, the power of containerd's rich client library and BuildKit's frontend model that allows introducing new build languages or Dockerfile features. You can discover how you can use these projects directly and how they are being integrated into the Docker Platform.
Kubernetes has been a key component for many companies to reduce technical debt in infrastructure by:
• Fostering the Adoption of Docker
• Simplifying Container Management
• Onboarding Developers On Infrastructure
• Unlocking Continuous Integration and Delivery
During this meetup we are going to discuss the following topics and share some best practices
• What's new with Kubernetes 1.3
• Generate Cluster Configuration using CloudFormation
• Deploy Kubernetes Clusters on AWS
• Scaling the Cluster
• Integrating Ingress with Elastic Load Balancer
• Using Internal ELB's as Kubernetes' Service
• Using EBS for persistent volumes
• Integrating Route53
Containerizing Hardware Accelerated ApplicationsDocker, Inc.
Many applications allow you to use hardware such as GPUs and FPGAs for acceleration. Common examples include media processing and offloading highly parallel work to a GPU. Applications that use accelerators are resource heavy and have stacks spanning kernel and user space; accelerators often have their own requirements for operating system support and kernel versions. While it may not seem intuitive to containerize this type of application, the use of containers provides benefits such as reduced setup time from container reuse, reduction in dependency conflicts and dependency on a specific operating system, and easier updates.
In this session I show a media processing stack, making use of containers alongside a GPU. Specifically, I explain the kernel and user space divide of a hardware-accelerated transcode application using a device exposed to the container. This specific stack is an interesting case because of its dependency on hardware, use of a custom kernel and libraries, and operating system requirements. Our investigations have shown the use of containers has minimal performance overhead compared to running natively. Furthermore, we can quickly deploy on other machines with reduced configuration effort. There are some aspects of the application not suited to containerization, however. Since the application relies on a custom kernel, the use of containers does not necessarily increase portability. Improvement in this area would require rethinking how the applications are developed and distributed. Other areas of innovation include things such as Docker plugins to check for compatibility between the container software and host kernel.
How to build your containerization strategyDocker, Inc.
The Docker Enterprise Edition platform helps customers deploy and manage applications faster and it secures the application pipeline at a lower cost than traditional application delivery models. But it takes more than just great technology to achieve the desired results. The organization and culture of your enterprise directly impacts what you transform, how it’s done, and who does it. Success requires a strategy for how you will govern the Docker EE container platform, how to assess your application estate, what your delivery pipeline will look like, and how to ensure developers, operators, security teams and others play nicely together.
In this talk I will cover topics such as different types of workloads (legacy, microservices, FaaS, big data, ...), how your org chart can influence whether you deploy a CaaS (Containers as a Service) vs CLaaS (Clusters as a Service), how "shifting left" can determine if you can outsource, centralized vs distributed CI/CD and how containers play a role, transforming your pets into cattle, how giant whale balloons are used for onboarding, and a prescriptive and comprehensive methodology for successfully deploying Docker in your enterprise.
2016 - Continuously Delivering Microservices in Kubernetes using Jenkinsdevopsdaysaustin
Presentation by Sandeep Parikh
In this talk, we will cover the basics of Kubernetes and show how to set up continuous delivery pipelines using Jenkins and Jenkins Workflow to go from code to deployment, without developers having to interact with the production deployment infrastructure. The goal is an end-to-end set of steps to automate deployment and delivery of an application composed of several microservices.
Singularity happened. Machines have risen. Skynet, the massively distributed AI will soon expand its grip and its army of robots will spread across the whole planet. Fortunately, a great ape army is standing across their way to put an end to this supremacy of steel.
The battle will be epic!
On one side we have Skynet, an automated, self-healing, hybrid Docker platform running the Skynet application, which has become indestructible. Playing the army of apes is a dedicated platform hosting Netflix's Simian Army and other flavors.
Will Skynet resist the relentless assaults of the great ape army?
Through this fantasy, we'll first cover the technologies concretely used to set up the platforms and run the battle (LinuxKit, InfraKit, swarm mode, and even Raspberry Pi devices, among others), then step back in the second part to address the architectural stakes involved: reliability, scalability, edge computing, immutability, microservices, hybridization, distributed storage. Most of all, you'll understand the importance of the synergy between the platforms and the app's design needed to achieve such a result.
Docker for .NET Developers - Michele Leroux Bustamante, Solliance - Docker, Inc.
Millions of developers use .NET to build high-performance apps, from enterprise teams to hobbyists. Docker enables .NET developers to build containerized applications that can be deployed natively to Windows or Linux. Windows containers support applications that leverage the full .NET Framework, and with ASP.NET Core on Linux, developers can target either Linux-based or Windows containers. In both cases you can develop your applications on Windows using your favorite .NET developer tools, then build Docker images and run them as containers on Windows Server or Linux machines. In this session, you will learn how to build or migrate full .NET Framework applications and deploy them as Windows containers. Then you will learn to build ASP.NET Core applications that can target either Windows or Linux containers without any changes to your code. Topics covered include: common considerations as you work locally; running local Docker containers and preserving environment settings; unit testing; choosing the right base image; working with IIS or Kestrel; composing multiple containers; and working with a Docker registry.
In this talk, Phil and Michael will talk about how Docker was extended from x86 Linux to Windows, ARM and IBM’s z Systems mainframe and Power platforms. They will cover the work and architecture that makes it possible to run Docker on different CPU architectures and operating systems; How porting Docker to a new OS is different from porting it to new hardware; What it means for a Docker image to be multi-arch (and how are multi-arch images built and maintained); How does Docker correctly deploy and schedule apps on heterogeneous swarms.
Phil and Michael will also demo some of the new features that let Docker Enterprise Edition manage swarms with both x86 Linux and Windows nodes as well as mainframes.
Docker for developers on Mac and Windows - Docker, Inc.
The whole Docker ecosystem exists today because of every single developer who found ways of using Docker to improve how they build software; whether streamlining production deployments, speeding up continuous integration systems or standing up an application on your laptop to hack on. In this talk we want to take a step back and look at where Docker sits today from the software developer's point of view - and then jump ahead and talk about where it might go in the future. In this talk, we'll discuss:
* Making Docker an everyday part of the developing software on the desktop, with Docker for Windows and Docker for Mac
* Docker Compose, and the future of describing applications as code
* How Docker provides the best tools for developing applications destined to run on any Kubernetes cluster
This session should be of interest to anyone who writes software; from people who want to hack on a few personal projects, to polyglot open source programmers and to professional developers working in tightly controlled environments. Everyone deserves a better developer experience.
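As one concrete instance of "describing applications as code" with Docker Compose, here is a minimal sketch; the service and image choices are invented for illustration, not taken from the talk:

```shell
# Write a minimal Compose file (service names are illustrative only).
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine   # a web tier
    ports:
      - "8080:80"
  cache:
    image: redis:alpine   # a cache tier the web tier could talk to
EOF

docker compose up -d      # start both services in the background
docker compose ps         # list the running services
docker compose down       # stop and remove them again
```

The point of the file-based description is that the same two-service stack starts identically on any developer's Mac or Windows desktop.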
This is a presentation on the current status of the 'Lean Cloud Starterkit'. It enables a lean cloud development process that can be implemented, including the infrastructure, in less than three days.
Pull, push, clone, it is all in your daily workflow. But what if this wasn't your source code or your container, but the state of your whole computer? Push your production database over to another machine? No problem!
This talk shows how you can use Dotmesh with LinuxKit to work with persistent data on your server as simply as you work with git. This workflow helps unleash new ways of working with servers and data. Immutable infrastructure from LinuxKit meets controlled and manageable data storage from Dotmesh. Combining these two open source projects allows new possibilities in how to manage your infrastructure.
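A rough sketch of the git-like data workflow the talk describes. The exact `dm` subcommands, flags, and remote names below are assumptions inferred from Dotmesh's git-inspired CLI, not verified syntax; consult `dm help` for the real interface:

```shell
# Assumed Dotmesh workflow (command names/flags are illustrative).
dm init myapp-data                        # create a "dot": a versioned data volume

# Run a database whose state lives inside that dot (hypothetical mount).
docker run -d -v myapp-data:/var/lib/postgresql/data postgres

dm commit -m "state before migration"     # snapshot the data, like a git commit
dm push hub                               # push the snapshot to a remote

# On another machine: pull the same database state back down.
dm clone hub myapp-data
```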
DockerCon EU 2015: Placing a container on a train at 200mph - Docker, Inc.
Presented by Casper S. Jensen, Software Engineer, Uber
At Uber, we've been introducing Docker to give service owners more control over their environments. However, everything at Uber moves very fast, so we had to do it in a way that let Docker fit into the existing infrastructure, so services could be migrated seamlessly to Docker without any service interruptions. In this talk we will discuss the challenges we faced while doing this, such as handling both non-Docker and Docker builds, image replication, integration with our deployment systems, and other challenges of deploying Docker at scale.
Presented at DockerCon 2018 EU, I go through using Docker and the Swarm orchestrator (a simpler Kubernetes) to stack different tools up from the base OS to a full-featured production server cluster. Also, sci-fi. The video for this deck will be posted at https://www.bretfisher.com/docker.
Since last DockerCon, Kubernetes has been integrated into both the Desktop and Enterprise editions of the Docker Platform. In this deep dive session, we’ll showcase live demos and explore where Kubernetes fits in the architecture of both the Desktop and the Enterprise editions and which community tools make this integration possible. We’ll be covering topics ranging from hypervisor control, storage and networking all the way to the integration of a custom RBAC system, native Compose file support and providing a rich user interface for Kubernetes.
Docker Engine laid the foundation for a paradigm shift in software development with containers. Come and learn about the history of Docker Engine, its current architecture, the evolution of containerd, and the future direction of Docker Engine. This talk will explore the following:
• Latest features of Docker Engine, including enhancements around Build
• Relationship between Docker Engine and containerd and the common building blocks across them, with a deep dive into the Engine architecture
• Differences between the Community and Enterprise Engines
• Areas of innovation and future direction
Packaging software for distribution on the edge - Docker, Inc.
At GE Digital, in the Asset Performance Management space, we need to supply an edge solution that covers both on-premise computing and data transmission to the cloud. Our current edge solutions are relatively simplistic, but as our technologies mature along with our customers' needs, we're finding that we need to take a more fog-computing-based approach, with more intelligence and more computing power at the edge. Along with this computing power, we need to better remotely manage these systems, to be able to monitor progress and diagnose problems; a technology that would let us containerize our software bundles and deployments would enable this management.
We found that Windows Docker seemed to fit the bill: many of the technologies in our edge solutions are Windows based (as our customers' main platforms are Windows based). This presentation reviews the approach we took to repackage one of our main APM on-premise solutions using Windows Docker. We've created a prototype that we look forward to productizing, enabling remote management of thousands of deployments.
The presentation also contains a video demo of the running system. The on-prem APM system demonstrates the use of Docker networking along with Docker volumes and three Docker containers; we discuss the construction of the images and the nuances of running the containers.
[20200720] Cloud native development - Nelson Lin - HanLing Shen
There is no shortage today of development and CI/CD tools for cloud-native application development. But how do we bring the cloud-native concept, and cloud-native ways of thinking, to the leftmost side of the CI/CD pipeline?
During the development phase, tools such as Cloud Code can help you expedite iteration on source code and run and debug cloud-native applications easily and quickly, turning cloud-native development into a real-time process and reducing the gap between development and deployment.
Developer Experience Cloud Native - Become Efficient and Achieve Parity - Michael Hofmann
Efficient cloud development involves more than quickly deploying services to the cloud. Smooth development and debugging of services directly in the cloud also increases efficiency. Beyond that, the development environment should be as identical as possible to the production environment, as factor 10 of the twelve-factor app methodology, "Dev/prod parity", has long recommended.
This session presents a selection of open source tools that help a Java developer achieve the following goals: fast, synchronized deployment (Skaffold); development and debugging in a Kubernetes pod (Open Liberty with Ksync, Quarkus live coding); extending the Kubernetes perimeter for local development (Telepresence or Bridge to Kubernetes). The accompanying demos in this session illustrate how easy these tools are to use.
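A minimal sketch of how two of the tools named above are driven from the command line; the service name and port are placeholders, and the flags reflect common usage rather than this session's exact setup:

```shell
# Skaffold: watch the source tree, rebuild the image, redeploy to the
# cluster, and stream logs on every change (the inner dev loop).
skaffold dev

# Telepresence (v2-style CLI): intercept a cluster service so its traffic
# is routed to a process on your laptop for local debugging.
# "my-service" and the port mapping are hypothetical.
telepresence connect
telepresence intercept my-service --port 8080:http
```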
Docker Container As A Service
X11 Linux apps on Mac in a container.
Java development with STS or Eclipse in a container.
Docker UCP and swarm load balancing with Interlock.
Three years ago, Meetic chose to rebuild its backend architecture using microservices and an event-driven strategy. As we moved away from our old legacy application, testing features gradually became a pain, especially when those features rely on multiple changes across multiple components. Whatever the number of applications you manage, unit testing is easy, as is functional testing of a microservice: a good Gherkin framework and a set of Docker containers can do the job. The real challenge is end-to-end testing, even more so when a feature can involve up to 60 different components.
To solve that issue, Meetic is building a Kubernetes strategy around testing. To do this, we need to:
- Be able to generate a docker container for each pull-request on any component of the stack
- Be able to create a full testing environment in the simplest way
- Be able to launch automated test on this newly created environment
- Have a clean-up process to destroy testing environments after tests. To separate the various testing environments, we chose to use Kubernetes namespaces, each containing a variant of the Meetic stack. But when it comes to Kubernetes, managing multiple namespaces can be hard. YAML configuration files need to be shared in a way that lets each person or automated job access and modify them without impacting others.
This is why Meetic chose to develop its own tool to manage namespaces through a CLI, or through a REST API to which a friendly UI can be attached.
In this talk we will tell the story of how our CI/CD setup evolved to create a Docker container for each new pull request, and we will show you how to make end-to-end testing easier using Blackbeard, the Helm-inspired tool we developed to manage namespaces.
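The per-pull-request flow listed above can be sketched with plain kubectl; the namespace naming scheme, manifest path, and test entry point are hypothetical placeholders:

```shell
set -euo pipefail
PR=123                                   # pull request number from CI
NS="test-pr-${PR}"                       # one namespace per pull request

kubectl create namespace "${NS}"         # isolated testing environment
kubectl apply -n "${NS}" -f k8s/         # deploy this variant of the stack
./run-e2e-tests.sh --namespace "${NS}"   # hypothetical e2e test runner
kubectl delete namespace "${NS}"         # clean-up destroys everything in it
```

Deleting the namespace is what makes the clean-up step cheap: every object created inside it goes away with it.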
Yet Another Session about Docker and Containers - Pedro Sousa
"Yet Another Session about Docker and Containers" public presentation at TugaIT 2017.
Following the hot topics of Docker and containers, we will talk about the newest developments in the Docker world and Microsoft's container adoption.
GCP - Continuous Integration and Delivery into Kubernetes with GitHub, Travis... - Oleg Shalygin
Kubernetes provides an automated platform for the deployment, scaling, and operation of applications across a cluster of hosts. Complementing Kubernetes with a series of build scripts in conjunction with Travis-CI, GitHub, Artifactory, and Google Cloud Platform, we can take code from a merged pull request to a deployed environment with no manual intervention, on a highly scalable and robust infrastructure.
Accelerate your software development with Docker - Andrey Hristov
Docker is in all the news, and this talk presents the technology and shows you how to leverage it to build your applications according to the 12-factor application model.
Similar to Developer Experience Cloud Native - From Code Gen to Git Commit without a CI/CD Pipeline (20)
Service Specific AuthZ In The Cloud Infrastructure - Michael Hofmann
An application operated in production rarely gets by without authorization checks. Following the OWASP principle of "defense in depth", AuthZ checks should not happen only in the application code. An additional layer of authorization checking, ideally in the cloud infrastructure, is considered best practice. With a service mesh tool, application-specific declarative AuthZ checks can be performed in the sidecar. This session takes a closer look at the options Istio offers here. TLS/mTLS and authentication, as necessary prerequisites for AuthZ, are also presented in detail.
New Ways To Production - Stress-Free Evolution Of Your Cloud Applications - Michael Hofmann
Deploying new versions of your cloud applications to production in an orderly, stable way, free of stress and risk, should be the goal of every development team. If this is done with the right test strategies, without downtime, and fully automated, the basis for high-frequency releases is in place. A service mesh tool such as Istio provides the necessary support for various deployment strategies: canary, A/B testing (HTTP header routing), and blue/green (traffic mirroring). Combined with a progressive delivery operator such as Flagger, automation increases even further; hotfixes and hectic release rollbacks become a thing of the past. This session presents the different release and test strategies in detail and shows how Istio and Flagger can be integrated and what benefits result.
Every microservice in production must be secured. Ensuring this takes significantly more effort than in a monolithic system, due to the high number of services. If the services are then operated in a public cloud, neither communication within the cloud provider's infrastructure nor the connection over the Internet may remain unencrypted. In addition, corresponding authorization checks must take place in each individual service.
This session shows how easy and effortless it is to implement security measures with a service mesh tool like Istio. With a few small Istio rules, all communication in the service mesh is secured with mutual TLS (mTLS). Basic checks of service-to-service communication and end-user authorization using JWT can also be delegated to Istio. The extended authorization checks within a Java service are illustrated using the MicroProfile specifications.
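One of the "few small Istio rules" mentioned above can look like the following mesh-wide PeerAuthentication policy, which enforces mutual TLS for all workloads in the mesh (applied in Istio's root namespace, here assumed to be `istio-system`):

```shell
# Enforce strict mTLS for every workload in the mesh.
kubectl apply -f - <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the mesh root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT            # reject any plaintext service-to-service traffic
EOF
```

With `STRICT` mode, sidecars only accept mTLS traffic, so unencrypted service-to-service calls are rejected without any application code changes.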
Service Mesh vs. Frameworks: Where to put the resilience? - Michael Hofmann
Distributed systems should definitely no longer be developed and operated without resilience. The responsible developer or architect must first consider which resilience patterns are necessary, and then how these patterns should be implemented in the individual services. One can distinguish between two basic alternatives: on the one hand, implementation with classic resilience frameworks such as Resilience4j, Failsafe, or MicroProfile Fault Tolerance; on the other, establishing resilience with a service mesh tool like Istio.
In this session, after a brief introduction to Istio, the two basic alternatives are compared. Their respective advantages and disadvantages are listed and weighed in a final evaluation. Istio's additional capabilities for explicitly testing resilience will also be introduced.
Service Mesh vs. Frameworks: Where to put the resilience? - Michael Hofmann
Distributed systems today should definitely no longer be developed and operated without resilience. The responsible developer or architect must first consider which resilience patterns are necessary, and then how these patterns should be implemented in the individual services. Two basic alternatives can be distinguished: implementation with classic resilience frameworks such as Resilience4j, Failsafe, or MicroProfile Fault Tolerance; or, as is now also possible, establishing resilience with a service mesh tool such as Istio. In this session, after a short introduction to Istio, the two basic alternatives are compared. Their respective advantages and disadvantages are listed and weighed in a final evaluation. The session also shows what Istio offers for testing resilience.
Servicierung von Monolithen - Der Weg zu neuen Technologien bis hin zum Servi... - Michael Hofmann
Migrating monolithic applications to a service-based application landscape does not bring only advantages. Besides the necessary adoption of new system components such as OpenID Connect, or cloud technologies such as OpenShift, there are other challenges to master. Decomposing the monolith into microservices, and the communication relationships that arise between these services, creates a so-called service mesh. Depending on the number of services and communication paths, a complex web quickly emerges that must be kept under control. Istio is one of the tools that can be a great help in operating and managing the service mesh.
Service Mesh mit Istio und MicroProfile - eine harmonische Kombination? - Michael Hofmann
Developing a cloud-native application is only one side of the coin; the other side is the cloud environment in which the application is to run. As an architect, you have to make decisions that also depend on the runtime environment. Some aspects, such as configuration, resilience, health checks, metrics, request tracing, and service discovery, are strongly coupled to the cloud environment.
Istio, which can be run as an open platform on, for example, Kubernetes, provides these capabilities. MicroProfile, in turn, offers a set of specifications that can be helpful when implementing a cloud-native application. The session starts with a short introduction to Istio and MicroProfile and then shows how these two worlds can best be combined in a cloud-native application.
Service Mesh - kilometer 30 in a microservice marathon - Michael Hofmann
Distributed applications like microservices shift some of their complexities into the interaction of services. Such a service mesh, which can have hundreds of runtime instances, is very difficult to manage. You will be concerned with some of the following questions: Which service will be requested by which other services in which version and how often depending on the request content? How can you test the interaction and how can you replace single services with new ones?
These and other questions will be discussed in this session. Tools that make your life with a service mesh easier will also be introduced.
Service Mesh - Kilometer 30 im Microservices-Marathon - Michael Hofmann
Distributed applications such as microservices shift part of their complexity into the interaction between services. Such a service mesh, which can have hundreds of runtime instances or more, becomes very difficult to control. You have to deal with questions such as: Which service is called by which other service, in which version, for which request content, and how often? How can the interaction be tested, and how are individual services replaced by new ones?
These and other questions are examined in this session. Tools that make life with a service mesh easier are also presented.
API-Economy bei Financial Services – Kein Stein bleibt auf dem anderen - Michael Hofmann
As digitalization advances, projects in the API economy are becoming ever more important. Implementing such projects usually has enormous effects on the whole company, especially given that essentially every company has so-called legacy systems that must be integrated; hardly any company in financial services has the luxury of starting on a greenfield.
The shift from legacy systems, which tend to be monolithic, towards microservices brings further challenges for these projects. The far-reaching effects range from technical challenges connected with the realignment of the software architecture to consequences for operations and organizational change. Essentially, no stone in the company is left standing on another.
In this session we want to show, by example, which questions can arise and which alternative solutions must be discussed. We will look more closely at the organizational and technical problem areas connected with the changed software architecture. By the end of the session, participants should have a feel for where the challenges of such projects lie.
MicroProfile is an alliance of well-known open source projects and vendors whose goal is to optimize enterprise Java for cloud-native and microservice architectures, while guaranteeing portability of applications across the various MicroProfile runtimes. The existing feature set is demonstrated using concrete code examples. Finally, the planned MicroProfile backlog is discussed, and the intended alignment with Java EE 8 and Java EE 9 is outlined.
Microservices mit Java EE - am Beispiel von IBM Liberty - Michael Hofmann
Many companies currently expect a lot from the latest architecture trend: microservices. Among other things, they hope to get certain architecture problems under control; keyword: monolith. Development organizations focused on Java EE technologies ask themselves whether and how they can best implement microservices with their Java EE toolset. In turn, Java EE vendors are extending or changing their products to meet the microservices trend. Using IBM's WebSphere Liberty Profile server as an example, this talk aims to show which advantages and disadvantages the Java EE approach can bring. It covers not only technological aspects but also organizational issues; topics such as DevOps and continuous delivery are touched on as well. The talk is rounded off with pointers to well-known case studies, such as Netflix, to provide further food for thought.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... - Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... - Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Understanding Globus Data Transfers with NetSage - Globus
NetSage is an open, privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to that of the Globus users?
Cyaniclab : Software Development Agency Portfolio.pdf - Cyanic lab
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Globus Compute with IRI Workflows - GlobusWorld 2024 - Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. The team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how jobs are run on HPC systems. One of these routes looks at Globus Compute as a replacement for the current method of managing tasks, and we describe a brief proof of concept showing how Globus Compute could help schedule jobs and serve as a tool to connect compute at different facilities.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
A Comprehensive Look at Generative AI in Retail App Testing.pdf - kalichargn70th171
Traditional software testing methods are being challenged in retail, where customer expectations and technological advancements continually shape the landscape. Enter generative AI—a transformative subset of artificial intelligence technologies poised to revolutionize software testing.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Enhancing Project Management Efficiency_ Leveraging AI Tools like ChatGPT.pdf - Jay Das
With the advent of artificial intelligence (AI) tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I didn't get rich from it but it did have 63K downloads (powered possible tens of thousands of websites).
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...informapgpstrackings
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
4. Developer Experience Cloud Native
Why should we develop in a cloud-native way?
● delivering and deploying
● a new K8S environment brings new bugs
● dependencies on the local system
● local systems with different versions and behavior
6. Generate Docker Image
Build the image with the Docker daemon?
● Security
● Scalability
Dockerfile or a proprietary description?
Maven/Gradle plugin or standalone tool?
On the build server or in K8S?
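As one answer to the "Maven/Gradle plugin or standalone tool?" question, a minimal sketch using the Jib Maven plugin, which builds the image from the Maven build itself (the image names and registry here are placeholders, not from the slides):

```shell
# Build into the local Docker daemon via the Jib Maven plugin (no Dockerfile needed):
mvn compile jib:dockerBuild -Dimage=myapp:latest

# Or push straight to a registry without any Docker daemon at all,
# which sidesteps the security and scalability concerns above:
mvn compile jib:build -Dimage=registry.example.com/myapp:latest
```

The daemonless `jib:build` variant is what makes this approach attractive on a build server or inside K8S, where mounting the Docker socket is a security risk.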
13. Deploy to (local) K8S
Same situation as with the Maven/Gradle plugins?
Draft (Microsoft), Gitkube, Ksonnet, Metaparticle, Forge, ...
But: new kids on the block!
Skaffold, Helm, Kustomize, ...
14. Deploy to (local) K8S
Helm 3 (released 13.11.2019; Microsoft, Google, Bitnami)
"The package manager for Kubernetes"
Now without Tiller!
Manage K8S applications
Create your own Helm charts or use existing ones
(Helm Hub: https://hub.helm.sh)
CNCF Survey 2018: Helm usage at 68%
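A typical Helm 3 round trip looks like the sketch below (the repository and the release name "my-nginx" are examples, not part of the slides):

```shell
# Register a chart repository and install an existing chart:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx

# Scaffold your own chart and deploy it:
helm create myapp
helm install myapp ./myapp

# Inspect, roll back, or remove a release:
helm list
helm rollback my-nginx 1
helm uninstall my-nginx
```

Note the Helm 3 syntax `helm install RELEASE CHART`; because Tiller is gone, these commands talk directly to the K8S API using your kubeconfig credentials.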
15. Deploy to (local) K8S
Interaction with K8S:
a lot of shell commands are necessary: 'kubectl'-ing
(and a lot of 'yaml'-ing)
Simplify: bash completion
(https://github.com/scop/bash-completion)
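With bash-completion installed, kubectl can generate its own completion script; the following lines (from the standard kubectl setup) enable it and keep completion working for a short `k` alias:

```shell
# Load kubectl's completion into the current shell:
source <(kubectl completion bash)

# Make it permanent and add an alias that still completes:
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
```

Afterwards, `k get po <TAB>` completes pod names, which removes a lot of the 'kubectl'-ing friction.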
21. Test and Redeploy
Ksync (https://github.com/ksync/ksync)
similar functionality to kubectl cp, but:
● installs a DaemonSet in your K8S cluster
● works bi-directionally
● a local watch process keeps the local folder and the pod folder in sync
● new pods will also be synced (scale --replicas=2)
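A rough Ksync workflow, following its README (the label selector `app=myapp` and the container path are placeholders):

```shell
# One-time setup: installs the DaemonSet into the cluster
# and prepares the local configuration:
ksync init

# Start the local watch process that keeps folders in sync:
ksync watch &

# Map a local source folder to a folder inside all pods
# matching the label selector (new replicas are picked up too):
ksync create --selector=app=myapp "$(pwd)/src" /usr/src/app

# Show the current sync status:
ksync get
```

Compared to a one-shot `kubectl cp src/ mypod:/usr/src/app`, the watch process makes the edit-test loop continuous instead of manual.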
23. Debugging
Remote debugging with your IDE
Squash (https://github.com/solo-io/squash)
"Debug your microservice applications from your terminal or IDE while they run in Kubernetes"
debugging across multiple services
also for Istio debugging
IDE support so far: VS Code (IntelliJ and Eclipse)
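For the MicroProfile/Spring Boot applications used in this lab, plain JVM remote debugging also works without extra tooling: start the JVM with a JDWP agent and forward the debug port. A minimal sketch (the deployment name `myapp` is a placeholder):

```shell
# Inside the container: enable the JDWP agent, e.g. via the environment
# (Java 9+ requires the host part in the address, here "*"):
export JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"

# On your machine: forward the debug port from the pod to localhost,
# then attach your IDE's remote debugger to localhost:5005:
kubectl port-forward deployment/myapp 5005:5005
```

Tools like Squash automate exactly these steps and extend them across multiple services.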
25. Debugging
Telepresence
● debugging a service mesh
● substitutes a two-way network proxy for your K8S pod
● proxies data from the K8S environment (e.g., TCP connections, environment variables, volumes) to the local process
● the local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to K8S
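In practice, the swap described above is a single command (Telepresence v1 syntax; the deployment name, port, and run command are examples):

```shell
# Replace the "myapp" deployment in the cluster with a proxy and
# run the service locally in its place:
telepresence --swap-deployment myapp --expose 8080 \
  --run mvn spring-boot:run
```

The locally running JVM then sees the cluster's DNS names, environment variables, and volumes, while cluster traffic to `myapp:8080` reaches the local process, so you can debug it directly in your IDE.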
29. The Big Players
Kabanero™ (IBM)
supports development, architecture and operations:
the platform architect designs the platform,
the solution architect provides software stacks (PaaS),
the developer uses a predefined stack
IDE tools
30. Final Thoughts
steep learning curve (Docker, Kubernetes)
a new responsibility for the developer: deploying to Kubernetes
you have to deal with it, especially when solving environment errors
the tools shown are easy to handle and make the life of a developer easier
the selected tools should not interfere with each other
31. Final Thoughts
tools come and go; maybe you bet on the wrong horse
biggest challenge: finding the right tool for a specific purpose
there is no silver bullet: but stay in sync with your CI/CD pipeline; maybe you can use the same tools (e.g. Helm, ...)
as always: the documentation could be better
the big companies smell a deal ...