See an intro to the Habitat supervisor and packaging system, with a particular focus on what you can do with Habitat TODAY. See demos of building a Linux package on a Windows workstation, building a Windows package on a Windows workstation, and creating and running a Docker container image from a Habitat package.
You can see the demos at these YouTube links:
Running a Linux Package with Habitat Supervisor: https://www.youtube.com/watch?v=uw-mWBklmFg
Creating a Linux Package on a Windows Workstation: https://www.youtube.com/watch?v=uaIe2RuIC4Y
Creating a Windows Package on a Windows Workstation: https://www.youtube.com/watch?v=01s3t-S_ae4
Creating and running a Docker Image from a Habitat Package: https://www.youtube.com/watch?v=tILvSfGEO0I
Since many apps are not about just a single container, this talk discusses the ability and benefits of creating a hybrid Docker cluster spanning Linux and Windows operating systems and x86 and ARM architectures.
Moreover, the Docker nodes composing this cloud will be hosted across several providers (local DC, cloud vendors such as Azure or AWS), in order to address various scenarios (cloud migration, elasticity, ...).
Illustrated Intro to Containers & Kubernetes - Kaslin Fields
Interested in Containers and want to learn more? This talk will introduce you to the basics of why containers are important, how they work, and how Kubernetes is making containers the DevOps way of the future - through fun comic illustrations and analogies! You'll learn and retain the key points you'll need whether you're trying to convince your leadership that container adoption is right for the company, or talking to that person a few cubes down who just can't seem to stop talking about containers!
Unleash software architecture leveraging on docker - Adrien Blind
This talk first revisits key aspects of microservices architectures. It then shifts to Docker, explaining in this context the benefits of containers and especially the new orchestration features introduced with version 1.12.
DevOps at scale: what we did, what we learned at Societe Generale - Adrien Blind
This talk discusses Societe Generale's transformation journey to DevOps, and more broadly to continuous delivery principles, inside a large, traditional company. It emphasizes the importance of practices over tooling, a human-centric approach relying heavily on coaching, and our "framework" approach to scaling it up to the level of the whole information system.
It was initially delivered at the DevOps Rex conference with teammate Laurent Dussault, also a DevOps coach at Societe Generale.
Docker, cornerstone of cloud hybridation? [Cloud Expo Europe 2016] - Adrien Blind
This talk discusses the opportunity to leverage Docker to create a hybrid logical cloud, built simultaneously on top of traditional datacenters and public cloud vendors, and able to manage new kinds of containers (Windows, Linux on ARM). It also discusses the value of such a capacity for applications in a context of topology orchestration and microservice-oriented applications.
This presentation was made by Atul Malaviya, Principal Program Manager @Microsoft as part of Container Conference '18: www.containerconf.in
"Presents the best and easiest way to set up Kubernetes DevOps using Docker CI and Helm charts. From zero to DevOps in a matter of minutes. We would also like to show how you can build on the initial setup and add features like:
* Docker CI best practices
* Helm chart best practices
* Secure your K8s cluster, use RBAC, Helm provenance, etc.
* Use Container Insights, App Insights
* Other alternatives in the open source world like Brigade"
With release 17.05, Docker introduced multi-stage builds for anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain. This builder pattern helps anyone who just wants the runtime, configuration, and application in the final image, without compilers, debuggers, source code, build and test logs, etc.
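A minimal multi-stage Dockerfile sketch of that pattern (the Go application, paths, and image tags here are hypothetical examples, not from the talk): the first stage carries the full build toolchain, and only the compiled artifact is copied into the slim final image.

```dockerfile
# Stage 1: build environment with the full toolchain (discarded after the build)
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: minimal runtime image containing only the compiled binary
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Only the final stage ends up in the image that ships; earlier stages exist solely during `docker build`, which is what keeps compilers and build logs out of production.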
Build and automate your machine learning application with Docker and Jenkins - Knoldus Inc.
Modern web applications need agile systems because of the ever-changing requirements of clients and consumers. In machine learning, the challenge is to build a system that works well with the real world, and real-world scenarios change continuously. The system needs continuous learning and training from the real world.
The solution is DevOps for machine learning and deep learning: continuously retrain the model on new data, then validate and test the model's accuracy to make sure it works well with current real-world scenarios.
Download these slides to explore how to build and automate ML applications with Jenkins and Docker. We'll cover these topics in the webinar:
* Why is bringing machine learning code into production hard?
* What is Docker and what are its benefits?
* Create a machine learning application with Docker [Demo]
* What is Jenkins and its benefits?
* Automate a machine learning pipeline with Jenkins. [Demo]
This talk covers building distributed applications with concurrent processing scenarios, easily and simply, while ensuring high performance and fault tolerance. The concepts of Remoting, Cluster, Deployment, and Grid Processing will be explored.
This presentation was made as part of the Container Conference 2018 - www.containerconf.in
"Containers have gained a lot of attention ever since they came into existence. And why not? With the speed and ease they provide for running user applications, they are definitely the preferred solution for many real-world use cases.
OpenStack, on the other hand, is a cloud solution that has always evolved to support newer technologies. OpenStack has many projects around containers that try to cater to practical use cases. Some of the real-world use cases that OpenStack fulfils are:
OpenStack deployment can be very complex, and so is its upgrade. OpenStack-Helm, TripleO, and Kolla use Kubernetes and Docker to help users easily deploy and upgrade their clouds.
Containers lack the security of VMs, so many users want to run their applications in a secure environment. OpenStack Zun enables Clear Containers and Kata Containers, which provide the security of VMs with the speed of containers.
Other use cases include running Kubernetes clusters on OpenStack, CI/CD, and managing applications as microservices, which can be done with Magnum, Zuul, and Zun respectively. In this presentation, we will talk about the practical use cases where containers can help us and what OpenStack provides to fulfill those requirements."
‘Always be Optimising’ was a meetup for digital marketers and product people keen on getting more from their existing traffic. The slide deck holds all presentations from the meetup.
Reproducible Computational Pipelines with Docker and Nextflow - inside-BigData.com
Paolo Di Tommaso from the Center for Genomic Regulation presented this talk at the Switzerland HPC Conference.
"Research computational workflows consist of several pieces of third-party software and, because of their experimental nature, frequent changes and updates are commonly necessary, thus raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This presentation will introduce our experience deploying genomic pipelines with Docker containers at the Center for Genomic Regulation (CRG). I will discuss how we implemented it, the main issues we faced, and the pros and cons of using Docker in an HPC environment, including a benchmark of the impact of container technology on the performance of the executed applications."
Watch the video presentation: https://www.youtube.com/watch?v=Doo9H2-gBAk
See more talks in the Swiss Conference Video Gallery: http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
This work reflects on the presence of women in the legal profession and presents 17 interviews with pioneering women who served in the Ministério Público de Santa Catarina (MPSC).
1) Hercília Regina Lemke;
2) Rosa Maria Garcia;
3) Vera Lúcia Ferreira Copetti;
4) Lenir Roslindo Piffer;
5) Maria Auxiliadora Alves;
6) Heliete Marty Filomeno Leal;
7) Heloísa Crescenti Abdalla Freire;
8) Sonia Maria Demeda Groisman Piardi;
9) Márcia Aguiar Arend;
10) Kátia Dal Pizzol;
11) Rosemary Machado Silva;
12) Regina Kurschu;
13) Avone Chagas;
14) Havah Emília Piccinini de Araújo Mainhardt;
15) Walkyria Ruicir Danielski;
16) Gladys Afonso;
17) Eliana Volcato Nunes.
One might find it ironic that some of the world's fastest supercomputers -- vast clusters capable of trillions of floating point operations per second -- can take upwards of half an hour to reboot between jobs. While we often talk about the density advantages of containers, it's the opposite approach that we use in the High Performance Computing world! Here, we use exactly one system container per node, giving it unlimited access to all of the host's CPU, memory, disk, IO, and network. And yet we can still leverage the management characteristics of containers -- security, snapshots, live migration, and instant deployment -- to recycle each node between jobs. In this talk, we'll examine a reference architecture and some best practices around containers in HPC environments.
LXD is a container "hypervisor" and a new user experience for LXC.
The daemon exports a REST API both locally and if enabled, over the network.
The command-line tool is designed to be a very simple yet very powerful way to manage all your containers. It can connect to multiple container hosts, easily give you an overview of all the containers on your network, let you create more where you want them, and even move them around while they're running.
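A short CLI sketch of the multi-host workflow described above; it assumes LXD is installed on both hosts, and the remote address and container names are purely illustrative.

```shell
# Add a second LXD host as a remote (served over the network-enabled REST API)
lxc remote add host2 192.0.2.10

# Launch a container locally and one on the remote host
lxc launch ubuntu:22.04 web1
lxc launch ubuntu:22.04 host2:web2

# Get an overview of containers, locally and across the network
lxc list
lxc list host2:

# Move a container to the other host, even while it is running
lxc move web1 host2:web1
```

The same `host:name` addressing works across most `lxc` subcommands, which is what makes managing a fleet of container hosts from one terminal practical.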
http://walidshaari.blogspot.com/2016/12/devops-and-traditional-hpc.html
Cloud, web, and big data operations and DevOps mindsets are rapidly changing the Internet, IT, and enterprise services and applications. What can the HPC community learn from these technologies, processes, and culture, and from the IT unicorns (Google, Facebook, Twitter, LinkedIn, and Etsy) that are in the lead? What could be applied to tackle HPC operations challenges, such as efficiency and better use of resources? This talk presents a use case of automation and version control in an HPC enterprise data centre, as well as a proposal for using containers and new schedulers to drive better utilization and diversify data centre workloads: not just HPC but big data, interactive, batch, and short- and long-lived scientific jobs.
Taller de Color · Pac 1 - Paquita Ribas
Coursework (PAC) for the Taller de Color course in the Degree in Design and Digital Creation at the UOC. You can see the full project at www.racovermell.com. Thanks for the "like" ;-)
Docker "Global Mentor Week" is your opportunity to #learndocker: to learn how to build, ship, and run modern distributed applications with ease, thanks to the Docker platform.
Docker has developed a series of self-paced online labs that will be available during the meetup. Docker's meetup groups worldwide are hosting a series of complimentary events to help newcomers and intermediate users learn Docker.
We'll have hands-on labs for both beginners and intermediate users, targeting both developers and operations. There is something for everyone. Docker mentors will be on hand at this event to help you prepare and work through the self-paced materials. Bring your laptop, have fun, and learn Docker!
The pillars of DevOps are Culture, Automation, Measurement and Sharing. Docker is a rare tool that enables DevOps through all four pillars. These slides take a look at how Docker can affect each pillar in your organization through a Lean lens.
Some technologies are tools of the DevOps trade. Chef, Jenkins, Vagrant and Zookeeper are all tools that can be used for huge leverage and impact by the right people. Rarely, however, is there a technology that *enables* the practice of DevOps. The advent of the cloud and disposable infrastructure is one example. Docker is in this second, more rarified class.
The challenge of application distribution - Introduction to Docker (2014 dec ...) - Sébastien Portebois
Live recording with the demos: https://www.youtube.com/watch?v=0XRcmJEiZOM
Contents
- The application distribution challenge
- The current solutions
- Introduction to Docker, Containers, and the Matrix from Hell
- Why people care: Separation of Concerns
- Technical Discussion
- Ecosystem, momentum
- How to build Docker images
- How to make containers talk to each other, how to handle data persistence
- Demo 1: isolation
- Demo 2: real case - installing Go Math! Academy, tail -f containers, unit tests
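The image-building, networking, and persistence topics in the contents above boil down to a few core Docker commands; the image names, network name, and volume name below are illustrative, and the commands assume a running Docker daemon.

```shell
# Build an image from the Dockerfile in the current directory
docker build -t demo/app:1.0 .

# Containers on the same user-defined network can reach each other by name
docker network create appnet
docker run -d --name db --network appnet postgres:16
docker run -d --name web --network appnet demo/app:1.0

# A named volume persists data across container restarts and removals
docker volume create dbdata
docker run -d --name db2 -v dbdata:/var/lib/postgresql/data postgres:16
```

Here the `web` container can reach the database simply as `db`, and the data written under the mounted path survives even if the container itself is deleted.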
Docker Enables DevOps - Keep C.A.L.M.S. and Docker on ... - Boyd Hemphill
StackEngine has talked to over 100 businesses about the direction and needs of companies ranging from startups still in stealth mode to the Fortune 100. Combine these learnings with the features currently included in the StackEngine Controller, and a solution for production operations begins to come to light.
To think about production operations, we:
* Establish the characteristics of an ideal containerized application.
* Motivate those characteristics in terms of business benefit.
* Discuss the "final mile" problem of taking a containerized service and making it available to the operations team.
* Now that containers are running, how do we inventory what we have and the state that it is in?
* Demo Host, Container and Search pages as a means of inventory management.
* When our monitoring tells us something is wrong on a host, what do we do?
* How do services find each other?
* Discuss how StackEngine will provide service discovery.
* Provide a roadmap overview
Immutable Infrastructure: the new App Deployment - Axel Fontaine
App deployment and server setup are complex, error-prone and time-consuming. They require OS installers, package managers, configuration recipes, install and deployment scripts, server tuning, hardening and more. But... is this really necessary? Are we trapped in a mindset of doing things this way just because that's how they've always been done?
What if we could start over and radically simplify all this? What if, within seconds, and with a single command, we could wrap our application into the bare minimal machine required to run it? What if this machine could then be transported and run unchanged on our laptop and in the cloud? How do the various platforms and tools like AWS, Docker, Heroku and Boxfuse fit into this picture? What are their strengths and weaknesses? When should you use them?
This talk is for developers and architects wishing to radically improve and simplify how they deploy their applications. It takes Continuous Delivery to a level far beyond what you've seen today. Welcome to the Immutable Infrastructure generation. This is the new black.
Demystifying Containerization Principles for Data Scientists - Dr Ganesh Iyer
Demystifying Containerization Principles for Data Scientists: an introductory tutorial on how Docker can be used as a development environment for data science projects.
An introduction to Docker and Kubernetes. Learn how they help you build scalable and portable applications in the cloud. The talk introduces the basic concepts of Docker and its differences from virtualization, then explains the need for orchestration and includes some hands-on experiments with Docker.
Habitat is amazing technology - but a new technology alone will not deliver business value. A technology is good for your business when it allows you to deliver stronger value in higher quantities at a faster velocity. For a business, much of the value comes in the software applications it produces - the application itself is what makes it money. Come hear how Habitat’s focus on the application as the unit of automation allows you to focus on the application itself and not worry about where it will run. Habitat also allows you to easily change where and what your application runs on. Your application and business needs will change over time, which means you need to be able to change your application at a very high velocity without being locked into one type of infrastructure or one vendor. Come witness how Habitat allows your applications to be infrastructure and platform agnostic - you focus on the application, Habitat takes care of packaging your software, exporting it, and running it wherever you need. Learn how you can deliver stronger value in higher quantities at a faster velocity without sacrificing stability.
As the scale of compute usage increases automation becomes a requirement. However, automation is traditionally seen as a replacement of humans. So, how do you build automation with the humans in mind? This talk will cover Habitat, an open source project for application automation. We will cover the design decisions behind Habitat that enable humans to be more effective at their jobs, while removing them from the complexity of managing systems at scale.
https://rakutentechnologyconference2016.sched.org/event/8ChB/automation-for-the-humans
Rakuten Technology Conference 2016
http://tech.rakuten.co.jp/
Security with Kubernetes and Docker containers (June 19th, 2019) - Alexandre Roman
With the rise of Kubernetes in the small world of container orchestration engines, we are realizing just how vulnerable our software, containers, and platforms are. All the attention focused on Kubernetes and Docker images is leading to the discovery of security flaws, some more serious than others, at an ever-increasing pace.
Is your Kubernetes installation up to date? What is your update strategy? How can you guarantee the security of Docker images when new vulnerabilities appear every day?
Equifax, Tesla, Marriott: many organizations have faced major security incidents in recent years, resulting in large-scale leaks of sensitive data. A recent report showed that 10 of the most popular Docker images contain at least 30 vulnerabilities.
Building on Pivotal technologies, come discover how to secure Docker images with modern tools, and how to patch a K8s cluster with a fix for the runC vulnerability, without downtime.
Businesses are speeding up development and automating operations to remain competitive and to get large organizations to scale. Project-based monolithic application updates are replaced by product teams owning containerized microservices. This puts developers on call, responsible for pushing code to production, fixing it when it breaks, and managing the cost and security aspects of running their microservices. In this world, operations skill-sets are either embedded in the microservices development teams or devoted to building and operating API-driven platforms. The platform automates stress testing, canary-based deployment, and penetration testing, and enforces availability and security requirements. There are no meetings or tickets to file in the delivery process for updating a containerized microservice, which can happen many times a day and takes seconds to complete. The role of site reliability engineering moves from firefighting and fixing outages to building tools for finding problems and routing those problems to the right developers. SREs manage the incident lifecycle for customer-visible problems, and measure and publish availability metrics. This may sound futuristic, but Werner Vogels described it as "You build it, you run it" in 2006.
2019 05 - Exploring Container Offerings in Azure - Adam Stephensen
Containers are portable, make deployments fast and predictable and help devs and IT Pros to get along. In this talk I’ll show you how easy it is to start leveraging the benefits of containers, I’ll make sense of when to use the various container offerings in Azure and show how to avoid the common container pitfalls. (Spoiler – Mistake #1 is thinking you need Kubernetes! K.I.S.S.)
There are probably a lot of technologies you must learn in order to master the modern development and DevOps ecosystem, but Docker (and of course orchestration and the container ecosystem) is one of the most important skills to have nowadays.
https://www.gangboard.com/operating-system-training/docker-training
For seven years and over 400 issues, This Week in Rust has brought the pulse of the Rust community to Rustaceans' inboxes. As we know from the Stack Overflow Developer Survey, Rust is a beloved language. Each week, the long list of blog posts, requests for comments, calls for participation, and more in This Week in Rust shows WHY Rust is so loved. Rustaceans are not only dedicated to improving both the language itself and their skills with it, they are also committed to teaching others. Come to this talk for a look back through This Week in Rust's history, including the trends we have seen in community conversations around the language, stories from community members whose articles have been featured in the newsletters, and more. You will also get a behind-the-scenes look at how your editors bring the newsletter to you each week and learn how you can help too!
The Rust compiler's borrow checker is critical for ensuring safe Rust code. Even more critical, however, is how the borrow checker provides useful, automated guidance on how to write safe code when the check fails. Early in your Rust journey it may feel like you are fighting the borrow checker. Come to this talk to learn how you can transition from fighting the borrow checker to using its guidance to write safer and more powerful code at any experience level. Walk away not only understanding the what and the how of the borrow checker - but why it works the way it does - and why it is so critical to both the technical functionality and philosophy of Rust.
One of the most magical parts of Habitat is service discovery. Come to this talk to see the beauty of Habitat service discovery in action - how running services become aware of new services, self organize into defined topologies, and handle failure - all without any central orchestrator. Additionally, you will take a deep technical dive into WHY Habitat handles service discovery the way it does and the tradeoffs we made in constructing it. The better you understand the why of Habitat service discovery, the better you will be able to harness its power for your own organization!
Traits are one of the most powerful, but also most difficult parts of Rust to master. Come to this talk for a visual exploration of how traits work - from the most basic to the advanced. It is only through deep understanding of a concept like traits that you can fully harness their power in your every day code. You will walk away with a deep understanding of how traits work, why they work the way they do, and the how and why of using them.
In a world of buzzwords and trends it is easy to believe that a type of infrastructure is now dead or that a new type is the future. Beyond those with clairvoyance, no one can say with certainty what the future of infrastructure will bring. What we can know is that in today’s reality many applications must run on mixed infrastructure – bare metal for static, compute-heavy loads, virtual machines for persistent data stores, and ephemeral, short-lived containers for stateless portions of the application. Come to this talk to learn how to determine what parts of an application go on what type of infrastructure and how to coordinate the different types into a coherent and powerful experience.
The challenge of balancing the need for security with the need for usability is nothing new. Managing secrets when using configuration management tools like Chef is no exception to this rule. Add in the fact that there are multiple tools attempting to solve this problem – each with advantages and drawbacks – and the balance becomes even more precarious! This talk will provide a brief overview of secrets management and then take a deep, technical dive into one tool in particular – Chef Vault. You will walk away understanding how it works – what theories and technologies drive it – as well as how to use it and how to evaluate whether Chef Vault is the right tool for your particular need. You will also walk away knowing the limitations of Chef Vault – it is not the right tool for every secrets management situation – and how to evaluate whether you can safely work around those limits or need to look at another tool.
Running a successful open source project is just as much (if not more) of a social task as a technical one. The combination of technical and social skills required can seem very intimidating at first! The good news is that all these skills can be learned. Come to this talk and learn how to tell when a project is ready to be open sourced (hint: it’s more than throwing it on GitHub), how to review pull requests, how to kindly say “no” when a pull request isn’t the right direction for a project, and more.
Working technology for a political campaign involves the shortest timelines, tightest deadlines, and highest stakes you will likely ever encounter in a technology career. Come hear a tale of two political campaigns - a state measure campaign and a presidential campaign - and the application of both DevOps technologies and culture to move fast, pivot quickly, and hopefully win. One of the key challenges of politics - as well as DevOps in general - is harnessing automation without losing the critical human touch which moves hearts and changes minds. Learn how to find the line where too much automation (yes, there is such a thing) is counterproductive and you need to pull back to maintain a personal connection with voters, customers, employees, and more. You will also walk away knowing how to take the lessons and experience learned to future campaigns and projects - especially when your candidate, product, etc. does not end up winning. There is value - sometimes more value - in a loss as well as a win. Learn how to take what you can, iterate, and refine it for a future application.
So you’ve released an open source project to the world, people are using it …the hard part is done, right? No, far from it. Open sourcing a project is only a fraction of the effort that will go into it over time. Come to this talk to learn how to triage and determine levels of support for issues that come into your projects (open source users are customers!). Also learn how to handle when something goes wrong – whether it is with your own project or an upstream project two levels up from yours. Walk away knowing how to handle the hardest (and most rewarding) parts of open source governance.
When a developer comes into an existing code base the urge to refactor can be overwhelming. However, legacy code bases – even those created and maintained with the best intentions – often resemble living organisms more than modular machines. Rather than simply taking out a module and replacing it with a better one, we have to surgically slice intricately connected sections of a code base apart and precisely tie each one off to prevent it from bleeding into another section. We also have to operate with the fear that a change in one part of a system may adversely affect other parts or even kill a critical piece of our application or infrastructure. This talk will teach you how to recognize the difference between necessary and cosmetic refactoring and how to assess and evaluate the risks of each. You will also walk away knowing how to develop safeguards and bypasses to minimize potential harm before, during, and after a refactor, as well as how to recognize the point of no return when rolling back a refactoring is riskier than keeping it in production. Maintaining a code base means you must constantly juggle the wish to improve it through refactoring and the potential side effects – you will walk away from this talk with clear techniques to help you find and maintain this balance.
This is the final version of this talk, given at RubyConf 2013
Many of us approach regular expressions with a certain fear and trepidation, using them only when absolutely necessary. We can get by when we need to use them, but we hesitate to dive any deeper into their cryptic world. Ruby has so much more to offer us. This talk showcases the incredible power of Ruby and the Oniguruma regex library Ruby runs on. It takes you on a journey beneath the surface, exploring the beauty, elegance, and power of regular expressions. You will discover the flexible, dynamic, and eloquent ways to harness this beauty and power in your own code.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Welcome to WIPAC Monthly, the magazine brought to you by the LinkedIn group Water Industry Process Automation & Control.
In this month's edition, along with the monthly industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim in its adoption of Digital Transformation in the Water Industry.
Hierarchical Digital Twin of a Naval Power System – Kerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Final project report on grocery store management system.pdf – Kamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner – especially when your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, and JavaScript), and the MySQL relational database. This is a project with the objective of developing a basic website where a consumer is provided with a shopping cart, and of learning about the technologies used to develop such a website.
This document will discuss each of the underlying technologies used to create and implement an e-commerce website.
Understanding Inductive Bias in Machine Learning – SUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Water billing management system project report.pdf – Kamal Acharya
Our project, entitled “Water Billing Management System”, aims to generate water bills with all the charges and penalties. The manual system currently employed is extremely laborious and quite inadequate; it only makes the process more difficult.
The aim of our project is to develop a system that partially computerizes the work performed in the Water Board, such as generating the monthly water bill, recording the units of water consumed, and storing records of customers and of previously unpaid bills.
We used HTML/PHP as the front end and MySQL as the back end for developing our project. HTML is primarily a visual design environment: we create the application by designing the forms that make up the user interface, adding code to the forms and to the objects such as buttons and text boxes on them, and adding any required support code in additional modules.
MySQL is a free, open source database that facilitates effective management of databases by connecting them to the software. It is a stable, reliable, and powerful solution with advanced features and advantages such as data security.
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS... – ssuser7dcef0
Power plants release a large amount of water vapor into the atmosphere through the stack. The flue gas can be a potential source for obtaining much needed cooling water for a power plant. If a power plant could recover and reuse a portion of this moisture, it could reduce its total cooling water intake requirement. One of the most practical ways to recover water from flue gas is to use a condensing heat exchanger. The power plant could also recover latent heat due to condensation as well as sensible heat due to lowering the flue gas exit temperature. Additionally, harmful acids released from the stack can be reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated phenomenon, since heat and mass transfer of water vapor and various acids occur simultaneously in the presence of noncondensable gases such as nitrogen and oxygen. Design of a condenser depends on the knowledge and understanding of the heat and mass transfer processes. A computer program for numerical simulations of water (H2O) and sulfuric acid (H2SO4) condensation in a flue gas condensing heat exchanger was developed using MATLAB. Governing equations based on mass and energy balances for the system were derived to predict variables such as flue gas exit temperature, cooling water outlet temperature, mole fraction, and condensation rates of water and sulfuric acid vapors. The equations were solved using an iterative solution technique with calculations of heat and mass transfer coefficients and physical properties.
The Internet of Things (IoT) is a revolutionary concept that connects everyday objects and devices to the internet, enabling them to communicate, collect, and exchange data. Imagine a world where your refrigerator notifies you when you’re running low on groceries, or streetlights adjust their brightness based on traffic patterns – that’s the power of IoT. In essence, IoT transforms ordinary objects into smart, interconnected devices, creating a network of endless possibilities.
Here is a blog on the role of electrical and electronics engineers in IoT. Let's dig in!
For more such content visit: https://nttftrg.com/
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines – Christina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
18. Our Approach to Open Source in the Cloud
Integrate: embrace leading Open Source ecosystems and integrate Microsoft products with agility and consistency.
Release: release key Microsoft technologies into the Open Source domain to build a strong ecosystem.
Participate: Microsoft engineers participate in communities and contribute to key Open Source projects.
Enable: enable Linux and Open Source technology to be first-class citizens on Microsoft platforms.
Open Source Partners & Ecosystem: R Server, .NET Core, Roslyn, TypeScript, F#, PowerShell, autorest, PowerBI Visuals, Office UI Fabric, Tools plugins.
50. Running an Application with Habitat
[Diagram: an artifact flows from the Artifact Depot to a Service running under a Supervisor on bare metal, in containers, from AMIs, and in VMs.]
Without the need for a central command and control structure? I have news for you today
With Chef Habitat – a new open source project from Chef.
Habitat goes beyond infrastructure automation, beyond cloud automation, and into the world of application automation. We’ll go deeper into what that means in just a moment, but first…
Let’s cover who I am
Reason I was asked to speak here today, in the Open Source track…
I’d like to invite Ken Thompson of Microsoft to say a few words about Microsoft’s commitment to Open Source.
Thanks Ken! Now you know that Habitat is an Open Source project, and that Microsoft is committed to Open Source in the industry. The next thing we need to cover is what exactly Habitat is.
Thanks Ken! Let’s bring it back to the Habitat open source project in particular and let’s start discussing what, exactly, it is.
But to do that – we first need to understand the “why” of Habitat.
The reason Habitat was created is simple – building and running software is very painful
For example, you need one runtime for a Ruby application and a different runtime for a Java application
You need different packages for Debian Linux vs. Red Hat Linux, and I don’t even want to go into the difference in packages between Linux and Windows.
And you need different types of packages for bare metal, VMs, and container images.
Habitat seeks to alleviate this pain…
By transforming applications – including legacy applications – into modern applications.
What I mean by a modern application is that it….
Reduces complexity in automating the application, rather than making it more complex
In today’s world, having an application running on one server in your closet is not enough, it needs to be able to scale rapidly.
How you build a modern application starts with…
Storing all code – including the application code and all automation code – in the same source code repository.
We then use that stored code to build an artifact – or a software package – and that package is then sent to
an artifact repository. This is a place where you can store software artifacts, find other artifacts, and download those artifacts.
The artifacts in that repo can then be deployed to a bare metal server, a container, a cloud instance, or a VM, without needing to change the artifact. That artifact can run anywhere.
Habitat was designed with this workflow at the forefront. If you are familiar with the 12 factor app, that is exactly the workflow we are embracing.
This is the “why” of Habitat…now that we know the WHY
Now, let’s cover WHAT Habitat is.
Briefly, Habitat is a new technology to build, deploy, and manage applications
That run in ANY environment – from traditional datacenters to containerized microservices – from the legacy infrastructure of 20 years ago to the most bleeding edge infrastructure innovations of today.
The reason for this is that in Habitat the application itself is the unit of automation. That application package we create contains everything needed to deploy, run, and maintain the application.
And that leads us to this question – how exactly does this work? The clearest way to explain this is to look at the workflows for packaging and running an application with Habitat.
Packaging an application with Habitat starts with…
The user – this is you at your workstation. It doesn’t matter if you are using Windows, Mac, or Linux, Habitat works on all of them.
And on your workstation you are going to create a plan. This contains instructions on how the application should be set up wherever it is deployed. When you are creating a package for a Linux system, you would write this plan in Bash. If it’s for a Windows system – and we will see a demo of this in just a bit – you would write it in PowerShell. The PowerShell functionality is not quite production ready yet, but it will be soon.
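As a rough sketch, a Linux plan is a plan.sh file of Bash variables and callback functions. The origin, package name, and version below are made up for illustration, and the two callbacks shown are only a small subset of what Habitat supports:

```shell
# plan.sh -- hypothetical plan for a Node.js app (origin/name/version are illustrative)
pkg_name=sample-node-app
pkg_origin=myorigin
pkg_version="1.0.0"
pkg_deps=(core/node)            # runtime dependency pulled from the depot

do_build() {
  # prepare the application; for a Node app this might be:
  npm install
}

do_install() {
  # copy the built app into the package's install prefix
  cp -r . "${pkg_prefix}/app"
}
```

At build time Habitat reads these variables and invokes the callbacks; a plan for Windows would express the same structure in PowerShell.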
Then you use that plan and the compiled application code to create an artifact that contains everything in one place. That artifact is cryptographically signed with a key so you can verify that the artifact came from the place you expected it to come from. We also use this key when we run the application, and I’ll show you that in just a moment.
And then you can optionally upload that artifact to the public Habitat depot – where you can find Habitat packages by developers all over the world.
And this is what that public depot looks like – you can access it at app.habitat.sh. In this screencap I was searching for a node Habitat package and there are several available; we’ll use the core/node one down there at the bottom in a demo later in this talk. So that is how you package an application with Habitat…
Now let’s go through how you would run that application with Habitat. This is where the magic happens.
If you have your application artifact on the depot, you can find it on that depot.
And pull that artifact from that depot…
Onto wherever you want to run it – whether bare metal, containers, a machine image, or a VM.
If you are not using the depot, you can also upload it to wherever you want to run it through scp or whatever method you prefer.
You then run that package as a service within the supervisor – we’ll go into what I mean by a service and how it interacts with the supervisor later.
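In command form, the pull-and-run steps above might look like the following. This is a dry-run sketch: the `run` helper only prints each command, and the exact `hab` syntax may vary by Habitat version:

```shell
# Dry-run: print the hab commands instead of executing them.
CMDS=""
run() { CMDS="$CMDS$*; "; echo "+ $*"; }

run hab pkg install core/node   # pull the artifact from the depot
run hab start core/node         # run it as a service under the supervisor
```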
Once you have that application running somewhere – in this example it’s a VM – with the supervisor, you can still get information into and out of the application through a RESTful API.
And this is really useful if you have something like a load balancer that needs to send traffic to that application, run health checks, and more. No matter where that application is running, you can still get information into and out of it.
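A load balancer's health probe against that API might look like the sketch below. The port (9631) and the /health path are assumptions here – check the documentation for your Habitat version – and the helper only builds the request line rather than issuing it:

```shell
# Sketch of a health probe against the supervisor's HTTP API.
# Port 9631 and the /health path are assumptions -- verify against your docs.
probe() { echo "GET http://$1:9631/health"; }

probe 10.0.0.5   # builds the request a load balancer would issue to that node
```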
The real magic of the supervisor comes when you have more than one instance of an application running – let’s say we have four VMs, all running the same package.
The supervisors on each of these VMs form a ring. They will use the key we used to sign the package to decide whether to allow another VM into the ring – the packages all have to be signed with the same key in order for the supervisors to communicate with each other.
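A toy model of that admission check is below. In reality the check is cryptographic signature verification, not a string comparison, and the key name is made up for illustration:

```shell
# Toy model of ring admission: a candidate joins only if its package was
# signed with the ring's key (real Habitat verifies signatures cryptographically).
RING_KEY="myorigin-20160614120000"   # hypothetical signing key name

join_ring() {
  if [ "$1" = "$RING_KEY" ]; then echo "admitted"; else echo "rejected"; fi
}

join_ring "$RING_KEY"          # admitted
join_ring "someone-else-key"   # rejected
```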
They do this communication over an encrypted gossip protocol, which they can then use to self-organize into different topologies – we’ll cover those topologies a little later.
When I last spoke to Jamie Winsor – one of the creators of Habitat – he described it as an umbrella over many components, all designed to allow you to build software once and run it anywhere. Many of these components are still in development. Even though they are very promising…
I’m here to talk about what you can do with Habitat today – how you can create and run application packages that can run (nearly) anywhere, move existing applications into the cloud, and give your applications the intelligence to recover from failure on their own, without the need for a central supervisor or controller.
I want us to understand what we can do with Habitat today and then together – since this is open source and this project will largely be driven by our community and contributors - we can shape where it goes in the future.
Today, we will first discuss the Habitat Supervisor – which is what you use to run your application packages. I think this is where the true genius of Habitat shines.
Then the Habitat packaging format – this is what you use on your workstation to make that software artifact that can run nearly anywhere. This is what you will run with the Supervisor.
Finally – although you can use Habitat without containers, and we will see that in action, Habitat truly shines when it comes to working with containers. Habitat complements things like Docker and Mesosphere and makes them work even better.
So let’s start with the Habitat supervisor.
Briefly – the Habitat Supervisor is an intelligent runtime with deployment coordination and service discovery – we’ll go deeper into what that means in just a moment.
The supervisor is what allows you to run the application within your hart package natively on hardware…
In a VM or Cloud Instance
In a container with or without a Container Management Service like Mesosphere.
One of the main things the supervisor does is act as a process manager…
And what this means is whenever you pull that HART package onto whatever infrastructure you want to run it on, Habitat will start up and monitor that package.
Additionally, it will also receive and implement configuration changes. So if you upload a new version of that package to the public depot – say for a security patch – the supervisor will be monitoring that depot and become aware of it, then pull in that new package, install it, and make whatever configuration changes needed for it.
And it also runs services. Let’s take a closer look by what I mean by services.
A service is one habitat package running under a supervisor…
And the simplest example is one supervisor running one service on one piece of infrastructure – whether that’s a server, a VM, or a container.
So one service running would look like this. In this example we have a VM running one service under one supervisor. Let’s take a look at a demo of this simple example.
The examples we just saw were one supervisor running one service. But one service is pretty limiting – when we go beyond one service and scale out, we need a supervisor ring.
And let’s look at an example of this. Let’s say we start off with one supervisor on a VM running MySQL.
And we decide we want a MySQL cluster, so we spin up two more VMs and install the MySQL hart package on them. Since that MySQL package on each of them is signed with the same key, they will be allowed to form a supervisor ring.
What the ring allows these VMs to do is communicate with each other over a gossip protocol – remember, that communication is all encrypted.
With a MySQL cluster like this, it’s common to use a leader/follower topology. What this means is once we have those three VMs running that MySQL service, they need to elect a leader. Habitat has a built-in algorithm for electing a leader in a cluster such as this. And it’s going to run that election algorithm...
And let’s say this one on the top is elected the leader, that means it will receive the write requests which come to this MySQL cluster. That means…
The other two VMs are designated as followers, and they receive the read requests that come into that cluster.
Now let’s say something goes wrong. Something bad happens and the leader goes offline.
The two other supervisors will notice this when they cannot connect with the leader over that gossip protocol.
And they will take it out of that supervisor ring. So now we are down to two VMs, and at the moment both are still followers.
They will realize that they don’t have a leader – they are not implementing the topology that they have promised to implement. So they will hold another election with that built-in election algorithm.
Let’s say this supervisor wins the election and becomes the leader. It will automatically start receiving write requests.
And that means this supervisor would be the follower and receive read requests.
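The failover sequence above can be sketched as a toy simulation. The "lowest member ID wins" rule here is a stand-in for illustration only – it is not Habitat's actual election algorithm:

```shell
# Toy re-election after leader failure ("lowest ID wins" is a stand-in rule).
elect() { printf '%s\n' "$@" | sort | head -n 1; }

leader=$(elect vm-a vm-b vm-c)   # vm-a wins the first election
echo "leader: $leader"

# vm-a fails and drops out of the ring; the survivors hold a new election:
leader=$(elect vm-b vm-c)        # vm-b becomes the new leader
echo "leader: $leader"
```

The point is the shape of the behavior, not the rule: the surviving members detect the gap and converge on a new leader with no outside coordinator.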
This illustrates that Habitat assumes that failures will happen and that they are normal. We don’t try to anticipate every edge case in the beginning because, frankly, we can’t. There will always be something unforeseen that happens somewhere in an application’s lifecycle.
The remaining healthy components will self-organize and re-converge on their own. There’s no central coordinator that re-organizes and re-converges them; they have the intelligence to do this on their own.
The Habitat Supervisor supports two different topologies at this time. We just saw this first one – Leader/Follower.
And the other topology is the standalone topology – which assumes that every member of the supervisor ring is working as an individual that is in communication with all the others. And these are based on existing standards for IT infrastructure.
The supervisor is also in charge of handling updates to the package or configuration changes and rolling them out to the rest of the ring.
It’s going to detect when a new release becomes available on the depot…
Then deploy it based on a defined update strategy.
Some of the update strategies that we support, or will support soon, include all at once – that’s when, whenever a new package becomes available, all the supervisors update to that new package immediately.
Some strategies coming soon include rollout strategy – that means if we have a supervisor ring with, say, four supervisors – one will update, then the next one will update, then the next one. Only one supervisor is updating at any given time.
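The difference between the two strategies can be sketched like this (illustrative only – strategy names and availability vary by Habitat version, and real supervisors gate on health checks rather than simply printing):

```shell
NODES="sup1 sup2 sup3 sup4"

all_at_once() {          # every supervisor updates the moment it sees the release
  for n in $NODES; do echo "$n updating"; done
}

rolling() {              # only one supervisor is ever mid-update
  for n in $NODES; do
    echo "$n updating"
    echo "$n healthy"    # wait for health before the next node begins
  done
}
```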
That’s the Supervisor, which is used to supervise and manage applications packaged with Habitat. Let’s now step back a bit and talk about HOW we make that package for the Supervisor to supervise.
Habitat packages are created through the Habitat Packaging format– this is what you use to create your artifact on your workstation.
Habitat packages are in a format called the HART format – because we heart you. That stands for Habitat Artifact.
And these HART packages contain the source code for the application itself – if for example you had a Ruby on Rails application you were automating, you would have all your Ruby on Rails code within this package…
…along with the application code, these packages also include everything needed to deploy and run the application. This is all kept in one place. Now, enough with the talking about it, let’s see this HART package format in action.
As we just saw, Habitat plans use Bash for packages that will run on Linux…
But Habitat can also create plans in Powershell for installing on Windows infrastructure. This is still in the development stages, but I can give you a bit of a preview of building a package for Windows.
Along with running hart packages with Habitat…
Habitat also allows you to export your hart packages into other formats.
And by far the most popular format people export to is a Docker container image.
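That export is a single command; the dry-run sketch below prints it rather than running it (the origin/name is made up, and the exact syntax should be verified with `hab pkg export --help` for your version):

```shell
# Dry-run: print the export command instead of executing it.
run() { echo "+ $*"; }

run hab pkg export docker myorigin/sample-node-app
```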
Which brings me to Habitat and Containers. Habitat DOES work very well without containers, as we’ve seen, but I think it really shines when we use it WITH containers.
Getting a software package to run anywhere is very difficult. That’s not a new problem, we all know that.
And containers were supposed to solve this problem. But…
There’s still a lot of pain with containers in their current state.
Part of this pain is that there is a major learning cliff in between using a container in a development environment and using it in a production environment. Who here uses containers in dev? What about in prod? How long did it take you to get to using them in prod?
The other major issue is that it is easy for containers to become black boxes – where we deploy them to different environments without fully understanding everything that is inside of them.
And, among other things, this can cause serious security issues if a container has something in it with a security flaw, but we don’t know or don’t remember to update it.
Part of the reason for this is that traditional containers are built by starting with the operating system, then adding system libraries, then application libraries, and then finally the application itself at the very end. This adds bloat and complexity to containers.
Habitat, by contrast, turns the traditional container workflow on its head. You start with the application. Once you have the application, you add in the libraries needed to run it, and only at the end do you add a bare-minimum operating system, just enough to run your application and nothing more.
Again, Habitat starts with the application, and then the bare minimum operating system comes in later.
When an app has dependencies, the app itself declares those dependencies and resolves them. We don’t add in those dependencies pre-emptively – the app will pull what it needs and only what it needs.
And, even when that package is within a container, it still has that exposed API we talked about earlier. Other outside services – such as a load balancer – can still interact with the app easily, even though it is in a container.
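One way this shows up in the CLI is through service binds and runtime configuration. The example below is a hedged sketch: the bind name `backend` and the service group `myapp.default` are assumptions for illustration, not from the demo:

```shell
# A load balancer service can "bind" to the app's service group,
# consuming its exposed port and configuration through the Supervisor API
hab svc load core/haproxy --bind backend:myapp.default

# Configuration can also be changed at runtime through that same API;
# hab config apply reads the new config from stdin when no file is given
echo 'worker_count = 4' | hab config apply myapp.default 1
```

The trailing `1` is an incrementing version number the Supervisors use to decide whether a configuration change is new.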
In summary, when you create a container image with Habitat…
You know exactly what went into the container and exactly what is configurable about the container, it’s not a black box.
Now, let’s look at this in action with another demo.
Once you have that container image, you can run it locally, but you’re going to want to deploy it somewhere, it doesn’t do much good just sitting on your workstation.
You can deploy it using a container scheduling service such as Kubernetes or Mesosphere…
Or a cloud-based container service, like AWS ECS or Azure Container Service. We’re going to look at a demo of deploying our container image to Azure Container Service, but before we do, I’d like to bring Ken Thompson back to say a few words about it.
Now let’s see this in action: let’s deploy our image to Azure Container Service.
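Before handing the image to a cluster, a typical flow is to run it locally and then push it to a registry the cluster can reach. This is a sketch; the image and registry names (`myorigin/myapp`, `myregistry.azurecr.io`) are hypothetical:

```shell
# Run the exported image locally to verify it starts cleanly
docker run -it -p 8000:8000 myorigin/myapp

# Tag and push it to a registry your cluster can pull from
docker tag myorigin/myapp myregistry.azurecr.io/myapp:latest
docker push myregistry.azurecr.io/myapp:latest
```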
As we come to the end of this talk, there are a few things I would like you to take with you…
You CAN build software ONCE and run it almost ANYWHERE
You CAN move a legacy application into the cloud without rewriting it
You can EMPOWER your applications to recover from failure on their own
With Chef Habitat, a new open-source product from Chef.
The key to understanding what Habitat is is to realize that it is not Infrastructure Automation, it’s not Container Automation; it’s Application Automation. The application itself is what we are automating.
And Habitat is 100% Open Source. Where it goes from here is going to be driven largely by our community of users and contributors.
If you’d like to get involved – and I hope you do! – check out habitat.sh/community and stop by the Chef booth in the exhibit hall!
And with that…
Again, I’m Nell Shamrell-Harrington, a Sr. Engineer at Chef Software. That’s my contact info…