Docker allows simple environment isolation and repeatability so that we can create a run-time environment once, package it up, then run it again on any other machine. Furthermore, everything that runs in that environment is isolated from the underlying host (much like a virtual machine). And best of all, everything is fast and simple.
In this presentation, we provide a basic introduction: What is Docker? Why use it? We also demonstrate how to use Docker to compose and deploy an application.
These slides highlight what’s new in Grails® framework 5, Micronaut Integration, Groovy 3, and the current developments around Grails framework.
It was originally presented at the Madrid GUG on 15 December 2021.
This presentation was held at the Spring One 2GX 2015 conference in Washington DC.
The presentation explains how to migrate an existing Grails 2 application to the new Spring Boot and Gradle based Grails 3. It covers migrating plugins and applications, feature gotchas, and best practices.
The conference presentation also included an extensive live coding section in which I migrated an existing application to Grails 3.
Gradle is an open-source build automation tool focused on flexibility, build reproducibility and performance. Over the years, this tool has evolved and introduced new concepts and features around dependency management, publication and other aspects on build and release of artifacts for the Java platform.
Keeping up to date with all these features across several projects can be challenging. How do you make sure that all your projects can be upgraded to the latest version of Gradle? What if you have thousands of projects and hundreds of engineers? How can you abstract common tasks for them and make sure that new releases work as expected?
At Netflix, we built Nebula, a collection of Gradle plugins that helps engineers remove boilerplate in Gradle build files, and makes building software the Netflix way easy. This reduces the cognitive load on developers, allowing them to focus on writing code.
In this talk, I’ll share with you our philosophy on how to build JVM artifacts and the pieces that help us boost the productivity of engineers at Netflix. I’ll talk about:
- What is Nebula
- What are the common problems we face and try to solve
- How we distribute it to every JVM engineer
- How we ensure that Nebula/Gradle changes do not break builds so we can ship new features with confidence at Netflix.
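As a rough illustration of the boilerplate Nebula removes, a build file can stay very small once the plugins take over release and publication logic. The plugin IDs and versions below are illustrative, not the exact Netflix setup; check the Nebula docs for current coordinates:

```groovy
// build.gradle - illustrative sketch of a Nebula-based build
plugins {
    id 'java'
    id 'nebula.release' version '17.1.0'        // infers versions from git tags
    id 'nebula.maven-publish' version '18.4.0'  // opinionated Maven publication
}

group = 'com.example'

repositories {
    mavenCentral()
}
```

With something like this in place, a single `./gradlew final` would cut a release and publish it, with no hand-written versioning or publishing code in the build file.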
---
About Roberto: Roberto Perez Alcolea is a Senior Software Engineer at Netflix. He is a member of the Java Platform team providing the core language and framework components that enable the Java community at Netflix. He's an active maintainer of Netflix Nebula Plugins (https://nebula-plugins.github.io/) and is passionate about Gradle. Prior to that, he spent several years building high-performance APIs with Ratpack and web applications using Grails.
Presented at Open Source 101 2022
Presented by Milana Cap, XWP
Abstract: Your code can be all rainbows and unicorns, cutting-edge and shining, but if there’s no documentation, does it even exist?
Documentation can make or break your open source project. Don’t believe me? Let me tell you a story or three about writing and managing documentation for the largest open source CMS community. The WordPress documentation.
Slides from OpenSource101.com Talk (https://opensource101.com/sessions/wtf-is-gitops-why-should-you-care/)
If you’re interested in learning more about Cloud Native Computing or are already in the Kubernetes community you may have heard the term GitOps. It’s become a bit of a buzzword, but it’s so much more! The benefits of GitOps are real – they bring you security, reliability, velocity and more! And the project that started it all was Flux – a CNCF Incubating project developed and later donated by Weaveworks (the GitOps company who coined the term).
Pinky will share from personal experience why GitOps has been an essential part of achieving a best-in-class delivery and platform team. Pinky will give a brief overview of definitions, CNCF-based principles, and Flux’s capabilities: multi-tenancy, multi-cluster, (multi-everything!), for apps and infra, and more.
Pinky will cover a little of Flux’s microservices architecture and how the various components deliver this robust, secure, and trusted open source solution. Through the components of the Flux project, users today are enjoying compatibility with Helm, Jenkins, Terraform, Prometheus, and more as well as with cloud providers such as AWS, Azure, Google Cloud, and more.
Join us for this informative session and get all of your GitOps questions answered by an end user in the community!
Speaker: Priyanka (aka “Pinky”) is a Developer Experience Engineer at Weaveworks. She has worked on a multitude of topics including front end development, UI automation for testing and API development. Previously she was a software developer at State Farm where she was on the delivery engineering team working on GitOps enablement. She was instrumental in the multi-tenancy migration to utilize Flux for an internal Kubernetes offering. Outside of work, Priyanka enjoys hanging out with her husband and two rescue dogs as well as traveling around the globe.
Setting up Notifications, Alerts & Webhooks with Flux v2 - Alison Dowdney, Weaveworks
Watch the recording here: https://youtu.be/cakxixc-yQk
❗️ Notifications & Alerts ⚠️
When operating a cluster, different teams may wish to receive notifications about the status of their GitOps pipelines. For example, the on-call team would receive alerts about reconciliation failures in the cluster, while the dev team may wish to be alerted when a new version of an app is deployed and whether the deployment is healthy.
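In Flux v2 these alerts are declared with Provider and Alert custom resources. A minimal sketch, closely following the Flux notification guide (the names, namespace, and secret are illustrative):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: general
  secretRef:
    name: slack-url        # secret holding the Slack webhook URL
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: on-call-webapp
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error     # on-call only sees failures
  eventSources:
    - kind: Kustomization
      name: '*'
```

The Provider points at a Slack webhook URL stored in a secret, and the Alert forwards error-severity events from all Kustomizations to it.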
Webhook Receivers
The GitOps Toolkit controllers are pull-based by design. To notify the controllers about changes in Git or Helm repositories, you can set up webhooks that trigger a cluster reconciliation every time a source changes. Using webhook receivers, you can build push-based GitOps pipelines that react to external events.
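A push-based trigger of this kind is declared with a Receiver resource. A minimal sketch (the resource names and `webhook-token` secret are illustrative):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Receiver
metadata:
  name: github-receiver
  namespace: flux-system
spec:
  type: github
  events:
    - "ping"
    - "push"
  secretRef:
    name: webhook-token    # shared secret used to verify webhook payloads
  resources:
    - kind: GitRepository
      name: webapp
```

The notification controller then exposes an HTTP endpoint for this receiver, which you point a GitHub webhook at so that pushes trigger an immediate reconciliation of the `webapp` GitRepository.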
Alison Dowdney, Developer Experience Engineer at Weaveworks and CNCF Ambassador, walks through how to define a provider, an alert, git commit status, exposing the webhook receiver and defining a git repository and receiver.
Resources
Flux2 Documentation: https://fluxcd.io/docs/
Flux Guide: Setup Notifications: https://fluxcd.io/docs/guides/notifications/
Flux Guide: Setup Webhook receivers: https://fluxcd.io/docs/guides/webhook-receivers/
Flux Roadmap: https://fluxcd.io/docs/roadmap/
Alison's Demo Repo: https://github.com/alisondy/flux-demos
Quick overview of building plugins using pure JavaScript or Google Web Toolkit (GWT), and a group discussion to identify important UI extension points for Gerrit contributors to make available.
These are the slides for a talk/workshop delivered to the Cloud Native Wales user group (@CloudNativeWal) on 2019-01-10.
In these slides, we go over some principles of gitops and a hands on session to apply these to manage a microservice.
You can find out more about GitOps online https://www.weave.works/technologies/gitops/
How Mettle created a self-service platform for developers allowing them to successfully deploy hundreds of changes to production per week with confidence.
Writing Commits for You, Your Friends, and Your Future Self - All Things Open
Presented at Open Source 101 2022
Presented by Victoria Dye, GitHub
Abstract: For many developers, a Git commit is just a step in their source control ritual: an arbitrary save point at the end of a work day, the Nth attempt at finding the right syntax for a CI workflow, or a satisfying bookend to fixing a bug. But commits provide far more than that: they tell the story of how each line of code was created and modified. Despite their unassuming nature, they are one of the best ways to answer the question "why did I write that?"
The story told by a repository's commit history is both richer and more essential in the open source world. Well-structured commits can guide you in the implementation of a new feature, break down complex reviews into easily-digestible increments, and provide an invaluable resource to future contributors hoping to extend functionality or find the root cause of a bug.
This talk will show how high-quality commit organization can benefit you now and in the future. First, we will introduce strategies and tools to tell a clear story through commits. Later, we will demonstrate how to use an informative commit history while reviewing code and investigating bugs.
Gerrit has a long history at the Eclipse Foundation. Initially only the EGit and JGit projects could use the tool, but since February 2012 Gerrit has been a first-class citizen in the Eclipse ecosystem. Every Eclipse Foundation project can immediately start using its powerful code review capabilities. Together with TDD and CI, these capabilities create a safety net against bugs in software development.
For quite a long time Gerrit's feature set was fairly closed, and adding new functionality required changes to the upstream code base. That meant you either ended up in a port-and-rebase nightmare or contributed your changes back to the community... where they might not be accepted because they solved your domain's problem rather than something vital to the community edition.
Plugin support was introduced in Gerrit 2.5. Since then, the number of available extension points has increased substantially. In this presentation we will explore the Gerrit plugin architecture. We will discuss extensions and plugins, especially the differences between them and which to choose when. We will see how to combine everything (including the web UI) to build your first full-blown Gerrit plugin.
On August 7th, I attended a GDG Beijing meetup and gave a presentation: Android Gradle Build System - Overview.
It mainly covers build-system background knowledge, source code, interesting parts of the code, and writing a plugin.
Stefan is currently working on an exciting new project, the GitOps Toolkit (https://github.com/fluxcd/toolkit), an experimental toolkit for assembling CD pipelines the GitOps way.
PuppetConf 2016: Using Puppet with Kubernetes and OpenShift - Diane Mueller and Daniel Dreier, Puppet
Here are the slides from Diane Mueller and Daniel Dreier's PuppetConf 2016 presentation called Using Puppet with Kubernetes and OpenShift. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
Hybrid and Multi-Cloud Strategies for Kubernetes with GitOps - Sonja Schweigert
One of the biggest advantages Kubernetes offers is that it is agnostic to infrastructure and capable of managing diverse workloads running on different compute resources. This allows organizations to support multiple developer platforms that operate across many environments, such as on-premise, hybrid, and multiple clouds.
Streamlined processes and automation are pivotal for operations when managing clusters at scale and maintaining security and policy checks. Paul Curtis, Principal Solutions Architect, will demonstrate GitOps and the Weave Kubernetes Platform in a hybrid and multi-cloud setup.
Learn how to:
Use model-driven automation to increase reliability and stability across environments
Simplify multi-cluster management with GitOps
Enable developers to push code to production daily (self-service)
Improve utilization and capacity management through Kubernetes platforms on cloud and on-premise infrastructure
Docker Demystified - Virtual VMs without the Fat - Erik Osterman
DevOps guru Erik Osterman has been at the forefront of large-scale cloud architectures as the Director of Cloud Architecture for CBS Interactive and advisor for numerous successful startups. Now he’s ready to show you why Docker is all the rage.
A new movement is taking the cloud by storm: Docker is changing the way organizations deploy services so that they can operate more efficiently at scale, both in the cloud and on bare metal. In the same way shipping containers revolutionized the cargo industry, cheap, zero-penalty Linux Containers (LXC) are like shrink-wrapped VMs without the fat. What's not obvious, however, is how to roll your own Docker deployments and which tools you'll need to leverage along the way.
Tune in to learn how you too can run a microservices architecture that supports thousands of containers, controlled effortlessly from your laptop's command line.
This webinar is free to attend and will cover:
• Principles of Immutable Infrastructure
• Docker Basics
• Docker for Dev & QA
• Docker in Production
• Business Drivers
• Answering the Question: Is Docker Ready for Prime Time?
Webcast at http://webcast.cloudposse.com/
Enhancing the application development process in all its phases (building, scaling, shipping, deploying, and running) plays a vital role in today's competitive IT industry by shortening the time between writing code and running it.
Provides an introduction to Kubernetes, its various components, and its architecture. Intended for beginners looking to understand the basics and evaluate Kubernetes.
Also covers alternatives such as Swarm, Diego, and Nomad.
Includes information on resource monitoring in Kubernetes using Heapster and cAdvisor.
Thanks to tools like Vagrant, Puppet/Chef, and Platform-as-a-Service offerings like Heroku, developers are used to being able to spin up a development environment that is the same every time. What if we could go a step further and make sure our development environment is not only using the same software, but is 100% configured and set up like production? Docker lets us do that, and much more. We'll look at what Docker is, why you should look into using it, and all of the features that developers can take advantage of.
Avi Cavale presentation at DevOpsDays India, September 2015
2014 was the year of Docker. The container-based world exploded on the scene with the promise to reinvent how you think about distributed applications. Continuous Integration/Continuous Delivery in support of DevOps is proving to be a successful early use case for a container-based architecture. Learn how Shippable has designed its Continuous Integration/Continuous Delivery system by fully leveraging containers and a microservices architecture, resulting in reduced Dev/Test cycle times and lower infrastructure costs.
I tried to dockerize my app but I had to PaaS - Jorge Morales
In this talk I describe how I tried to run my application in Docker containers in production, how difficult and painful the process was, and how a PaaS platform helped me with many things I hadn't thought of before.
Docker Overview - Rise of the Containers - Ryan Hodgin
Containers allow applications to become more portable, organized more efficiently, and configured to make better use of system resources. This presentation explains Docker's container technology, DevOps approach, partner ecosystem, popularity, performance, challenges, and roadmap. We'll review how containers are changing application and operating-system design.
Docker Compose is a tool that lets you create and manage development and test environments in a simple, repeatable way.
We'll see how to create an enterprise-grade development environment for Node that lets us automate tasks and test our code effectively.
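A minimal sketch of such a Compose file (image tags, ports, and the `dev` npm script below are assumptions for illustration):

```yaml
# docker-compose.yml - illustrative sketch of a Node dev environment
version: "3.8"
services:
  app:
    image: node:18
    working_dir: /usr/src/app
    volumes:
      - ./:/usr/src/app       # live-mount the source for fast iteration
    command: npm run dev      # assumes a "dev" script in package.json
    ports:
      - "3000:3000"
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker compose up` would then bring up the app and its database with one command, the same way on every machine.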
Docker 101 is a series of workshops that aims to help developers (or anyone interested) get started with Docker.
Workshop 101 is where the audience has their first contact with Docker, from installation to managing multiple containers.
- Installing docker
- Managing images (docker rmi, docker pull)
- Basic commands (docker info, docker ps, docker images, docker run, docker commit, docker inspect, docker exec, docker diff, docker stop, docker start)
- Docker registry
- container life cycle (running, paused, stopped, restarted)
- Dockerfile
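The final workshop topic, the Dockerfile, ties the earlier commands together. A first Dockerfile can be as small as this (purely illustrative):

```dockerfile
# Minimal illustrative Dockerfile for the workshop:
#   build with: docker build -t hello .
#   run with:   docker run --rm hello
FROM alpine:3.19
CMD ["echo", "Hello from Docker 101"]
```

From there, attendees can layer in the commands covered earlier (docker ps, docker inspect, docker exec) against their own running container.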
eXoer on the grill: eXo Add-ons factory using Docker and Codenvy - eXo Platform
A few months ago, Codenvy released a great tutorial, "Creating an eXo Factory Using Codenvy and Docker" (http://blog.codenvy.com/creating-codenvy-factory-exo-extensions-development), which gave great detail about how eXo, Codenvy, and Docker can work together to give developers an easy way to code eXo add-ons.
In this presentation we will bring insights about how we used Codenvy’s Factories (with Docker recipes) to give developers a one-click easy way to begin coding eXo Add-ons.
eXoers on the Grill aims to provide some incentive and fresh air for our staff to constantly rethink our methods, spread good practices, promote technologies or tools, generate ideas, and so on. All teams are invited to contribute by picking hot topics of their choice and sharing them with other teams.
Docker All The Things - ASP.NET 4.x and Windows Server Containers - Anthony Chu
Docker is awesome and there's been a lot of excitement over .NET Core running in Linux containers. But why do older apps have to miss out on the fun? With Windows Server 2016 and Windows Server containers, there's finally a way to dockerize .NET 4.6 apps using the same Docker tools and commands as we're used to on Linux. In this intermediate level talk, I'll give an overview of Docker and Windows Server containers. Then I'll demonstrate different ways to run existing ASP.NET Web API, MVC, and even WebForms applications inside Docker containers.
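As a rough sketch of what dockerizing an existing ASP.NET 4.x app looks like (the base-image tag and paths below are assumptions for illustration; this requires Windows Server containers):

```dockerfile
# Illustrative Dockerfile for an existing ASP.NET 4.x web app
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
# Copy the published site into IIS's default web root
COPY ./published/ /inetpub/wwwroot
```

The same `docker build` and `docker run` commands used on Linux then apply unchanged on a Windows container host.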
This presentation gives a brief overview of the Docker architecture, explains what Docker is not, describes basic commands, and explains CI/CD as an application of Docker.
Not leading-edge but bleeding-edge experience dockerizing a Domino server and running XPages applications. Lotus Notes applications run just fine as well.
In the future, IBM will make standing up Domino servers more automated. We do have one manual configuration step once the server starts, but it is dockerized and replicates with the on-prem Domino domain.
Smart TV Buyer Insights Survey 2024 - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions) - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we'll discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
2. www.tothenew.com
About Me
Puneet Behl
Associate Technical Lead
TO THE NEW Digital
puneet.behl@tothenew.com
GitHub: https://github.com/puneetbehl/
Twitter: @puneetbhl
LinkedIn: https://in.linkedin.com/in/puneetbhl
3.
Agenda
● Understanding the problem of shipping code
● The solution
● What is Docker Container?
● Benefits of Docker
● Understanding Docker components & architecture.
● Installation
● Hello world demo
● Running Stack of services using Docker Compose
● Moving to Production
11.
● “Works on my machine” syndrome
● Hard disk crashed -> New setup -> Nothing works :(
● Adding a new developer to the team
● Going live? Please DO NOT break production.
● Dependency Hell - the same dependency needed in different versions
In a nutshell, what are the challenges?
16.
A Docker container allows application developers to package up an
application together with all of the dependencies it needs.
What is Docker Container?
28.
● Scalability - Containers are extremely lightweight, so scaling up
and scaling down is very easy.
● Portability - Just pull the image and start a container.
● Deployment - Because containers can run almost anywhere, we can
deploy to a desktop, physical server, virtual machine, public/private
cloud, etc.
● Efficient Resource Utilization - Multiple isolated containers share
the same host resources.
Benefits of using Docker
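The scalability point is visible directly on the command line: because containers start in milliseconds, scaling out is just starting more of them. A sketch (the image name myapp is hypothetical):

```shell
# Start three identical, isolated instances of the same image,
# each mapped to a different host port.
docker run -d -p 8081:8080 myapp
docker run -d -p 8082:8080 myapp
docker run -d -p 8083:8080 myapp

# Scaling down is just as cheap: stop and remove one instance.
docker stop <container-id> && docker rm <container-id>
```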
30.
● Build any application in any language using any stack.
● A Dockerized application can run anywhere, on anything.
● No longer need to cross our fingers when we deploy to production.
● Helps improve application design.
Why Developers Care?
32.
● Easy migration to different infrastructure
● Replicating environments is very easy.
● Fix an issue once, and it’s fixed everywhere.
● Fewer conflicts between developers, because everyone uses the same environment.
Why DevOps Care?
39.
Docker Core Components
● Docker Daemon
The Docker daemon runs on a host
machine. The user does not directly
interact with the daemon, but instead
through the Docker client.
41.
Docker Core Components
● Docker Daemon
● Docker Client
The Docker client, in the form of the
docker binary, is the primary user
interface to Docker. It accepts
commands from the user and
communicates back and forth with a
Docker daemon.
45.
Docker Workflow Components
● Docker Image
A Docker image is a read-only
template used to build Docker containers.
● Docker Container
Containers are created from images;
they can be started, stopped, and run.
● Docker Registries
Docker registries hold images. These
are public or private stores to which
you upload and from which you
download images.
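A minimal sketch of how these components interact on the command line (the image and repository names are illustrative):

```shell
# Pull an image from a registry (Docker Hub by default).
docker pull nginx

# Create and start a container from that image.
docker run -d --name web nginx

# Stop and restart the same container.
docker stop web
docker start web

# Push your own image to a registry (assumes you are logged in and
# the image is tagged with your repository name).
docker push myuser/myimage
```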
48.
Installation
● Linux
Follow the steps for your version of Linux:
https://docs.docker.com/engine/installation/
apt-get install docker-engine
yum install docker-engine
● Mac or Windows
Use the Docker Toolbox installer: https://www.docker.com/docker-toolbox
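Once installed, a quick way to verify the setup (a sketch; exact output varies by version):

```shell
# Check that both the client and the daemon are reachable.
docker version

# Run a throwaway container to confirm everything works end to end.
docker run hello-world
```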
59.
Orchestration tools are needed in a Docker cluster environment, to:
● Provision servers
● Deploy containers
● Manage servers
Some of the available orchestration tools:
● Docker Swarm
● Centurion
● Amazon EC2 Container Service
Moving to Production
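As one illustration of what these tools provide: in Docker releases after this talk, a built-in swarm mode makes running and scaling a replicated service a one-liner (a sketch; the myapp image is hypothetical):

```shell
# Initialize a single-node swarm on the current machine.
docker swarm init

# Run a service with three replicas of the (hypothetical) myapp image.
docker service create --name web --replicas 3 -p 8080:8080 myapp

# Scale the service up or down at any time.
docker service scale web=5
```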
62.
Sample demo: https://github.com/puneetbehl/gr8conf-docker-demo
Docker documentation: https://docs.docker.com
Docker Toolbox: https://www.docker.com/docker-toolbox
Docker Hub: https://hub.docker.com/
"Docker: Up and Running" (O'Reilly), by Karl Matthias & Sean P. Kane
Union file system: https://en.wikipedia.org/wiki/UnionFS
How Docker differs from a normal virtual machine: http://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-normal-virtual-machine
Build and push images to Docker Hub: https://youtu.be/QCEWQs6LwAk
Docker Compose: https://youtu.be/kn_dUA6f29I
References
Editor's Notes
Let’s build the story, starting from the very beginning of the application.
In this step, we have added more developers, which means we need to set up runtimes on each machine with a very specific version and make sure that the application runs. Talk about that going forward.
Please note that we need an extra proxy server (Apache) for production and other environments.
As we need to set up multiple environments, I need to:
As DevOps, set everything up everywhere, again and again.
If the wrong version was installed, fix it everywhere.
Set up a new environment on completely different infrastructure.
Manually repeat.
Make sure/test that my application works in all environments every time I deploy a new version.
Talk about how things are getting complex.
Continue the story: now we break the application into one web application and multiple background workers, so that if something breaks in one of the background workers it does NOT impact the application the users interact with. But the problem this introduces is that now:
I need to test every application on each machine where it is going to run.
I need to update the deployment process on all of the servers.
Talk about the current solution to this problem without Docker.
Continue the story: we decide to break the application into separate services for different purposes. So now we need multiple runtimes to run the application, more libraries, more complexity, more challenges, like Dependency Hell, and making sure that:
everything runs on every machine,
everything runs in every environment.
Adding a new person to the team.
Problems like “Works on my machine”.
Challenges with deploying over multiple different infrastructures.
Build > Deploy > Run gets more and more complex.
This slide is like a review of all the challenges we discussed in the earlier flow.
Docker is an engine that
enables any payload to be encapsulated as a lightweight, portable, self-sufficient container.
can be manipulated using standard operations and run consistently in virtually any environment.
Explain how every service will run in an isolated environment.
Heavyweight Images
Image Layering
Scalability
Let’s say I have a web application and all of a sudden I get a lot of users.
Portability
Move to a new machine
Let’s say I want to move my current set-up to a new machine, to a cloud, to a physical network, etc.
Deployment
Less risk while deploying to different environments.
Rollback - I installed an application on the server apart from code changes, so now I need to roll back.
Efficient Resource Utilization
The technology is particularly appealing for developers because it is now easier than ever to make sure you develop, test and deploy using the same environment as your colleagues, resulting in fewer issues caused by differences or missing libraries. Docker also offers developers the flexibility to quickly run their apps anywhere, whether it's on laptops, VMs or QA servers. More simply put, “Docker helps developers build and ship higher-quality applications, faster.” A clean, safe, hygienic and portable runtime environment for your app.
No worries about missing dependencies, packages and other pain points during subsequent deployments.
Run each app in its own isolated container, so you can run various versions of libraries and other dependencies for each app without worry.
Automate testing, integration, packaging… anything you can script.
Reduce/eliminate concerns about compatibility on different platforms, either your own or your customers’.
Cheap, zero-penalty containers to deploy services? A VM without the overhead of a VM? Instant replay and reset of image snapshots? That’s the power of Docker.
“Docker interests me because it allows simple environment isolation and repeatability. I can create a run-time environment once, package it up, then run it again on any other machine. Furthermore, everything that runs in that environment is isolated from the underlying host (much like a virtual machine). And best of all, everything is fast and simple.”
-Gregory Szorc, Mozilla Foundation
http://gregoryszorc.com/blog/2013/05/19/using-docker-to-build-firefox/
Sysadmins are finding the technology useful as well, because of the ability to standardize development environments among other reasons. “Docker helps sysadmins deploy and run any app on any infrastructure, quickly and reliably.”
Make the entire lifecycle more efficient, consistent, and repeatable
Increase the quality of code produced by developers.
Eliminate inconsistencies between development, test, production, and customer environments
Support segregation of duties
Significantly improve the speed and reliability of continuous deployment and continuous integration systems.
Because the containers are so lightweight, they address significant performance, cost, deployment, and portability issues normally associated with VMs.
On the business side, the benefits may be huge. By simplifying the way we deploy apps and creating more manageable environments, Docker can streamline all related processes, potentially offering a boost in productivity, reducing risks and reducing costs.
The Docker daemon is the runtime; it runs on the host machine and spins up containers. Explain how this differs on Windows and Mac.
Give example of image
Explain the whole workflow here
Installation is very easy on Linux boxes. You can use apt-get or yum to install docker-engine. You don’t need VirtualBox on a Linux box, as Docker runs natively on Linux.
Download and run the Docker Toolbox installer on Windows and Mac OS, and you are all set with a Docker environment on your computer. What you actually install is: the Docker client, server, Machine, Compose, Kitematic and VirtualBox. We will talk about Compose and Kitematic later.
The source code for the slide is a complete web application which can be run using the Spring Boot CLI. No need to worry if you are not familiar with Spring Boot. I can easily create a jar file with embedded Tomcat and the other jar dependencies using the Spring Boot CLI tool. To execute the jar file you just need Java and nothing else. So the jar file and the Java dependency are what matter in the demo.
You can see how easy it is to create the jar file and run it using Java.
Let me check the files in the current directory. I have already created the jar file. Let me try to run it using a Docker container.
docker run java:8 java -jar hello.jar
What this command actually does is instruct the Docker server to fetch the java image with tag 8. The Docker service will first look in the local cache, and if the image is found there, the cached image will be used to create the container. Once the container is created, it will execute the command you specified, in our case running the jar file. Let’s try to run this.
Oops, it failed. It was unable to find hello.jar. Very surprising, as you already saw in the directory listing that the file exists. Let’s recall the concept of a container: a container runs in isolation and does not have access to files outside the container file system. That’s why hello.jar is not there.
We have to make hello.jar available to the container by some means. One quick solution is to mount the working directory as a volume. Let's try this.
docker run -v $PWD:/app java:8 java -jar /app/hello.jar
so all the files and folders of the present working directory will be available in the /app folder of the container.
Let’s look for the usual text that says Tomcat is ready and listening on port 8080. Here it is.
Let’s access this web app from the browser: http://localhost:8080. Hmmm, it’s still not working. It seems that nothing is running on port 8080. Any guess what the issue might be?
Well, it’s not just the file system that is isolated; the whole network is isolated from the host machine. We have to explicitly publish the container’s port on the host machine. Let me try this.
docker run -v $PWD:/app -p 8080:8080 java:8 java -jar /app/hello.jar
Voilà, it’s working now.
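The three attempts from the demo, collected in one place (hello.jar is the jar built earlier with the Spring Boot CLI):

```shell
# Attempt 1: fails - the container's file system is isolated, so
# hello.jar on the host is not visible inside the container.
docker run java:8 java -jar hello.jar

# Attempt 2: mount the working directory as /app so the jar is
# visible, but the app is still unreachable from the browser -
# the network is isolated too.
docker run -v $PWD:/app java:8 java -jar /app/hello.jar

# Attempt 3: also publish container port 8080 on the host;
# http://localhost:8080 now works.
docker run -v $PWD:/app -p 8080:8080 java:8 java -jar /app/hello.jar
```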
Let’s prepare a Docker image that packs all the dependencies we have been passing as command-line arguments so far.
So what’s special about Dockerfiles?
A Dockerfile is just a simple text document that contains all the commands a user could call on the command line to assemble an image.
Here I am instructing Docker to use the java:8 image as the base image.
Add some meta information.
Copy some files to the container.
Expose port 8080 so that the host machine can bind to it.
Run the java command.
Each instruction is added as a layer and cached locally. The next time you rebuild the image, Docker will use the cached version of a layer whenever possible.
Let’s build the image and push it to Docker Hub so that it is available globally.
list the files
check the Dockerfile content
docker build -t imagename .
docker history to check the layers
Let me adjust the zoom so that it is readable
Let’s push the image
Each layer is pushed to Docker Hub
Go to the Docker Hub site
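Put together, the Dockerfile described in these notes might look like this (a sketch; the jar name, paths, and repository name are illustrative):

```dockerfile
# Use the java:8 image as the base image.
FROM java:8

# Some meta information.
MAINTAINER Puneet Behl

# Copy the application jar into the container.
COPY hello.jar /app/hello.jar

# Expose port 8080 so the host machine can bind to it.
EXPOSE 8080

# Run the jar with java.
CMD ["java", "-jar", "/app/hello.jar"]
```

Built and pushed from the directory containing the Dockerfile with `docker build -t myuser/hello .` followed by `docker push myuser/hello`.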
Real-life applications are not so simple. I have an application for managing TODOs, divided into services which can be developed independently. The notable components are the backend, the frontend and a worker. The backend uses MongoDB for persistence and the Spring framework to expose REST endpoints for managing TODOs. The worker is responsible for sending notifications and depends on the backend service; it is a lightweight Java application using the Quartz scheduler to send notifications as email and SMS. The frontend application is written with Node.js and AngularJS and provides a nice UI to manage TODOs. End users access the frontend app from a browser; it interacts with the backend service's REST endpoints.
Docker Compose is a nice tool for managing this type of distributed application consisting of many small applications.
Docker Compose will take care of building an image, or pulling it from Docker Hub, for each service described in your docker-compose.yml file. It will take care of linking services by making the necessary changes to the /etc/hosts file, so that a linked service can be referenced by name instead of a hardcoded IP address.
You can see the docker-compose.yml file content on the slide. It describes three services: mongodb, backend and frontend. mongodb is linked into the backend service, and the backend service is linked into the frontend service. Notice the backendServiceUrl environment variable, which uses backend instead of an IP address to refer to the backend application.
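A compose file matching that description might look like this (a sketch in the v1 compose syntax of the era; the service names and the backendServiceUrl variable come from the slide, everything else — build paths, ports, images — is illustrative):

```yaml
mongodb:
  image: mongo

backend:
  build: ./backend
  links:
    - mongodb          # reachable as "mongodb" via /etc/hosts
  ports:
    - "8080:8080"

frontend:
  build: ./frontend
  links:
    - backend          # referenced by name, not by IP address
  ports:
    - "3000:3000"
  environment:
    - backendServiceUrl=http://backend:8080
```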
Docker is installed.
Let’s check the version of the Docker client and server.
Let’s list the downloaded images > none
Let’s check whether any containers are running or stopped > none
java, mongo client, mongodb, nodejs > not installed
what files are available
check the docker-compose file content
let’s compose it… optionally you can specify -d for detached mode
it will fetch images from the local cache or Docker Hub
once an image is ready it will create the container and bind the linked services, if any
Docker Compose is suitable for small applications and is very handy for development.
A production environment is not that simple. You usually have clusters of services, and you want a tool that can manage those clusters and keep the required containers running. Scaling containers should be easy irrespective of which machine runs the new containers.
There are many orchestration tools available. Docker Swarm is one of them. You can explore them on your own.