Proper packaging of Python code has always been painful unless its maintainers strictly follow a very specific approach from the beginning. In many cases, however, the decision to package code for distribution is made only after a lot of code has already been written. This, together with the use of the Django framework, nearly killed the whole idea of convenient code distribution for Ralph, the open-source DC management system. This talk describes a few practical approaches, along with several compromises the Ralph maintainers had to accept in order to make it available to everyone while keeping the main focus on internal needs.
Managing Open Source software in the Docker era nexB Inc.
Heather Meeker from O'Melveny & Myers and Michael Herzog from nexB discuss the specific impact of Docker on open source software governance and compliance.
Introduction into Docker Containers, the Oracle Platform and the Oracle (Nati...Lucas Jellema
Containers are increasingly popular for packaging, shipping and running applications or microservices together with their completely configured runtime environment, including platform components such as the application server and data store. Continuous Delivery and automated DevOps hinge on containers. Docker containers are widely used, and Oracle has long been involved in the Docker community. This session introduces the Docker container images published by Oracle for flagship products such as Database, WebLogic, Linux and Java, and demonstrates how these can be used in environment provisioning, automated delivery pipelines and microservices architectures. The session shows how containers are built, shipped and run based on these images, and covers the Oracle Container Cloud, Wercker Cloud (for automated build and delivery pipelines) and Oracle Container Engine, the managed Kubernetes cloud service.
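A session like this typically ends with running one of the published images locally. A minimal docker-compose sketch follows; the registry path, tag and ORACLE_PWD variable are assumptions, not from the talk, so verify them against the Oracle Container Registry:

```yaml
# Sketch only: image path, tag and password variable are assumptions.
services:
  db:
    image: container-registry.oracle.com/database/express:21.3.0-xe
    ports:
      - "1521:1521"   # SQL*Net listener
    environment:
      ORACLE_PWD: change_me
```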
Intro to Docker at the 2016 Evans Developer relations conferenceMano Marks
Building large-scale apps has traditionally meant building large monolithic apps to handle everything. In the new age of the cloud and on-premise data centers, the world is increasingly looking to containers and microservices. These allow flexibility and agility: individual teams can choose the tools they need and be assured they'll work in the environment they want. It also has implications for how we do developer relations, making it easier to deploy samples without worrying about the environment. This session will look at microservices and how they are changing both the enterprise and our work in developer relations.
WSO2Con ASIA 2016: Revolutionizing WSO2 App Cloud with Kubernetes & DockerWSO2
Containerization is fast becoming the most efficient way to develop and deploy software solutions in the cloud, and Docker captured this space, attracting the industry within a very short period of time. Google addressed container cluster management by initiating the Kubernetes project, building on over a decade of experience running container technologies at scale.
WSO2 App Cloud enables you to deploy applications using these technologies. In this tutorial we will demonstrate how WSO2 products can be run on Kubernetes. We will also give a preview of the upcoming WSO2 App Cloud which is deeply integrated with Kubernetes for hosting applications.
This tutorial will include
An introduction to Docker and Kubernetes
Deploying WSO2 products on Kubernetes
Kubernetes as the runtime provider for WSO2 App Cloud
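For the "Deploying WSO2 products on Kubernetes" step above, the core artifact is a Deployment manifest. A minimal sketch, assuming a hypothetical image name and port; the official WSO2 images and Helm charts may differ:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wso2-product          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wso2-product
  template:
    metadata:
      labels:
        app: wso2-product
    spec:
      containers:
        - name: server
          image: wso2/wso2am:latest   # image name is an assumption
          ports:
            - containerPort: 9443     # management console (assumed)
```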
Most people think "adopting containers" means deploying Docker images to production. In practice, adopting containers in the continuous integration process provides visible benefits even if the production environment runs on VMs.
In this webinar, we will explore this pattern by packaging all build tools inside Docker containers.
Container-based pipelines allow us to create and reuse building blocks to make pipeline creation and management MUCH easier. It's like building with Legos instead of clay.
This not only makes pipeline creation and maintenance much easier, it also solves a myriad of classic CI/CD problems such as:
Putting an end to version conflicts in build machines
Eliminating build machine management in general
Step portability and maintenance
In a very real sense, Docker-based pipelines reflect lessons learned from microservices in CI/CD pipelines. We will share tips and tricks for running these kinds of pipelines while using Codefresh as a CI/CD solution as it fully supports pipelines where each build step is running on its own Docker image.
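The pattern of packaging build tools in images can be sketched as a Codefresh-style pipeline, where each step declares the image it runs in; the repository and image names below are hypothetical:

```yaml
version: "1.0"
steps:
  unit_tests:
    title: Run tests inside the tool image
    image: node:18        # the build tool ships in the image, not on the VM
    commands:
      - npm ci
      - npm test
  build_image:
    title: Build the application image
    type: build
    image_name: my-org/my-app   # hypothetical
    tag: latest
```

Swapping `image:` per step is what makes the "Lego blocks" reusable across pipelines.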
July OpenNTF Webinar - HCL Presents Keep, a new API for DominoHoward Greenberg
In 2019, HCL Labs reimagined what a REST API for Domino should look like. The initial prototype was shared with selected customers and partners; based on their feedback, Project KEEP will ship together with Domino.
KEEP allows applications to interact with Domino servers using simple HTTP calls, directly from a browser, desktop or mobile app, or with an application server in the middle. To make this API accessible to a large audience, open standards like OpenAPI and JWT were chosen over proprietary implementations.
This session will introduce KEEP and the design principles and use cases. Data security and ease of use will be highlighted. Warm up your Postman clients and curl command lines and follow along!
The presenters for this session will be Stephan Wissel and Paul Withers from HCL.
Docker Online Meetup #30: Docker Trusted Registry 1.4.1Docker, Inc.
In this Docker Online Meetup, Docker Software Engineer Tony Holdstock-Brown discusses the latest features in Docker Trusted Registry 1.4.1 including:
- Image deletion and garbage collection
- Set up and manage user accounts, teams, organizations, and repositories from either APIs or through the Trusted Registry user interface
- Search, browse, and discover images created by other users through either APIs or through the Trusted Registry UI
- New APIs for accessing repositories, account management, indexing, searching, and reindexing
- New experimental feature: Docker Trusted Registry now integrates with Docker Content Trust using Notary
Slide deck giving an overview of Docker and its related parts (Swarm, Compose, Machine, Docker Trusted Registry, etc.).
Presented in conjunction with Amazon Web Services and 2nd Watch in Seattle on October 28th.
What is Kafka & why is it Important? (UKOUG Tech17, Birmingham, UK - December...Lucas Jellema
Fast data arrives in real time and potentially at high volume. Rapid processing, filtering and aggregation are required to ensure timely reaction and up-to-date information in user interfaces. Doing so is a challenge; making it happen in a scalable and reliable fashion is even more interesting. This session introduces Apache Kafka as the scalable event bus that takes care of the events as they flow in, and Kafka Streams and KSQL for the streaming analytics. Both Java and Node applications are demonstrated that interact with Kafka and leverage Server-Sent Events and WebSocket channels to update the web UI in real time. User activity performed by the audience in the web UI is processed by the Kafka-powered back end and results in live updates on all clients.
This presentation includes a demonstration of remote database synchronization through Twitter.
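The keyed aggregation that Kafka Streams performs over an event stream can be illustrated in plain Python, with no Kafka client involved; the event shape and field names are made up for the sketch:

```python
from collections import defaultdict

def aggregate_clicks(events):
    """Count click events per user -- a plain-Python stand-in for the
    kind of keyed aggregation Kafka Streams applies to a topic."""
    counts = defaultdict(int)
    for event in events:
        counts[event["user"]] += 1
    return dict(counts)

stream = [
    {"user": "alice", "action": "click"},
    {"user": "bob",   "action": "click"},
    {"user": "alice", "action": "click"},
]
print(aggregate_clicks(stream))  # {'alice': 2, 'bob': 1}
```

In the real system this state lives in a Kafka Streams KTable and its updates are what get pushed to the UI over SSE/WebSockets.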
Adopting Docker for production applications and services used to be hard. You had to hand-roll a lot of the underlying infrastructure and write lots of custom code for service discovery, load balancing, orchestration, desired state, etc. Today, with the rise of open source container orchestration platforms and cloud-native offerings, it's a lot easier to get up and running.
Github repo for demo: https://github.com/elabor8/dockertalk
Introduction to Docker | Docker and Kubernetes TrainingShailendra Chauhan
Learn to build modern infrastructure using Docker and Kubernetes, and to develop and deploy your ASP.NET Core application using Docker. Learn to leverage container technology to build your ASP.NET Core application.
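Containerizing an ASP.NET Core app usually starts from a multi-stage Dockerfile. A minimal sketch, with the project name "MyApp" as a placeholder:

```dockerfile
# Build stage: SDK image compiles and publishes the app
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /app

# Runtime stage: smaller ASP.NET image runs the published output
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```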
Moving Legacy Applications to Docker by Josh Ellithorpe, Apcera Docker, Inc.
Looking to move your application to run in a container? Need to move existing x86 legacy applications to Docker? Let's break down your fundamental application concerns. This includes persistent storage, networking, configuration management, policy, logging, health monitoring, and service discovery. You won't want to miss this talk.
.docker : How to deploy Digital Experience in a container, drinking a cup of ...ICON UK EVENTS Limited
Matteo Bisi / Factor-y srl
Andrea Fontana / SOWRE SA
Docker is one of the best technologies available on the market to install, run and deploy applications faster and more securely than ever before. In this session you will see how to deploy a complete digital experience inside containers, enabling you to deploy a Portal in the time it takes to drink a cup of coffee. We will start with a deep overview of Docker: what Docker is, where you can find it, what a container is, and why you should use a container instead of a complete virtual machine. After the overview, we will look at how to install IBM software inside a container using Dockerfiles that run the setup with a silent setup script. In the last part we will talk about possible uses of this configuration in real-world scenarios, such as staging or development environments, or in a WebSphere Portal farm setup.
Learn about the advantages of Docker technology and how it enables Informix users and developers to quickly start using Informix. The Informix Docker image available on Docker Hub requires no initial setup or configuration.
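Starting such an image can be sketched with docker-compose; the repository name, port and LICENSE variable below are assumptions, so check the image's page on Docker Hub before using them:

```yaml
# Sketch only: verify image name, port and variables on Docker Hub.
services:
  informix:
    image: ibmcom/informix-developer-database:latest
    environment:
      LICENSE: accept
    ports:
      - "9088:9088"   # assumed Informix listener port
```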
The slides cover Docker and container terminology, but you will also see the big picture of where and how it fits into your current project or domain.
Topics that are covered:
1. What is Docker Technology?
2. Why Docker/Containers are important for your company?
3. What are its various features and use cases?
4. How to get started with Docker containers.
5. Case studies from various domains
We'll give an update on how Facebook manages CentOS at scale on our fleet, how working with the community helps us solve problems at scale and touch upon some of the tooling and processes we've developed. We'll specifically focus on the challenges around upgrading the fleet to a new major release and discuss how we plan to leverage CentOS Stream in our environment.
The Latest Status of CE Workgroup Shared Embedded Linux Distribution ProjectYoshitake Kobayashi
The CE workgroup of the Linux Foundation has started a project to share the work of maintaining long-term support for an embedded distribution by leveraging the work of the Debian and Debian LTS projects. Debian gives you pre-compiled binary packages, while the meta-debian layer enables installing customized packages to create similar or smaller images. If both use cases can share the same source code, the maintenance effort can be shared as well.
In this talk, Yoshitake will describe the details of meta-debian, which provides a meta layer for the Poky build system. The talk gives the latest status, technical details and lessons learned from its development.
All source code is available on GitHub, and related documentation is also available on GitHub and the elinux wiki.
Continuing the continuous integration theme, Max will talk about his approach to organizing continuous integration and deployment in Symfony projects. The talk covers the following topics:
- Dependency management
- Build process and tooling
- Continuous integration servers, in particular Jenkins, its plugins and jobs
- Development process in git
- Release deployment process
- Database migrations
- Release rollback
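The build and deployment stages listed above can be sketched as a Jenkinsfile; the stage names and tool choices are illustrative assumptions, not taken from the talk:

```groovy
// Minimal sketch of a Symfony CI pipeline; commands are typical
// defaults and may differ from the setup described in the talk.
pipeline {
    agent any
    stages {
        stage('Dependencies') {
            steps { sh 'composer install --no-interaction' }
        }
        stage('Tests') {
            steps { sh 'vendor/bin/phpunit' }
        }
        stage('DB migration') {
            steps { sh 'php bin/console doctrine:migrations:migrate -n' }
        }
    }
}
```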
We'll talk about how Facebook is leveraging CentOS Stream to manage our production fleet at scale. We'll cover the latest updates on our fleet migration from CentOS 7, talk about the tooling and processes we've developed and how they've evolved, and how we're working with the CentOS and Fedora communities. This talk is a followup to "Upgrading CentOS on the Facebook fleet" (https://www.youtube.com/watch?v=EajAjFCZz4Q&t=3s) from DevConf.cz 2020.
Public briefing from Unicon's IAM team on observations and highlights about Apereo/Jasig CAS, Internet 2 Shibboleth, and Internet 2 Grouper. Unicon Open Source Support development progress and intentions for the next quarter are also shared. http://www.unicon.net/support
Docker is quickly becoming an invaluable development and deployment tool for many organizations. Come and spend the day learning about what Docker is, how to use it, how to integrate it into your workflow, and build an environment that works for you and the rest of your team. This hands-on tutorial will give you the kick-start needed to start using Docker effectively.
Conda is a cross-platform package manager that lets you quickly and easily build environments containing complicated software stacks. It was built to manage the NumPy stack in Python but can be used to manage any complex software dependencies.
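A Conda environment is typically declared in an environment.yml file; the package choices below are just an example:

```yaml
name: scienv
channels:
  - conda-forge
dependencies:
  - python=3.11
  - numpy
  - pandas
```

Create and activate it with `conda env create -f environment.yml` followed by `conda activate scienv`.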
Bring Your Own Container: Using Docker Images In ProductionDatabricks
Condé Nast is a global leader in the media production space housing iconic brands such as The New Yorker, Wired, Vanity Fair, and Epicurious, among many others. Along with our content production, Condé Nast invests heavily in companion products to improve and enhance our audience’s experience. One such product solution is Spire, Condé Nast’s service for user segmentation and targeted advertising for over a hundred million users.
While Spire started as a set of Databricks notebooks, we later utilized DBFS for deploying Spire distributions in the form of Python wheels, and more recently we have packaged the entire production environment into Docker images deployed onto our Databricks clusters. In this talk, we will walk through the process of evolving our Python distributions and production environment into Docker images, and discuss where this has streamlined our deployment workflow, where there were growing pains, and how to deal with them.
There's so much happening in the .NET ecosystem nowadays. During this session, we are going to discuss innovations which are applicable for all .NET stacks – desktop, mobile, cloud and Web. We will be talking about the new standard way of creating .NET libraries - .NET Standard, about the massive changes in the project and build sub-systems brought by Visual Studio 2017 and NuGet 4.0.
ASP.NET vNext is the next major version of .NET on the server. It's a completely new way to work, with awesome possibilities: it contains a new flexible and cross-platform runtime, a new modular HTTP request pipeline, a cloud-ready CLR, a unified programming model that combines MVC, Web API, and Web Pages, a no-compilation dev experience, the ability to self-host or host on IIS, and more.
Best of all : it’s Open source in GitHub (https://github.com/aspnet/Home)
Moby is an open source project providing a "LEGO set" of dozens of components, the framework to assemble them into specialized container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.
One of these assemblies is Docker CE, an open source product that lets you build, ship, and run containers.
This talk will explain how you can leverage the Moby project to assemble your own specialized container-based system, whether for IoT, cloud or bare metal scenarios.
We will cover Moby itself, the framework, and tooling around the project, as well as many of its components: LinuxKit, InfraKit, containerd, SwarmKit, and Notary.
Then we will present a few use cases and demos of how different companies have leveraged Moby and some of the Moby components to create their own container-based systems.
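One such assembly is a LinuxKit image, declared in YAML as a kernel plus a stack of components; the component versions below are placeholders, so check the linuxkit repository for current ones:

```yaml
# Sketch of a LinuxKit image definition; versions are placeholders.
kernel:
  image: linuxkit/kernel:5.10.104
  cmdline: "console=tty0"
init:
  - linuxkit/init:v0.8
  - linuxkit/runc:v0.8
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:v0.8
services:
  - name: getty
    image: linuxkit/getty:v0.8
```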
Video at https://www.youtube.com/watch?v=kDp22YkD6WY
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I have been wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud or on-premise strategy we may need to apply it to our own infrastructure and get it working from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
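The JMeter side of this integration is usually a Backend Listener configured with the bundled InfluxDB client. The parameter names below match the stock InfluxdbBackendListenerClient; the URL and application name are placeholders:

```
Backend Listener implementation:
  org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient

influxdbMetricsSender = org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
influxdbUrl           = http://localhost:8086/write?db=jmeter
application           = demo-app      # placeholder
measurement           = jmeter
summaryOnly           = false
percentiles           = 90;95;99
```

Grafana then queries the `jmeter` measurement in InfluxDB to chart response times and throughput in real time.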
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
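One of the Object Calisthenics constraints, "wrap all primitives", maps directly onto DDD value objects. A minimal Python sketch; the Quantity type is invented for illustration:

```python
class Quantity:
    """Wrap the primitive (an Object Calisthenics rule) so the domain
    constraint lives in one place instead of scattered int checks."""

    def __init__(self, value: int):
        if value < 0:
            raise ValueError("quantity cannot be negative")
        self.value = value

    def add(self, other: "Quantity") -> "Quantity":
        # Returning a new instance keeps the value object immutable.
        return Quantity(self.value + other.value)

line_total = Quantity(2).add(Quantity(3))
print(line_total.value)  # 5
```

The constraint forces invariants like non-negativity into the type itself, which is exactly what a tactical DDD value object is for.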
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Packaging a Python application after you messed up - Roman Prykhodchenko
1. Packaging a Python application after you messed up
Roman Prykhodchenko, Allegro
@romcheg
me@romcheg.me
November 2019, Warsaw
www.devopsdays.pl
2. DevOpsDays Warsaw 2019
DCIM system: Ralph
• Maintained and used in house
• Heavily modified django-admin
• Python + HTML + JS
• Open-source + private extensions
• In house: Ubuntu + Docker image
• In community: mostly Ubuntu + deb package, but...
3. Problem
• Community builds are often broken
• Troubleshooting takes more time than we have
• No motivation to support the community
4. Reasons
• Two separate delivery pipelines
  • TravisCI: Debian package built and published once a week
  • Bamboo: Docker image with private extensions, available on demand
• Different artefacts in community and local packages
• Supporting the community is not the team's #1 priority
6. Package is...
• ... a file with .rpm or .deb at the end
• ... a python wheel
• ... a tarball with source code and a Makefile
• ... a container image?! 0_o
7. Package is a distributable set of idempotent alterations bringing desired artefacts and state changes to a target system.
11. Target platforms
• Mutable targets: runtime dependencies are available in vendor repositories.
• Immutable targets: runtime dependencies are shipped within the package.
19. Other options?
• Copy dependencies to the source tree
• Keep wheels in the source tree
• Package a virtual environment 0_o
  • No source-code changes required
  • Use standard management tools
  • Gradual migration to system packages
20. Virtualenv
• Create one
• Install python stuff
  • Required packages
  • Source code
• Fix the symlinks
• Pack the virtualenv into the deb package
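The steps on this slide can be sketched in shell roughly as follows. This is a minimal sketch, not Ralph's actual build script: the /opt/ralph target path and the package layout are illustrative assumptions, and the pip-install and dpkg-deb steps are left as comments.

```shell
#!/bin/sh
# Sketch: build a virtualenv inside a staging tree that becomes a .deb.
set -eu

PKGROOT="$(mktemp -d)"     # staging tree that becomes the .deb contents
TARGET=/opt/ralph          # illustrative: where the venv lives on the target host

# 1. Create the virtualenv inside the staging tree
python3 -m venv "$PKGROOT$TARGET"

# 2. Install python stuff: required packages, then the source code, e.g.:
#    "$PKGROOT$TARGET/bin/pip" install -r requirements.txt .

# 3. Fix the shebangs: generated scripts point at the staging path,
#    not at the path the venv will have on the target system
for script in "$PKGROOT$TARGET"/bin/*; do
  [ -f "$script" ] && [ ! -L "$script" ] || continue   # skip symlinks (python3)
  head -c 2 "$script" | grep -q '^#!' || continue      # only script files
  sed -i "1s|$PKGROOT||" "$script"
done

# 4. Pack the staging tree into a .deb, e.g.:
#    dpkg-deb --build "$PKGROOT" ralph.deb
echo "$PKGROOT"
```

Tools like dh_virtualenv automate exactly these path fix-ups, which is why the talk ends up recommending it over a hand-rolled script.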
23. Deb package – summary
• Use all matching requirements from the vendor's repository
• Use dh_virtualenv
• Put the rest of the requirements along with the source code into the virtualenv
• Gradually migrate the code to libraries available in the vendor's repositories
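With dh_virtualenv the Debian side can stay tiny; a minimal debian/rules looks roughly like this (a sketch -- the package name, target path, and dependency pinning live in debian/control and debian/*.triggers in a real build):

```make
#!/usr/bin/make -f
# Minimal debian/rules delegating the build to dh_virtualenv, which
# creates the venv, installs requirements, and rewrites the shebangs.
%:
	dh $@ --with python-virtualenv
```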
24. Docker image – extra layer of complexity
• Supporting different container orchestrators
• Configuration without breaking immutability
• Performing operations without entering running containers
• Serving static files
25. Startup and operations
• Single entry point script
  • Acts like a facade for all entry points
  • By default starts the service
• Avoid exposing the filesystem structure
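Such a facade entry point can be sketched as a small dispatcher. The subcommand names below are illustrative assumptions, and the real service/management commands are replaced here by placeholder functions:

```shell
#!/bin/sh
# Sketch of a facade-style entry point: one script fronts every operation,
# so users never need to know the filesystem layout inside the image.
set -eu

serve()   { echo "starting the service"; }     # placeholder for the real server
migrate() { echo "applying migrations"; }      # placeholder one-off operation

main() {
  case "${1:-serve}" in
    serve)   serve ;;       # default action: start the service
    migrate) migrate ;;     # run without entering the container
    *)       exec "$@" ;;   # escape hatch: run an arbitrary command
  esac
}

main "$@"
```

Operations then become `docker run image migrate` instead of exec-ing into a running container.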
26. Configuration
• Select a few essential configuration options and define environment variables for each
• The entry point script puts the values of those variables into configuration files
• Those who need to supply advanced configuration should mount the entire configuration as a volume
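The env-var-to-config-file step can be sketched like this; the variable names, defaults, and file layout are illustrative assumptions, not Ralph's real settings:

```shell
#!/bin/sh
# Sketch: at startup, render a few essential environment variables into a
# config file, so the image stays immutable but basics remain tunable.
set -eu

CONF="${CONF:-$(mktemp)}"   # a real image would write under /etc

cat > "$CONF" <<EOF
[database]
host = ${DATABASE_HOST:-localhost}
port = ${DATABASE_PORT:-5432}

[web]
workers = ${WEB_WORKERS:-4}
EOF
```

Anything beyond these essentials is handled by mounting a complete configuration file over the generated one.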
27. Static files
• Build a separate image with static files
• Use a lightweight image like nginx as a base
• Users will route requests to one of the containers, depending on the container orchestrator
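One possible shape for such an image is a two-stage Dockerfile; this is a sketch under assumptions (base images, paths, and the use of Django's collectstatic are illustrative, not Ralph's actual build):

```dockerfile
# Stage 1: collect the static files with the application's own tooling
FROM python:3.8 AS build
COPY . /src
WORKDIR /src
RUN python manage.py collectstatic --noinput

# Stage 2: serve them from a lightweight nginx base image
FROM nginx:alpine
COPY --from=build /src/static /usr/share/nginx/html/static
```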
28. Docker image – summary
• Custom entry point acting like a facade
• Essential configuration options as environment variables
• Advanced configuration is done by mounting configuration as a volume
• Static files are available in a separate image
• Publish both images at the same time
Let me quickly brief you on the background: I am an engineer on a team supporting a certain part of the technical platform at our company, and our team had a problem!
Running our services requires an extensive infrastructure built from a few thousand assets. To handle that, we created the DCIM system Ralph, with the main goal of being the software that fulfils our internal needs.
At some point it was published under the Apache license just because... and somehow it gained some popularity.
For the company, Ralph is not a product, so the team's focus must stay on in-house needs, including maintenance of other systems. Open-sourcing this sort of software is debatable, yet here we are and there's no way back.
Distributable -- that means all the constraints apply.
Immutable targets -- you should ship the runtime dependencies yourself.
Mutable targets -- you may fetch dependencies from a trusted source.
Then we gave it another thought and realised the cornerstone of the mess is the last issue; as a matter of fact, it's the only one on the list we cannot fix, since it is a direct consequence of the company's business model.
Following the DevOps philosophy means taking responsibility for more than just the development, so we could not ignore that.
The basic idea was that whatever is given to the community should also be used by us.
That required making several unpopular, yet necessary, changes to the initial plan -- and the first of them was to sacrifice the idea of RPM builds.
Moreover, we had to select only one Debian-based distribution of GNU/Linux: the one used in the vast majority of use cases.
A Django application often ends up depending on a ton of libraries from the Django ecosystem, and the combination often works only when all requirements fall within very specific version ranges.
External Python libraries are available in Canonical's repositories under certain names.
Canonical maintains those packages and provides patches and security updates.
Yet those libraries have specific versions that won't match the requirements of the Python code.
There is a way to install Python libraries -- pip install / easy_install -- which installs from PyPI or other repositories.
With Django, that's a huge problem.
Python package management is non-deterministic: requirements do not have strict versions, so you never know what gets installed on the target system.
Even if you freeze the dependency tree: installing Python packages may require build tools to compile extensions written in C, which leads to random failures.
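The difference can be illustrated with a requirements file; the version numbers below are made up for the example:

```text
# requirements.txt with ranges -- what you get depends on the day you install:
Django>=2.0,<3.0
django-mptt

# after "pip freeze" -- reproducible, but every transitive dependency
# is now pinned and has to be bumped by hand:
Django==2.2.8
django-mptt==0.9.1
pytz==2019.3
sqlparse==0.3.0
```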
We have to bring those requirements inside the package.
Copying dependencies to the source tree: huge source tree, long installation, more C libraries required, and changes in the source code may be necessary to fix imports.
Keeping wheels in the source tree: complicated installation process, more steps to update requirements, and standard Python package management tools cannot be used.
Seems funny, but...
It is good as a transitional stage.
Therefore it is necessary to add some sort of script to the pipeline that performs those steps: create the virtualenv, install the packages and the source code, fix the symlinks, and pack everything into the deb package.
Sounds like too much of a hassle for a transitional stage.
When designing a container image, it is necessary to keep in mind that "docker run" is not how containers are usually run in production. Container orchestrators like Kubernetes, or even simple docker-compose, require some work to be done before an image can actually be used.
Docker isolates what's inside, yet it's often necessary to get under the hood, e.g. to configure the software or to perform various operations.
Since the static files are inside the container, serving them won't be as easy.
To summarise: when open-sourcing your software while your human-power is limited, or when priorities do not put the community first, follow these three strategies:
Well-made opinionated software is much better than a weak one that provides many options.
Trying to start with a "by the book" approach is likely to be too time-consuming or too expensive.
Be ready to receive tons of negative feedback or anger.