Ansible is one of a new breed of tools that encompasses configuration management, orchestration and software-defined infrastructure. Find out how companies are spinning up entire environments from source code, including VMs, networks, DNS, firewalls, load balancers and more.
Video: https://www.youtube.com/watch?v=unPVe2pcego
Join DevOps Exchange London here: http://www.meetup.com/DevOps-Exchange-London
Follow DOXLON on twitter http://www.twitter.com/doxlon
Eric Williams (Rackspace) - Using Heat on OpenStack (Outlyer)
A Rackspace talk about how software-defined infrastructure is done on the Rackspace cloud. If you're running OpenStack, this is a great way to learn how to take automation to the next level.
Video: https://www.youtube.com/watch?v=EY-yNymyiIA
"Cooking with Heat" is an introduction to Heat and how to get started integrating OpenStack's infrastructure orchestration into your cloud applications. Presented by Eric Williams for DevOps Exchange London, February 2015
Orchestration across multiple cloud platforms using Heat (CoreStack)
Heat allows the user to write HOT templates that describe the dependencies and flow of the infrastructure resources to be deployed for a specific use case. The Heat engine works out the order in which to orchestrate the execution of the flow defined in the template.
Beyond orchestrating template execution on a single OpenStack platform, Heat can be extended via plug-ins to orchestrate across multiple cloud platforms. Many use cases can be realized with this approach, such as cloud bursting, which involves provisioning and shifting workloads between environments, and a catalog-based approach to templates for orchestrating across multi-cloud environments.
It covers the following:
Heat plugin architecture for orchestrating other clouds
Dynamic Authentication for other cloud platforms
Managing centralized Heat template repository with indexing and search
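As a flavor of what a HOT template looks like, here is a minimal sketch (resource names and property values are illustrative, and exact property names vary between Heat versions). Heat infers the ordering: the server references the port, so the port is created first.

```yaml
heat_template_version: 2015-04-30

resources:
  my_port:
    type: OS::Neutron::Port
    properties:
      network: private

  my_server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.small
      networks:
        - port: { get_resource: my_port }
```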
Autoscaling OpenStack Natively with Heat, Ceilometer and LBaaS (Shixiong Shang)
The Autoscaling OpenStack Natively with Heat, Ceilometer and LBaaS workshop I delivered at the OpenStack Vancouver Summit (May 2015) jointly with Jason and Sharmin from Cisco Systems.
More details can be found at https://github.com/grimmtheory/autoscale
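The native autoscaling pattern can be sketched as a HOT fragment like the following (resource names and property values are illustrative; the complete templates live in the repository linked above): an AutoScalingGroup, a ScalingPolicy, and a Ceilometer alarm whose action fires the policy when average CPU is high.

```yaml
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: cirros
          flavor: m1.small

  scale_up:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      scaling_adjustment: 1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up, alarm_url] }
```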
Operating OpenStack - Case Study in the Rackspace Cloud (Rainya Mosher)
Presentation given in Seoul, South Korea at the Cloud and Data Center Conference in March 2014. It introduces the concept of the Rackspace Hybrid Cloud Experience and the product platforms used to make that happen, then focuses on the operation and deployment of the Public Cloud.
Automating Application over OpenStack using Workflows (Yaron Parasol)
OpenStack Heat is gaining momentum as a DevOps tool to orchestrate the creation of OpenStack cloud environments. Heat is based on a DSL describing simple orchestration of cloud objects, but it lacks a richer representation of middleware and application components, as well as more complex deployment and post-deployment orchestration workflows. The Heat community has started discussing a higher-level DSL that will support not just infrastructure components.
This session will present a further-extended suggestion for a DSL based on the TOSCA specification, which covers broader aspects of application behavior and deployment, such as installation, configuration management, continuous deployment, auto-healing and scaling. We will also share some of our thoughts on how this DSL can interface with native OpenStack projects such as Heat, Keystone and Ceilometer.
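To give a feel for the kind of DSL being discussed, here is an illustrative fragment in the spirit of the TOSCA simple profile (the node types, script paths and exact syntax are examples, not a complete or authoritative definition):

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    app_server:
      type: tosca.nodes.Compute

    web_app:
      type: tosca.nodes.WebApplication
      requirements:
        - host: app_server
      interfaces:
        Standard:
          create: scripts/install.sh
          configure: scripts/configure.sh
```

Unlike a plain infrastructure template, the lifecycle hooks (create, configure) let the DSL describe application installation and configuration management, not just resource provisioning.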
Deploying and Managing Red Hat Enterprise Linux in Amazon Web Services (DLT Solutions)
The Federal Cloud First policy mandates that agencies take full advantage of cloud computing benefits to maximize capacity utilization, improve IT flexibility and responsiveness, and minimize cost. But how can you safely and reliably begin to deploy and manage your Red Hat instances at cloud scale? With IT automation, you can more easily deploy and manage your Red Hat instances in the Amazon Web Services (AWS) public cloud.
In this webinar, we’ll demonstrate how to:
Automate the creation of Red Hat Enterprise Linux-based AWS instances
Apply a security baseline to the instances
Deploy and manage an application
Regardless of where you are in the cloud adoption process, leveraging IT automation can help smooth the transition to the cloud. Join the webinar to learn how.
Best Practices of Infrastructure as Code with Terraform (DevOps.com)
When your organization is moving to the cloud, the infrastructure layer transitions from running dedicated servers at limited scale to a dynamic environment, where you can easily adjust to growing demand by spinning up thousands of servers and scaling them down when not in use.
The future of DevOps is infrastructure as code. Infrastructure as code supports the growth of infrastructure and provisioning requests. It treats infrastructure as software: code that can be re-used, tested, automated and version controlled. HashiCorp Terraform applies infrastructure as code throughout its tooling to prevent configuration drift, manage immutable infrastructure and much more.
Join this webinar to learn why infrastructure as code is the answer to managing large-scale distributed systems and service-oriented architectures. We will cover key use cases, a demo of how to use infrastructure as code to provision your infrastructure, and more.
Agenda:
Intro to Infrastructure as Code: Challenges & Use cases
Writing Infrastructure as Code with Terraform
Collaborating with Teams on Infrastructure
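As a minimal sketch of the Terraform workflow (the provider, region and AMI id here are illustrative placeholders): the desired state is declared in code, `terraform plan` shows the diff against reality, and `terraform apply` converges the infrastructure to it.

```hcl
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  # Illustrative AMI id -- substitute one valid in your region.
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"

  tags = {
    Name = "web-1"
  }
}
```

Because this file can be version controlled and reviewed like any other code, re-running `terraform apply` also detects and corrects configuration drift.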
Heat is the main project in the OpenStack Orchestration program. It implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code. A native Heat template format is evolving, but Heat also endeavours to provide compatibility with the AWS CloudFormation template format, so that many existing CloudFormation templates can be launched on OpenStack. Heat provides both an OpenStack-native REST API and a CloudFormation-compatible Query API.
(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014 (Amazon Web Services)
Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us on our journey as we build a big data application in real-time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. We review architecture design patterns for big data solutions on AWS, and give you access to a take-home lab so that you can rebuild and customize the application yourself.
In this slideshare we introduce the basic concepts of simple REST applications with Python and present some examples; see our GitHub repository. In addition, we'll go under the hood to see how Hammock provides abstraction, and I'll also show simple benchmarks that measure the library overhead.
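The Hammock library wraps `requests` so that attribute access and calls build up a URL path. The class below is not the real library, just a self-contained sketch of that chaining idea:

```python
class ChainedURL:
    """Toy illustration of Hammock-style chained URL building."""

    def __init__(self, base):
        self._base = base.rstrip("/")

    def __getattr__(self, name):
        # api.users -> a new ChainedURL ending in /users
        return ChainedURL(f"{self._base}/{name}")

    def __call__(self, *segments):
        # api.users("42") -> /users/42
        url = self._base
        for seg in segments:
            url += f"/{seg}"
        return ChainedURL(url)

    @property
    def url(self):
        return self._base

api = ChainedURL("https://api.example.com")
print(api.users("42").orders.url)  # https://api.example.com/users/42/orders
```

The real Hammock additionally forwards `.GET()`, `.POST()` and friends to `requests`, which is where the abstraction pays off.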
Docker and AWS have been working together to improve the Docker experience you already know and love. Deploying from Docker straight to AWS with your existing workflow has never been easier. Developers can use Docker Compose and Docker Desktop to deploy applications on Amazon ECS on AWS Fargate. This new functionality streamlines the process of deploying and managing containers in AWS from a local development environment running Docker. Join us for a hands-on walk through of how you can get started today.
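The workflow is roughly the following (service name and image are illustrative placeholders): an ordinary Compose file, deployed unchanged to ECS by pointing Docker at an ECS context.

```yaml
# docker-compose.yml
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
```

With an ECS context created (`docker context create ecs myecs`) and selected (`docker context use myecs`), running `docker compose up` provisions the equivalent ECS service on Fargate instead of local containers.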
Using HashiCorp’s Terraform to build your infrastructure on AWS - Pop-up Loft... (Amazon Web Services)
Using Terraform to automate your infrastructure on AWS: what Terraform is, how it differs from Ansible, and how to control cloud deployments using Terraform.
Introduction to Apache CloudStack by David Nalley (buildacloud)
Apache CloudStack is a mature, easy to deploy IaaS platform. That doesn't mean that it can be done without thought or preparation. Learn how CloudStack can be most efficiently deployed, and the problems to avoid in the process.
About David Nalley
David is a recovering sysadmin with a decade of experience. He’s a committer on the Apache CloudStack (incubating) project, a contributor to the Fedora Project and the Vice President of Infrastructure at the Apache Software Foundation.
CloudStack Collab Conference 2015: Run CloudStack in Docker (CloudOps2005)
Slides from Pierre-Luc Dion's presentation on what he has learned running CloudStack in Docker at the CloudStack Collaboration Conference in Dublin, October 2015.
Building, Deploying and Managing Microservices-based Applications with Azure P... (CodeOps Technologies LLP)
This presentation covers:
* Setup AKS cluster on Azure
* Deploy a sample microservice-based highly available and scalable app to the cluster
* Set up Azure pipeline for CI and CD
* Automate deployment of the application on Git commit to AKS cluster
Presented as part of Cloud Community Days - 19 June - ccdays.konfhub.com
OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development... (NETWAYS)
Physical, virtual, containers. Public cloud, private cloud, hybrid cloud. IaaS, PaaS, SaaS. These are the choices that we're faced with when architecting a datacenter of today. And the choice is not one or the other; it is often a combination of many of these. How do we remain in control of our datacenters? How do we deploy and configure software, manage change across disparate systems, and enforce policy/security? How do we do this in a way that operations engineers and developers alike can rejoice in the processes and workflow?
In this talk, I will discuss the problems faced by the modern datacenter, and how a set of open source tools including Vagrant, Packer, Consul, and Terraform can be used to tame the rising complexity curve and provide solutions for these problems.
Kyle Bassett from Arctiq (www.arctiq.ca): presentation from the Halifax DevOps Meetup on July 19th, 2017.
Linux Container Platform on Azure
(Kubernetes, OpenShift, Ansible Automation)
Pipeline Automation
(From Code to Containers, Automated CI/CD on Azure)
Murat Karslioglu, VP Solutions @ OpenEBS - Containerized storage for containe... (Outlyer)
What is wrong with stateful workloads on containers today? What is happening in the Linux kernel to improve the security of containers as a platform for storage? Could containers and Kubernetes become the foundations of a new approach to storage? Includes a quick demo of the OpenEBS project.
Video: https://youtu.be/rhx_TnZe_E4
This talk is from the DevOps Exchange San Francisco September Meetup: https://www.meetup.com/DevOps-Exchange-SanFrancisco
Feature flags are a valuable DevOps technique for delivering better, more reliable software faster. Feature flags can be used both for release management (dark launches, canary rollouts, betas) and for long-term control (entitlement management, user segmentation, personalization).
However, if not managed properly, feature flags can become very destructive technical debt. Feature flags need to be managed with visibility and control for both engineering and business users.
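A minimal sketch of the release-management side (the flag store and rollout logic are illustrative, not any particular vendor's API): hashing the flag/user pair gives each user a stable bucket, so a canary percentage can grow without users flapping between variants.

```python
import hashlib

class FeatureFlags:
    """Toy in-memory flag store with percentage-based canary rollout."""

    def __init__(self):
        self._flags = {}  # flag name -> rollout percentage (0-100)

    def set_rollout(self, name, percentage):
        self._flags[name] = percentage

    def is_enabled(self, name, user_id):
        pct = self._flags.get(name, 0)
        # Stable bucket 0-99 per (flag, user): the same user always
        # sees the same variant for a given flag at a given percentage.
        digest = hashlib.sha256(f"{name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < pct

flags = FeatureFlags()
flags.set_rollout("new-checkout", 0)    # dark launch: code shipped, flag off
flags.set_rollout("new-checkout", 10)   # canary: roughly 10% of users
```

The technical-debt point is exactly why a real system also tracks flag ownership and age, so fully-rolled-out flags get deleted rather than accumulating.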
Why You Need to Stop Using "The" Staging Server (Outlyer)
The old staging methodology is broken for modern development. In fact, the staging server is a leftover from when we built monolithic applications. Find out why microservice architectures are driving ephemeral testing environments, and why dev shops of every size should deliver true continuous deployment.
Staging servers slow down development with merge conflicts, slow iteration loops, and man-hour-intensive processes. To build better software faster, containers and infrastructure as code are key in 2017. DevOps professionals miss this talk at their own peril.
How GitHub combined with CI empowers rapid product delivery at Credit Karma (Outlyer)
Amit and Kashyap will discuss how GitHub and self-service continuous integration (CI) help Credit Karma rapidly deliver new features to over 60 million members. They will review how Credit Karma streamlined and scaled its growing CI needs, stemming from an army of engineers decomposing a monolith into services.
Docker is often used as an end-to-end solution where services are packaged using a Dockerfile, pushed to a container registry and then deployed to a container orchestrator like Kubernetes. In this talk, I would like to show you how Nix, the purely functional package manager, can replace and improve on Docker in the development and build phases of the application lifecycle.
Minimum Viable Docker: our journey towards orchestration (Outlyer)
While Kubernetes and Mesos are all the rage, you don't necessarily need a complex orchestration layer to start using and benefiting from Docker. We will present how Babylon Health is running its dockerised AI microservices in production, pros and cons, and what we have in store for the future.
Ops is the past! DevOps is the present! SRE is for giants! NoOps is the future! Fowler even says that a DevOps Engineer is an anti-pattern!
So will our job disappear in 10 years? What can we do about it? What is the next set of skills that we need? A startup is often a precursor to larger changes. I'll tell you what we are trying to do at Curve, a Fintech startup where developers build Kubernetes clusters and the SRE team codes microservices.
The service mesh: resilient communication for microservice applications (Outlyer)
Modern application architecture is shifting from monolith to microservices: componentized, containerized, and orchestrated with systems like Kubernetes, Mesos, and Docker Swarm. While this environment is resilient to many failures of both hardware and software, applications require more than this to be truly resilient. In this talk, we introduce the notion of a "service mesh": a userspace infrastructure layer designed to manage service-to-service communication in microservice applications, including handling partial failures and unexpected load, while reducing tail latencies and degrading gracefully in the presence of component failure.
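To make "handling partial failures" concrete, here is a toy Python version of the retry-with-backoff behavior a mesh applies to service-to-service calls (purely illustrative: a real mesh does this in a userspace proxy, transparently to the application, typically with retry budgets rather than fixed counts):

```python
import time

def call_with_retries(fn, retries=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff, re-raising on exhaustion."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # budget exhausted: surface the partial failure
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# A stand-in service that fails twice, then succeeds:
attempts = {"n": 0}
def flaky_service():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return "ok"

print(call_with_retries(flaky_service))  # ok
```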
Microservices: Why We Did It (and should you?) (Outlyer)
Mason will present a skeptical, humorous, and practical look at whether companies should consider microservices, and why/not. The story includes the reasons why Credit Karma did make the move, the approach we took, and shares some of our learnings so far.
Renan Dias: Using Alexa to deploy applications to Kubernetes (Outlyer)
It's time to bring voice commands into continuous deployment pipelines. In this talk, Renan will walk you through the steps of setting up a powerful and cutting-edge continuous deployment pipeline, which will allow you to deploy your products to Kubernetes clusters using just your voice. "Alexa, deploy API to production". If you have never imagined yourself doing that, or you have but don't know where to start, this talk is definitely for you.
Alex Dias: how to build a docker monitoring solution (Outlyer)
Alex will be talking about how docker container monitoring was built at Outlyer. He'll be diving into the details behind how you actually monitor everything in such an environment and the challenges that come with it. Namely, how the Docker API, Cgroups, and the Netlink Linux kernel interface can be leveraged to get specific metrics for each container.
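As a flavor of the cgroups part (a sketch using the cgroup v1 file layout; the talk also covers the Docker API and Netlink): per-container memory metrics are exposed as plain-text "key value" files such as memory.stat under /sys/fs/cgroup, so collecting them is mostly parsing:

```python
def parse_cgroup_stat(text):
    """Parse a cgroup stat file ("key value" per line) into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            stats[key] = int(value)
    return stats

# Example contents of /sys/fs/cgroup/memory/docker/<container-id>/memory.stat
sample = "cache 4096\nrss 1048576\npgfault 123"
print(parse_cgroup_stat(sample)["rss"])  # 1048576
```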
How to build a container monitoring solution - David Gildeh, CEO and Co-Found... (Outlyer)
David will be talking about how he's built the container monitoring at Outlyer. He'll also be diving into the details behind how you actually monitor everything in a container environment and the challenges that come with it.
Heresy in the church of - Corey Quinn, Principal at The Quinn Advisory Group (Outlyer)
Docker (and by extension, microservices based architecture) has expanded our horizons with respect to how the industry builds and supports applications at scale. It’s changed the way we think about our code, what production looks like, and how we live. But in our rush to embrace this exciting new paradigm, are we throwing away the lessons of the past?
In this entertaining and somewhat irreverent talk, Corey presents the "other side" of the containerization craze: how configuration management fits into a world consumed by the Docker Docker Docker madness, how "containers all the way down" can let you down when you least expect it, and how promising technologies should perhaps be vetted a bit more thoroughly before you try to run critical services on top of them.
Anatomy of a real-life incident - Alex Solomon, CTO and Co-Founder of PagerDuty (Outlyer)
Major incidents can be very stressful, frustrating and chaotic experiences, especially if the on-call responders lack the proper process, training and coordination.
In this talk, we will walk through a real incident from PagerDuty’s own history, to illustrate what an effective incident response looks like. We will recreate the incident timeline step by step and go over all of the different roles involved, including the incident commander, scribe, customer/business liaison and subject matter experts. We will also cover the process and tooling needed to respond quickly and effectively to major incidents in order to minimize customer and business impact.
A Holistic View of Operational Capabilities—Roy Rapoport, Insight Engineering... (Outlyer)
Roy Rapoport will discuss the framework Insight Engineering at Netflix uses to think about the real-time operational insight space, the capabilities that any successful organization will eventually need in that space, and what Netflix has done in pursuit of addressing these needs at extremely large scale.
The Network Knows—Avi Freedman, CEO & Co-Founder of Kentik (Outlyer)
Apps generate the traffic, but the network delivers it. Many devops and netops stacks are completely separate, but it doesn't have to be that way!
In this talk we'll talk a bit about network traffic telemetry - sources, tools, and methods - and show how that data can be linked to metric, log, and APM systems.
Building a production-ready, fully-scalable Docker Swarm using Terraform & Pa... (Outlyer)
Bobby is a Consultant DevOps Engineer who currently works with UK Cloud’s clients to help them understand DevOps, how to improve their automation and migrate to a cloud-native environment. Bobby has over twenty years of experience working with the web and has most recently been working with public sector clients on their latest projects.
On the surface, the tech behind a payments API may look like any other startup’s. You'll probably find some Rails apps, a database, and a bunch of stuff off to the sides to glue it together. GoCardless found it's mostly not the tech that differs, but the approach.
Using their high-availability Postgres cluster as a running example, they explore how reliability became so important to them, and dive into the most recent feature they built into the cluster: zero-downtime patch upgrades.
DOXLON November 2016: Facebook Engineering on cgroupv2 (Outlyer)
Cgroupv1 (or just "cgroups") has helped revolutionize the way that we manage and use containers over the past 8 years. In kernel 4.5, a complete overhaul is coming -- cgroupv2. This talk will go into why a new control group system was needed, the changes from cgroupv1, and practical uses that you can apply to improve the level of control you have over the processes on your servers.
DOXLON November 2016 - ELK Stack and Beats (Outlyer)
Jon Hammant, Head of Cloud & DevOps for UK & EU for Epam Systems, presented an overview of using the ELK stack together with the Beats Plugin data shippers to provide detailed system metrics, network traffic, file analysis, and more. In addition, he provided an overview of how to monitor multiple Docker containers in a cloud native environment, with logs sent back to a central host.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information Retrieval (IR) is undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we deliberate on the sociotechnical implications of generative AI for information access. We argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center its research agendas on societal needs, while dismantling the artificial separation between work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains only materialise when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Paul Angus (ShapeBlue) - Push infrastructure with Ansible #DOXLON
1. Push Infrastructure with Ansible
Paul Angus, Cloud Architect & Chief Technology Strategist, ShapeBlue
paul.angus@shapeblue.com
Twitter: @CloudyAngus
2. @CloudyAngus
Push Infrastructure with Ansible
• How we use Ansible to push out entire cloud infrastructures
• A bit of background
• How we use Ansible
• Workflow
• Code snippets
3. @CloudyAngus
About Me
• Cloud Architect & Chief Technology Strategist for ShapeBlue
• Apache CloudStack Committer
• Specialise in designing and deploying enterprise and public clouds, and helping organisations use their cloud
• Involved with CloudStack before its donation to the Apache Foundation
• Designed clouds for Orange, TomTom, PaddyPower, Ascenty, BSkyB
Unofficial Ansible Evangelist/Cheerleader
4. @CloudyAngus
About ShapeBlue
“ShapeBlue are expert builders of public & private clouds. They are the leading global Apache CloudStack integrator & consultancy.”
7. @CloudyAngus
What is CloudStack
CloudStack is an open source IaaS platform. It is hypervisor agnostic: KVM, vSphere, XenServer, LXC, OVM, Baremetal.
CloudStack orchestrates hypervisors and network appliances to give simple control of complex tasks through an API or web GUI.
(Yes, it’s like OpenStack.)
8. @CloudyAngus
Who uses CloudStack
Think ‘your own Amazon Web Services’.
• Public clouds (SPs/MSPs): the general public can create and log into instances themselves
• Private clouds (Enterprises): anyone who wants to be able to orchestrate their environment
• Hybrid clouds (Enterprises): balance/share load between their own DC and a public cloud
13. @CloudyAngus
Why
“Building CloudStack environments using Ansible? Are you just having fun with Ansible?”
Talented Cloud Architect: “Noooo, if we can automate the building of environments using a powerful, simple and agentless technology, we can make building at scale easy while ensuring that our results are consistent and repeatable.”
14. @CloudyAngus
Why
CEO: “That would be excellent. Go ahead. Oh, and here’s a pay rise.”
15. @CloudyAngus
Why (Disclaimer)
Some of that might actually have happened.
16. @CloudyAngus
Why
CSForge™
• CSForge delivers the rapid deployment of a standardised CloudStack-powered IaaS cloud for small production deployments, or medium-scale POCs or pilots. The framework can be used as a basis for public cloud or enterprise private cloud deployments.
Production
• Cloud-scale environments: initial deployment typically 24 – 100s of hosts
• Often multi-hypervisor
17. @CloudyAngus
Why
Test/Dev: need to be able to create full environments to test:
• CloudStack release candidates
• CloudStack features
• ShapeBlue patches
18. @CloudyAngus
Why Ansible
Technical:
• Client/server architecture not required
• Only SSH connectivity required (password or public/private keys)
• …making it easier to build virgin environments
• Modules can be in any language capable of returning JSON or key=value text pairs
• Has an API
User:
• Much shallower learning curve
• No need to learn a programming language (e.g. Ruby)
• Not as many pre-existing playbooks (recipes/manifests) about, but improving with Ansible Galaxy
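To make the agentless point above concrete: a target only needs to be reachable over SSH to be managed; there is nothing to install on it first. The hostname and user below are made-up examples, not taken from the talk.

```ini
# inventory (INI format): plain SSH endpoints, no agent required on them
[management_hosts]
mgmt01.example.com ansible_ssh_user=root
```

An ad-hoc connectivity check such as `ansible -i inventory management_hosts -m ping` then works with nothing more than SSH access (password or key) and Python on the target.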
19. @CloudyAngus
Typical Logical Production Topology
[Diagram: management hosts (load balancers, DNS/NTP servers, MySQL master & slave, ACS management servers, deployment server) on a 1GB active/passive management bond; compute hosts carrying management, guest and public networks plus 10GB LACP storage bonds; storage nodes on 10GB LACP bonds with a storage link to the ACS managers; CIMC/iDRAC out-of-band management on 1GB; WWW uplink.]
20. @CloudyAngus
Client Test Environment
• 3 zones
• 2 geographic locations
• Upgrade done, then tests run for a week. Then VRs restarted.
[Diagram: CCP 3.0.7B and CPBM 2.2, each backed by MySQL; Zone 1 (local) and Zone 2 (local), each with three ESXi hosts, a vCenter Appliance and NFS storage; Zone 3 (remote) reached over VPN, likewise with three ESXi hosts, a vCenter Appliance and NFS.]
22. @CloudyAngus
CloudStack Management Server Role

# Copyright (C) ShapeBlue Ltd - All Rights Reserved
# Unauthorized copying of this file, via any medium is strictly prohibited
# Proprietary and confidential
# Released by ShapeBlue <info@shapeblue.com>, April 2014
---
- name: Ensure selinux python bindings are installed
  yum: name=libselinux-python state=present

- name: Ensure the Apache Cloudstack Repo file is configured
  template: src=cloudstack.repo.j2 dest=/etc/yum.repos.d/cloudstack.repo

- name: Ensure selinux is set to permissive
  command: setenforce permissive
  changed_when: false

- name: Ensure selinux is set permanently
  selinux: policy=targeted state=permissive

- name: Ensure CloudStack packages are installed
  yum: name=cloudstack-management state=present

- name: Ensure MySQL Client is present
  yum: name=mysql state=present

- name: Ensure vhd-util is present
  get_url: url="{{ vhdutil_url }}" dest=/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util mode=0755

- name: Ensure CloudStack Usage Service is installed
  yum: name=cloudstack-usage state=present

- name: Ensure CloudStack Usage Service is started
  service: name=cloudstack-usage state=started

- include: ./../galera-cluster/tasks/main.yml
  when: "{{ db_type }} == 'galera'"

- include: ./setupdb.yml
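Tasks like these normally sit in a role that a top-level playbook applies to a host group. A minimal sketch, assuming a role named cloudstack-management and a management_hosts group (both names are illustrative, not from the talk):

```yaml
---
# site.yml: apply the management-server role to the management hosts
- hosts: management_hosts
  roles:
    - cloudstack-management
```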
23. @CloudyAngus
cloudstack.repo.j2

# Copyright (C) ShapeBlue Ltd - All Rights Reserved
# Unauthorized copying of this file, via any medium is strictly prohibited
# Proprietary and confidential
# Released by ShapeBlue <info@shapeblue.com>, April 2014
[cloudstack]
name=cloudstack
baseurl=http://{{ acs_build_repo }}/{{ acs_build_path }}
enabled=1
gpgcheck=1
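For a sense of what the template module produces: with hypothetical values acs_build_repo=repo.example.com and acs_build_path=builds/acs/4.4 (invented for illustration), the rendered /etc/yum.repos.d/cloudstack.repo would read:

```ini
[cloudstack]
name=cloudstack
baseurl=http://repo.example.com/builds/acs/4.4
enabled=1
gpgcheck=1
```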
25. @CloudyAngus
Building Environments
1. Create bare VM + yum install git
2. git clone mega repo with roles etc.
3. Install & configure Ansible (included in repo)
4. Update hosts and group_vars
5. Create Deployment server (from role) locally
6. PXE boot hosts/mgmt VMs to bare OS
7. Push application configuration to VMs and Hosts
34. @CloudyAngus
kickstart.j2

# Specifies the keyboard layout
keyboard {{ keyboard_lang }}
# Used with an HTTP install to specify where the install files are located
url --url http://{{ hostvars[inventory_hostname]['mgmt_ip'] }}/{{ centos_iso_version }}
# Assign an IP address upon first boot & set the hostname
network --onboot yes --device {{ ks_device }} --bootproto dhcp --noipv6
# Set the root password
rootpw {{ mgmt_root_password }}
mkdir /root/.ssh
curl http://{{ hostvars[inventory_hostname]['mgmt_ip'] }}/{{ publickey_file_name }} >> /root/.ssh/authorized_keys
36. @CloudyAngus
DNS Entries into Zone File – Jinja2
• Reverse Zone format:
2 IN PTR blah1.domain.com.
; Address Records
{% for host in groups['management_hosts'] %}
{{ hostvars[host]['mgmt_ip']|split('.')[3] }} IN PTR {{ hostvars[host]['hostname'] }}.
{% endfor %}
{% for host in groups['xenserver_hosts'] %}
{{ hostvars[host]['mgmt_ip']|split('.')[3] }} IN PTR {{ hostvars[host]['hostname'] }}.
{% endfor %}
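The template leans on a custom `split` filter to extract the last octet of each management IP for the PTR record. Outside Ansible, the same transformation is plain Python; the host names and IPs below are invented for illustration:

```python
# Mimic one pass of the reverse-zone template loop (illustrative data)
hosts = {"mgmt01": "10.1.1.21", "xen01": "10.1.1.31"}

for hostname, mgmt_ip in hosts.items():
    last_octet = mgmt_ip.split(".")[3]   # what the custom 'split' filter does
    print(f"{last_octet} IN PTR {hostname}.")
# prints:
#   21 IN PTR mgmt01.
#   31 IN PTR xen01.
```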
37. @CloudyAngus
Custom Filter
• Custom filter to split a string:

import re

def split_string(string, separator=' '):
    return string.split(separator)

def split_regex(string, separator_pattern):
    return re.split(separator_pattern, string)

class FilterModule(object):
    ''' A filter to split a string into a list. '''
    def filters(self):
        return {
            'split': split_string,
            'split_regex': split_regex,
        }
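As a quick sanity check of what the two helpers return, they can be run standalone outside Ansible (the inputs here are arbitrary examples):

```python
import re

# Standalone copies of the filter plugin's helpers, for a quick check
def split_string(string, separator=' '):
    return string.split(separator)

def split_regex(string, separator_pattern):
    return re.split(separator_pattern, string)

print(split_string("192.168.10.5", "."))       # ['192', '168', '10', '5']
print(split_regex("eth0,eth1;bond0", "[,;]"))  # ['eth0', 'eth1', 'bond0']
```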
38. @CloudyAngus
ansible.cfg
• Add path if required:

# set plugin path directories here, separate with colons
action_plugins = /usr/share/ansible_plugins/action_plugins
callback_plugins = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
vars_plugins = /usr/share/ansible_plugins/vars_plugins
filter_plugins = /usr/share/ansible_plugins/filter_plugins:/CSForge/custom_plugins/filter_plugins