Azure Automation wants you to automate everything, everywhere. Hybrid Workers allow Azure Automation to reach new places within your infrastructure, allowing for more automation and less complexity. Learn how to deploy Hybrid Workers, balance automation workloads across groups of workers, trigger jobs via webhooks, monitor jobs, remove scheduled tasks and much more.
Global Azure Bootcamp 2016 - Azure Automation Invades Your Data Centre - kieranjacobsen
My talk in Prague focused on the challenges we had with Code Deployments in the past and how we managed to solve them by leveraging AWS as our backbone.
Flynn Bundy - 60 micro-services in 6 months - WinOps Conf
In this talk, I want to take the audience on a journey of how we (Coolblue) migrated 60 .Net micro-services to the AWS Cloud. This talk covers the highs, lows and everything in between when working in a multi-disciplinary Developer / Operations Cloud team. This talk will cover the evolution of our processes and toolsets to align with Chaos Engineering best practices. Most importantly, I want to highlight how we changed the way we thought about services and servers in general.
The key takeaways from this talk would be related to:
Continuous Inspection (TeamCity)
Continuous Deployment (Octopus Deploy)
Infrastructure as Code (CloudFormation)
Chaos Engineering (Chaos Monkey)
Monitoring and Logging (Datadog and Splunk)
.Net and .Net Core (on Windows Server 2016)
Automation in AWS Cloud
What does Serverless mean for DevOps, in practical terms? While Serverless does reduce the need for server-centric DevOps, it poses new challenges in many areas including security, app deployment and cloud resource provisioning, partly due to an explosion of "nanoservices". Based on a current project using AWS, we cover relevant tools, techniques and tips to deliver a smooth serverless experience for development through to production.
Delivered at Bristol DevOps meetup, 27 Jun 2018. To see detailed notes covering extra points not on slides, click the Notes link just below (or download the PowerPoint).
Update: here's the correct link for Gojko Adzic talk on the Backendless slide - https://www.youtube.com/watch?v=w7X4gAQTk2E
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
Infrastructure Automation on AWS using a Real-World Customer Example - API Talent
This technical session focuses on a customer use case and how using the AWS Cloud together with automation has enabled them to standardise and automate their systems.
This talk will describe how this is achieved with two tools, CloudFormation and Puppet. CloudFormation is a declarative templating language that enables the deployment of environments in a standardised way. Combining it with a configuration management tool like Puppet allows for the automation of ongoing software deployments and maintenance in a low-overhead manner. Puppet is a configuration management tool that installs and configures software on instances. Taken together, a complete system can be built from the ground up.
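As a hedged illustration of the declarative pattern described above (the resource name, AMI ID and Puppet master hostname are placeholders, not details from the talk), a CloudFormation template can launch an instance and hand ongoing configuration over to Puppet:

```yaml
# Minimal CloudFormation sketch: one EC2 instance bootstrapped to run the
# Puppet agent on first boot. All identifiers below are illustrative.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678          # placeholder AMI ID
      InstanceType: t3.micro
      UserData: !Base64 |
        #!/bin/bash
        # One-shot Puppet run; ongoing maintenance is then driven by Puppet
        puppet agent --server puppet.example.internal --onetime --no-daemonize
```

The division of labour matches the talk's description: CloudFormation declares what exists, Puppet converges what runs on it.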
Adcloud TechTalk #5
Introducing how yoochoose.com uses Asgard for its recommender system
http://dev.adcloud.com/blog/2013/02/27/asgard/
http://www.yoochoose.com
Rainbows, Unicorns, and other Fairy Tales in the Land of Serverless Dreams - Josh Carlisle
When done correctly, Serverless offers fantastic potential but can also lead to spectacular failure when critical concepts are overlooked. With over a dozen Serverless implementations on Azure Functions over the last couple of years, I’ve learned some lessons the hard way. In this talk, I will be sharing a few of the most impactful hard-earned lessons and how I was able to overcome them. I’ll be touching on topics ranging from considerations using traditional relational databases, managing service and data connections to managing complexity and increasing observability. The talk is done in the context of Azure Functions, but the concepts apply equally to all Serverless platforms.
Cleaning out your IT Closet - Offloading Infrastructure and Headaches to Windows Azure IaaS. SharePoint Saturday Redmond Presentation. Learn how an Azure Virtual Private Network can help you move your servers into the cloud, including entire SharePoint farms.
Dissection of the arguments against using public cloud providers from the Chef Compliance event in Dallas April 25, 2016. Compared and contrasted benefits of AWS vs. Azure vs. GCP.
Join me for the presentation where a blue screen of death is the desired result! MS15-034 was a particularly interesting vulnerability that turned out to have more bark than bite. Using PowerShell to test for MS15-034 presents us with a number of unique challenges; the solution is to look at a lower level, with TCP connections. This presentation will discuss MS15-034, what the vulnerability was, and how we can exploit it. Learn about working directly with TCP connections in PowerShell and the ins and outs you need to know.
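The low-level approach the abstract alludes to can be sketched in a few lines: the MS15-034 check is an ordinary HTTP request with an enormous Range header, written directly to a TCP socket. This is a minimal sketch, not the presenter's script; the host name is a placeholder, and you should only ever probe systems you own.

```python
import socket

def build_ms15_034_probe(host: str) -> bytes:
    """Build the HTTP request whose oversized Range header exercises MS15-034."""
    return (
        "GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Range: bytes=0-18446744073709551615\r\n"   # 2^64 - 1: the telltale range
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def probe(host: str, port: int = 80, timeout: float = 5.0) -> str:
    """Send the probe over a raw TCP connection and return the HTTP status line."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(build_ms15_034_probe(host))
        status_line = sock.recv(1024).split(b"\r\n", 1)[0]
    return status_line.decode("ascii", errors="replace")
```

Inspecting only the status line keeps the check non-destructive: a patched and an unpatched IIS answer this request differently, so there is no need to actually trigger the blue screen to detect the vulnerability.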
PowerShell, the must-have tool and the long-overlooked security challenge. Learn how PowerShell’s deep integration with the Microsoft platform can be utilized as a powerful attack platform within the enterprise space. Watch as a malicious actor moves from a compromised end user PC to the domain controllers and learn how we can begin to defend against these types of attacks.
Evolving your automation with hybrid workers - kieranjacobsen
Azure Automation wants you to automate everything, everywhere. Hybrid Workers allow Azure Automation to reach new places within your infrastructure, allowing for more automation and less complexity. This session covers the basics of Hybrid Workers before looking at balancing workloads, managing resource dependencies, integrating with web hooks and monitoring job execution. This is a great session for anyone who is automating infrastructure or cloud resources.
DevSecOps, or SecDevOps, has the ambitious goal of integrating development, security and operations teams together, encouraging faster decision making and reducing issue resolution times. This session will cover the current state of DevOps, how DevSecOps can help, integration pathways between teams and how to reduce fear, uncertainty and doubt. We will look at how to move to security as code, and integrating security into our infrastructure and software deployment processes.
Infrastructure Saturday - Level Up to DevSecOps - kieranjacobsen
Learn how to leverage various tools to quickly and consistently create full environments in minutes.
Like most things in life, there's an easy way and a hard way. The same holds true when working in cloud environments such as Microsoft Azure. The Azure management portal and Visual Studio can be great for relatively simple projects, but quickly become tedious when trying to create the multiple resources that often make up a real-world solution. This session will demonstrate how to leverage various tools, such as PowerShell, Azure Resource Manager, Azure Automation, and the Azure Management Library, to quickly and consistently create full environments in minutes.
You will learn:
- How to use Azure Management Library to create various Azure assets
- How to use Azure PowerShell cmdlets to query Azure services, deploy VMs and Cloud Services
- How to leverage Azure Automation to reduce operating costs and automate other management tasks
How do organizations ensure that they maintain control over their costs when adopting Cloud?
Ultimately, the key to controlling cost for cloud infrastructure is to ensure that the organization has visibility over resources that are being provisioned — a task that is easier said than done when developers can provision resources in a single API call.
This talk was presented at the 2014 OpenStack Summit in Atlanta.
Key considerations when adopting cloud: expectations vs hurdles - Scalr
Everyone is talking about it: Cloud is the next big thing in IT.
But what are the results your business should expect from cloud adoption? What are the keys to making it work? What are the pitfalls you should avoid?
In this talk driven by our experience working with cloud adopters, we'll show that successfully adopting Cloud is a process that actively involves IT and business units, and we’ll be sure to consider and reconcile both perspectives.
This is a talk 100% driven by customer stories, delivered by Sebastian Stadil for the December 3rd 2013 Virtual Build a Cloud Day event.
CCCEU14 - A Real World Outlook on Hybrid Cloud: Why and How - Scalr
Why pursue hybrid cloud? What are success strategies to make it work, and pitfalls to be mindful of?
Thomas Orozco's speaker slides from the CloudStack Conference Europe in Budapest (Nov. 2014).
Since its release in 2010, the Hak5 Rubber Ducky has been an overlooked component of an attacker's arsenal. With almost every computer on the planet accepting input via keyboards and the USB standard known as HID, or Human Interface Device, the Ducky abuses one of the ultimate trust relationships within a computer. The Ducky makes use of an extremely simple scripting language for the development of payloads, which can then be executed at speeds beyond 1000 words per minute. This presentation will cover the creation of your very first through to advanced payloads, as well as looking at some of the tools you can use to develop your own.
Learn about the advances in Windows 8.1 and Windows Server 2012 R2 that allow your users to work from anywhere in the world. Kieran Jacobsen will cover topics including seamless corporate connectivity with DirectAccess, managing BitLocker with MBAM, user document synchronization with Work Folders, and addressing the needs of enterprise security and any performance requirements you might have.
Deployment Automation for Hybrid Cloud and Multi-Platform Environments - IBM UrbanCode Products
Today, competitive advantage is often driven by software. The business that can deploy solutions to their customers more quickly across a range of platforms, with the flexibility to continuously deliver new functionality, is poised to succeed. DevOps enables organizations to manage complex enterprise applications that are hybrid in nature - often with cloud or mobile components being fed by data from traditional back-end systems like databases or mainframes.
This eSeminar explores hybrid cloud use cases, along with solutions that equip businesses to deliver value to their customers with speed, quality, and security.
Session materials from the 15th System Center User Group Japan meetup, held 10 September 2016. The sample code will be published on GitHub.
The session covered an overview (refresher) of PowerShell DSC, how to use PowerShell DSC for Linux, and configuring Azure VMs with Azure Automation DSC. It also includes a brief summary of PowerShell Core for Linux.
Are you considering deploying DirectAccess? DirectAccess is Microsoft’s next generation remote access solution providing a seamless corporate network connectivity experience. The session will cover a number of issues that IT professionals deploying DirectAccess should be aware of including load balancing, certificates, and IP Infrastructure requirements.
Infrastructure Saturday 2011 - Understanding PKI and Certificate Services - kieranjacobsen
In every organization there is a growing need for a strong, well-designed public key infrastructure solution, and in many of these, Active Directory Certificate Services will be used. This session will guide you through a solution based on best practice, shed some light on common issues encountered, and offer some shortcuts to assist in management with PowerShell.
The IT industry has experienced rapid change and consolidation. The introduction of Cloud, Agile, DevOps and shortages in skilled staff have created immense pressure on enterprise IT teams. Organisations are concerned about the costs of data breaches, and need to act to ensure they do not become the next Yahoo, OPM or Target.
DevSecOps (or SecDevOps) integrates development, security and operations teams together to encourage faster decision making and reduce issue resolution times.
This session will cover the current state of DevOps, and how DevSecOps can help integrate pathways between teams to reduce fear, uncertainty and doubt. We will look at how to move to security as code, and integrate security into our infrastructure and software deployment processes.
Eclipse Dirigible is one of the flagships of Cloud Development at Eclipse. Its in-system programming model, along with a vast variety of built-in rapid application development tools, makes it a pragmatic choice for Cloud-based business applications.
https://www.eclipsecon.org/europe2018/sessions/whats-new-eclipse-dirigible-3
In this presentation we will look at strategies we can use to make a more nimble commerce platform that developers are excited to contribute to and customers are wowed by for its ease of use.
Multi-Tenant Hybrid Solution based on Hybrid Connections & App Service - Alexander Laysha
During the session you'll get deep insight into the hybrid architecture chosen for a production project based on Azure. You will walk through the analyzed Azure technologies, PoC results, decision-making factors, finalized architecture, and future evolution options, as well as challenges that occurred during the development phase.
It's a wrap - closing keynote for nlOUG Tech Experience 2017 (16th June, The ... - Lucas Jellema
Closing keynote for the Tech Experience 2017 conference in Amersfoort, The Netherlands (16th June 2017). Touches upon the role of The Oracle Database in a changing landscape with NoSQL, CQRS, REST & JSON, Hadoop and Elastic Search. Discusses the gaps that Oracle professionals have to bridge in order to broaden their horizon and prepare for the (near) future. The session discusses the cloud - and how it will impact most organizations and Oracle specialists. It summarizes the main topics and themes from the Tech Experience 2017 conference.
Contains basic information regarding Automation Anywhere, which is a tool that comes under the Robotic Process Automation umbrella. This PPT describes all of the basic information along with its pros and cons. Enjoy reading :)
SPSBoise - Business Process Automation and SharePoint - Steve Dark
This presentation will focus on a case study of a large education school district in the State of Washington. This organization has utilized SharePoint and Nintex to automate all sorts of business processes, from employee on-boarding/off-boarding to tabulating daily lunch orders from students. Through this automation project the school district has significantly reduced hours of unnecessary administrative work and costs. This presentation describes automation considerations and provides a demo of in-place business automation examples on this platform.
GAMAKA AI SOLUTION is an advanced computing center which offers multiple courses such as DATA SCIENCE, ARTIFICIAL INTELLIGENCE, PYTHON, PHP, JAVA, DOT NET, SOFTWARE TESTING, MACHINE LEARNING, ANGULAR 4/5, and BIG DATA HADOOP
The Collision of Payroll, HR, and Time & Attendance in the Cloud: It's Inevit... - APS
We will explore what a cloud solution is and how a unified cloud solution helps businesses streamline Payroll, HR, and Time & Attendance.
Conducting Business in the Cloud
Today's business environment requires a configurable, cloud-based solution to meet the modern company structure. Multiple roles have different information needs, which requires controlling access to proprietary and confidential data. Companies have to set access controls based on roles throughout all levels of an organization. Hourly employees clocking in and out, PTO request submissions, and timesheet reviews and approvals are just a few examples of tasks that require different access levels within an organization.
Now is the Time for Change
Now more than ever, businesses are embracing the use of cloud solutions to streamline the process of delivering information. Cloud solutions go beyond the distribution of materials to include payroll management and human resources applications. The use of this type of tool not only simplifies day-to-day operations, but also reduces operational costs.
Where Does the Information Go?
A cloud is an online network used to store and access valuable information so it is easily accessible to many people. The result is an intuitive SaaS (Software-as-a-Service) product that integrates with a business's existing technology.
In an ever-changing, fast-paced work environment, businesses are turning to a more efficient, economical approach to managing data distribution and maintenance. Cloud technology is becoming the go-to solution to meet these needs. It is an excellent solution for companies who are looking to better strategize their budgetary and technology efforts, while also integrating functionalities for important internal processes.
This presentation details how each company can employ the cloud to find a competitive advantage within the marketplace. It discusses cost savings, disaster preparedness, and agility. It also provides a short description of what the cloud is, how it works, and why it is making such a large impact on our lives today.
Welcome to Senlogic Automation Pvt Ltd
Since 2002, Senlogic Automation Private Ltd. has been an ISO 9001:2008 certified organization, approved by RDSO, engaged in manufacturing, supplying and exporting a commendable array of weighing and loading systems. Our product line encompasses Millennium, Dumper Load, On Board Weighing Solutions, Dumper Weighing System, Truck Weighing System and Rail in Motion Weighing System. In addition, we offer In Motion Train Weighing Systems, Rail In Motion, Static Rail Weighing System, Electronic Hanging Scales, Road Weighing Bridge and many more.
It’s one thing to support many data sources with megabytes of data. It’s a completely different problem supporting thousands of data sources with terabytes of data every day. How do you create systems that scale infinitely?
The answer is: you don't. You cannot design for infinite scalability. Rather, consider a pod approach where each pod supports a defined capacity. Scalability results from the deployment of multiple cooperating pods.
Systems handling extremely large data sources with significant processing requirements are difficult at best to validate. Attempting to deploy such a system without well understood capacity limits is destined for failure.
This was first presented at Cloud Expo NYC.
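The pod approach described above can be sketched in a few lines: each pod has a fixed, validated capacity, and the system scales by adding pods rather than growing any single one. This is a minimal illustrative sketch; the class names and capacity figure are assumptions, not from the talk.

```python
from dataclasses import dataclass, field

@dataclass
class Pod:
    capacity: int                          # max data sources this pod is validated for
    sources: list = field(default_factory=list)

    @property
    def full(self) -> bool:
        return len(self.sources) >= self.capacity

class PodFleet:
    """Route each new data source to the first pod with headroom, adding pods as needed."""
    def __init__(self, pod_capacity: int):
        self.pod_capacity = pod_capacity
        self.pods: list[Pod] = []

    def add_source(self, source: str) -> int:
        # Scale out by adding a pod only when every existing pod is at capacity.
        for i, pod in enumerate(self.pods):
            if not pod.full:
                pod.sources.append(source)
                return i
        self.pods.append(Pod(self.pod_capacity))
        self.pods[-1].sources.append(source)
        return len(self.pods) - 1

fleet = PodFleet(pod_capacity=2)
placements = [fleet.add_source(f"src-{n}") for n in range(5)]
print(placements)        # pods fill in order: [0, 0, 1, 1, 2]
print(len(fleet.pods))   # 3
```

Because each pod's capacity limit is known and validated in isolation, the fleet's total capacity is simply pods × capacity, which is exactly the "well understood capacity limits" the abstract argues a deployable system needs.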
The Boring Security Talk - Azure Global Bootcamp Melbourne 2019 - kieranjacobsen
Troy Hunt and Scott Helme have spoken about all the exciting security things, so let’s talk about the boring bits! When we think about application and infrastructure security, we often think about the big shiny things and forget the boring bits. In this talk, we’ll look at the security of our package dependencies, CI/CD tools, how we send email and even resolve hostnames. Over the last few months, hackers have managed to inject cryptocurrency miners into all these places. Security incidents in these components might not result in an entry in Have I Been Pwned?, but they'll result in a bad day.
A shortened version of this presentation was delivered at DDD Melbourne.
Microsoft has provided an almost unlimited number of ways for you to securely deploy Azure resources; but people continue to make simple mistakes. In 2017 many organisations had breaches due to poor cloud deployment practices.
In this session, you’ll learn how to use Azure Resource Manager (ARM) templates to deploy resources in a secure manner. This session will look at Azure Storage, App Services, SQL, Virtual Machines and Virtual Networks. I'll discuss the costs, benefits and trade-offs of different design patterns and how you can secure your deployment pipelines.
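As a hedged illustration of the kind of setting the session covers (the account name and API version below are placeholders), an ARM template can enforce secure defaults on a storage account at deployment time rather than relying on portal clicks:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-09-01",
      "name": "examplestorageacct",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "properties": {
        "supportsHttpsTrafficOnly": true,
        "minimumTlsVersion": "TLS1_2"
      }
    }
  ]
}
```

Baking settings like HTTPS-only and a minimum TLS version into the template means every deployment through the pipeline inherits them, which is the core of the "secure by template" argument above.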
Ransomware made headlines in 2017, with attacks shutting down the UK's NHS and costing Maersk shipping over $300m in lost revenue. Ransomware is a massive business for cybercriminals, driving the cost of bitcoin from $1200 to over $7000 per coin. We often see ransomware as some unbeatable force; however, with some common-sense controls and simple tricks, the damage can be reduced or even stopped. Join Kieran to learn some simple, free steps you can take to stop ransomware in its tracks.
The truth is that money can’t buy security, just as it cannot buy happiness. Ransomware has become a cybercriminal’s most profitable enterprise, and something that IT professionals and even the general public now fear. Ransomware is actually pretty simple and unsophisticated code, and at times the damage can be stopped with some simple tricks. Best of all, these are FREE!
DevSecOps, or SecDevOps, has the ambitious goal of integrating development, security and operations teams together, encouraging faster decision making and reducing issue resolution times. This session will cover the current state of DevOps, how DevSecOps can help, integration pathways between teams and how to reduce fear, uncertainty and doubt. We will look at how to move to security as code, and integrating security into our infrastructure and software deployment processes.
PowerShell, the must-have tool for administrators, and the long-overlooked security challenge. See Kieran Jacobsen present how PowerShell, with its deep Microsoft platform integration, can be utilised by an attacker as a powerful attack tool. Learn how an attacker can move from a compromised workstation to a domain controller using PowerShell and WinRM, whilst learning how to defend against these attacks.
CMDLets, scripts, functions, methods and modules all make PowerShell sound very complicated however with some simple guidelines you too can become a PowerShell automation Pro!
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
4. AUTOMATION CAN MEAN MANY THINGS
• CLOUD SERVICE AUTOMATION
• INFRASTRUCTURE AUTOMATION
• PROCESS AUTOMATION
5. AZURE AUTOMATION
• MANAGED SERVICE
• AZURE AND CLOUD FOCUS
• BACKED BY POWERSHELL
• DR, HA, PROVISIONING, MONITORING, PATCHING, BACKUPS
• HIGHLY AVAILABLE
8. AZURE WORKER LIMITATIONS
• LIMITED TO SPECIFYING WHICH AZURE REGION
• NO CONTROL OVER IP ADDRESS
• TRACEABILITY
• FIREWALLS
• LIMITED CONTROL OVER MAKE UP OF AZURE WORKER
9. HYBRID WORKERS
• RUNBOOKS RUNNING WITHIN YOUR DC
• REQUIRE OPERATIONS MANAGEMENT SUITE WITH AUTOMATION SOLUTION/PLUGIN
• SUPPORT SCRIPT, WORKFLOW AND GRAPHICAL RUNBOOKS
• NO INBOUND FIREWALL REQUIREMENTS
13. HYBRID WORKER LIMITATIONS
• MODULE DEPLOYMENT
• EXECUTION CONTEXT
• NO SIMPLE FILE OR EVENT TRIGGERS
• NO PRIORITISATION OF WORKERS IN A GROUP
• DOCUMENTATION
14. AZURE AUTOMATION AUTHORING TOOLKIT
• MANAGE AZURE AUTOMATION ACCOUNTS FROM ISE
• CREATE, EDIT AND MODIFY RUNBOOKS AND ASSETS
• AVAILABLE FROM THE POWERSHELL GALLERY
HTTPS://WWW.POWERSHELLGALLERY.COM/PACKAGES/AZUREAUTOMATIONAUTHORINGTOOLKIT
16. WEB HOOKS
• START JOBS FROM HTTP REQUESTS
• IDEAL FOR APPLICATION AND 3RD PARTY INTEGRATION
• GREAT FOR STARTING JOBS IF AZURE CMDLETS ARE NOT INSTALLED
• RUNBOOKS MAY NEED MODIFICATIONS TO RUN FROM WEBHOOKS
18. LINKS
• BLOG: HTTP://POSHSECURITY.COM
• TWITTER: @KJACOBSEN
• RUNBOOKS FROM THIS PRESENTATION: HTTPS://GITHUB.COM/POSHSECURITY/POSHSECURITYAZUREAUTOMATION
• HYBRID WORKERS: HTTPS://AZURE.MICROSOFT.COM/EN-US/DOCUMENTATION/ARTICLES/AUTOMATION-HYBRID-RUNBOOK-WORKER/#
• WEB HOOKS: HTTP://BLOG.CORETECH.DK/JGS/AZURE-AUTOMATION-USING-WEBHOOKS-PART-1-INPUT-DATA/
• AZURE AUTOMATION AUTHORING TOOLKIT:
HTTPS://WWW.POWERSHELLGALLERY.COM/PACKAGES/AZUREAUTOMATIONAUTHORINGTOOLKIT
Editor's Notes
Hi everyone, My name is Kieran Jacobsen and tonight I want to talk to you about Microsoft Azure Automation and using the new hybrid workers within your data centre.
So just a little bit about me.
I work as a Technical Lead at Readify, my role is to manage and support not only Readify’s infrastructure but that of our customers as well. In terms of scale, we are looking at almost 100 Azure subscriptions, and similar numbers of Azure AD and VSTS instances. This causes some unique challenges and requires some unique solutions.
I have also lived a lot of what we have preached. About a year ago, we made the call to move from co-located infrastructure to Azure IaaS. This has led me, as a system administrator, to live a lot of the things we have often spoken and heard about. I have been moving all sorts of infrastructure components from a classic on-premises deployment to Azure. It has been one of the most interesting infrastructure projects I have ever worked on.
Automation has always been a massive thing for me. Since my first job, almost 10 years ago, to now, I have always made use of automation. Anything that can make my job easier is something I will want to do.
In my first role, I was the Windows guy in a team of Unix and mainframe engineers. Automation there was all about server deployments and server maintenance tasks, with a little bit of monitoring thrown in. At my next gig, I was automating all the tiny bits and pieces the team were doing, from WSUS to certificate authorities and the bits in between. I then ended up supporting a bank. Now that was some crazy-level automation! The automation there wasn't the traditional server deployment or user management automation; in this case it was moving files around representing millions of dollars. Precision was a must: you needed to be absolutely positive that things occurred in a certain way, every time, and that if something didn't happen correctly, people knew about it. When people's pay cheques hang in the balance, you need to be sure you know what you are doing.
Now at Readify, I find myself automating all sorts of things. In the past 12 months I have looked at automated user creations, deletions and just keeping user data in sync between traditional systems like HR and Payroll systems, to systems like Active Directory and then on to cloud systems like Office 365, Azure Active Directory, CRM Online and a bunch of other places. I also need to automate infrastructure management and deployment tasks, like deploying servers, setting up DNS, configuring Office 365, cleaning up log files, buying certificates, backing up and restoring files, even whitelisting TOR addresses at times.
So tonight's agenda is pretty simple: we will cover off what Azure Automation is and some basic concepts, look at the limitations of the Azure worker, then take a look at hybrid workers, groups and their limitations.
I will then show you the Azure Automation Authoring Toolkit and web hooks. We will finish off with a nice end to end demo showing off some user creation steps.
So one of the big things about automation is that it means so many different things, different people have different ideas and goals for automation.
For some, they see automation as just something that occurs between different cloud systems, they want to automate between different platforms using their publicly available API. Now the big thing for this style of automation is that the automation is typically outside of our network, we are often connecting multiple public cloud systems together. Azure Automation was originally designed with this style of automation in mind.
Now, those who have more of a system administration background might see automation as something that happens on premises, within the corporate network. More often than not, some of the automation tasks within our environment focus on lower-level infrastructure; this might be our core network switches, storage area networks, or even a mainframe. Infrastructure automation like this often requires not only a connection to the corporate network, but also the installation of third-party applications. We typically are not going to open up our core switching infrastructure to the Internet, so we need our automation system to be connected to our network. Products like CA UniCenter are great examples in this space.
Finally we have process automation. Process automation aims at turning existing business processes, no matter what they may be, into repeatable execution steps. Process automation doesn't just target the little things, like how we deploy a server or how we create a user, but looks at the business process from start to finish. With process automation, we are not just looking at automating things that occur in the IT team, but in the enterprise as a whole. One of the big things with process automation is that it may require access to all sorts of parts of the enterprise, be it our internal network or cloud services.
In April 2014, Microsoft released a preview of Azure Automation. Microsoft’s goal with Azure Automation is to provide a managed service for scripting and automation, focusing on simplifying cloud management with process automation.
It is really important to know that from its early infancy, Azure Automation was heavily designed to provide automation for Azure and third party cloud services. We will see later on, that many of the limitations it has as a product, come from this focus.
One of the best things about Azure Automation is that it lives and breathes PowerShell. At first it only supported workflows; thankfully this changed, and since last year it has supported not only standard PowerShell scripts, but also a graphical runbook development method.
Azure Automation really is about targeting our processes, and not just individual tasks. Processes like disaster recovery, scaling and high availability, provisioning, monitoring and patching are a big focus in Azure Automation. In fact most of Microsoft’s early examples focused on these aspects.
The big thing, though, with Azure Automation, and the reason I am such a huge fan, is its availability. One of the big risks with the more traditional automation platforms is that the loss of your primary server can throw your entire environment, and even your organisation, into a complete spin. Anything that keeps my organisation going and doesn't require me to wake up at 3am to fix is freaking awesome.
So there are a few concepts you should be aware of when looking at Azure Automation.
At the top, we have our automation account. Your account contains everything you want to do and everything you need to make it happen.
Next we have runbooks. Runbooks contain the processes or procedures that we want to execute in a repeatable fashion. Think of these almost as checklists: if you want to get something done, you follow the steps outlined in a runbook to accomplish the required task.
Assets are reusable components or items that are shared across all runbooks, they could be schedules specifying when our runbooks should be run, or they could be PowerShell modules, certificates for authentication, credentials, connections or variables. Variables store pieces of information that we might need across multiple runbooks or the execution of the same runbook. Variables can be strings, Boolean values, integers or datetime values.
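Inside a runbook, assets are retrieved with the built-in Automation cmdlets. A minimal sketch, assuming asset names I have made up for illustration ('SmtpServer' and 'ServiceAccount' are not from this deck); these cmdlets only resolve inside the Automation execution environment:

```powershell
# Fetch shared assets by name from the automation account.
# 'SmtpServer' and 'ServiceAccount' are hypothetical asset names.
$SmtpServer = Get-AutomationVariable -Name 'SmtpServer'
$Credential = Get-AutomationPSCredential -Name 'ServiceAccount'

# Use the assets like any other PowerShell values
Send-MailMessage -SmtpServer $SmtpServer -Credential $Credential `
    -From 'automation@example.com' -To 'ops@example.com' `
    -Subject 'Runbook completed' -Body 'Done.'
```

Because assets are shared across all runbooks in the account, changing the variable or credential in one place updates every runbook that consumes it.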
Jobs are an executed instance of a runbook. Jobs contain a snapshot of the runbook and required assets at the time when it was started. Jobs get executed by workers, and have a state of either new, completed, suspended, queued, running, failed or stopped.
Finally we have the workers. This is an often overlooked component of any automation system. Originally there was only one type of worker, and it ran in Azure. Whilst this was good for quite a few automation tasks, it has some serious limitations.
So let’s take a quick look around Azure Automation. So here is my automation account in the new Azure Portal.
You can quickly see how I have three runbooks, 18 assets, a dsc configuration, a hybrid worker group and 2 dsc nodes. You can also see that I have had a whole bunch of jobs run, most completed, but there are some suspended, and a few failed as well.
Scrolling down, you can see that my runbooks are synchronized from GitHub. If you are not using some sort of source code repository for your scripts, please start to do so. It is 2016, and we should all be using something: Git, VSTS, it doesn't matter, as long as you are using something. Right now Azure Automation only supports GitHub; VSTS support is coming soon. Hopefully very soon.
If we drill in to Runbooks, we have the runbooks you will be seeing tonight. Going back and selecting Assets, you can see that we haven't got many: I have 13 modules; Azure Automation normally comes with 10, however I have added some DSC modules in.
Whilst we are here, why don't we run a runbook. If I go to Runbooks and select Get-MyFirstRunBook, this runbook simply returns a nice hello world message. Let's hit run; for now we will specify to run it on Azure. Now we wait for the job to complete; notice the states it goes through: queued, running and then, hopefully, completed. I can also view a list of all of the jobs by selecting the Jobs tile under Details. Here I can see jobs as they are running, as well as go back and view previous jobs.
Overall the interface is pretty easy to navigate, if just a tiny bit annoying with all of the blades.
Let’s talk about Azure Workers.
Microsoft designed Azure Automation as a platform where we could run automation tasks from anywhere; call it free-range automation. Whilst it is fantastic that we can run our automation tasks from anywhere, we often need to know where those tasks are actually being executed. Whilst we do get to specify the Azure region in which our account is created, and that to an extent controls where our Azure workers are located, that is it in terms of control. This lack of control introduces a bunch of challenges.
We can’t specify what IP address Azure workers have, we can’t even specify it as a static address like we can with our virtual machines. Now some people might ask, why is this important? Well it introduces a few issues.
Have you ever tried to confirm whether an event log entry was caused by a worker or a malicious user? It turns out this is pretty tricky. Whilst we can go back and see some of the history information for previous jobs, the IP address of the worker sadly isn't part of it. This might seem silly to some, but it is crucial for quite a lot of enterprise environments. If you get pwned, you are going to need to make sense of those log files.
Ever wanted to create firewall rules for the incoming connections from an Azure worker? That is going to be hard. Some of us need to integrate our automation with systems that have IP restrictions, things like HR and payroll systems, private APIs and payment gateways. I need to be able to tell some network guy at a partner organisation what IP address to expect a connection from.
What is the make-up of the worker? What operating system, .NET Framework version and PowerShell version does it have? With the Azure worker, we don't have any control over these; we get the versions Microsoft tells us we can have.
Whilst we can specify additional PowerShell modules, if it is more complex than that, we are at a dead end. Say we needed a 3rd party vendor application to accomplish our automation goals? Well, we are out of luck.
We don’t even have the option of connecting a worker to an Azure virtual network. If we could do that, we would at least have corporate network connectivity.
Now, one solution would be to use Windows Remoting and have the worker connect to a server within our control, but this opens up issues with the double hop problem, CredSSP and firewall rules.
Overall, these limitations can make it hard for Azure Automation to be adopted into enterprise environments.
Enter Hybrid Workers.
Hybrid workers allow us to develop more advanced runbooks than we could previously, allowing for runbooks to access resources within your network, integrate with 3rd party frameworks, and give us finer grained control over the execution environment. They solve many of the limitations with the Azure worker.
To make use of hybrid workers, you will need to implement the Operations Management Suite. Now, I haven't tested whether hybrid workers will function if you are using OMS via the SCOM connector; however, I have read of this being possible. For my production environments, and even for tonight's presentation, they are directly attached. You will also need to install and configure the OMS Automation solution.
Hybrid workers support all three runbook types, and most importantly you don't need to open any inbound firewall ports; instead the worker agents connect out to Azure over HTTPS and monitor for jobs that they need to perform. I have taken a peek at the internals, and all of this is achieved via Azure Service Bus. I really do wish I could hook PowerShell into custom Azure Service Bus instances as well; if anyone has any neat solutions, please let me know.
Now, Microsoft's documentation here refers a lot to resources within your local data centre; however, I see hybrid workers as being just as useful in IaaS situations as they are on premises.
Let’s take a look at hybrid workers.
So let's run our first job on a hybrid worker. For tonight's demonstrations, I have two Windows Server 2012 R2 servers; they are domain controllers for a domain called CORE.
Firstly, I am going to show you the OMS console. In the OMS console, you can see that I have the automation solution added, and it is configured to my azure automation account, poshsecurity-aa.
Let's go back to the Azure portal. Whilst I have my hybrid workers already configured and running, if you wanted to set up your own, there are two values you need, and we get both of these from the Key icon here. We need to take a note of one of the access keys, and then the URL endpoint for our Azure Automation account. Adding a hybrid worker is as simple as calling Add-HybridRunbookWorker and specifying these two values and the name of the group to add the worker to. We will talk about groups in a minute.
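The registration step can be sketched as follows. This assumes the HybridRegistration module laid down by the OMS Automation solution; the endpoint, token and group name below are placeholders for the values copied from the Key blade:

```powershell
# Run on the machine that will become a hybrid worker, after the OMS agent
# and Automation solution are installed. All values below are placeholders.
Import-Module HybridRegistration

Add-HybridRunbookWorker -GroupName 'DomainControllers' `
    -EndPoint 'https://<account-endpoint-url-from-key-blade>' `
    -Token '<primary-access-key>'
```

Running the same command on a second machine with the same group name adds it to the existing group rather than creating a new one.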
Let's take a look at our group. If I go into Hybrid Worker Groups, we can see a single group. Digging into that, we can see there are two hybrid workers, DC01 and DC02.
Now on to running our first hybrid job. I am going to run a job called Get-Hostname. This runbook simply outputs the hostname of the worker it is running on. If we hit start on this runbook, we will be asked once again where we want to run this job; let's select hybrid and then our DomainControllers group. Now this is going to be queued up, and then executed; once it is completed, let's look at the output. As you can see, DC01 (or DC02), the hostname of one of our workers, is displayed.
Hybrid worker groups are collections of workers, a little bit like a server farm, that can complete our automation activities. There is no reason why we couldn't have multiple groups, each configured or placed in different parts of our network. You might have one group set up that has access to your internal HR systems; another group might sit near your web server farm to perform activities there.
When a job is created, one, and only one, worker in the group that the job has been assigned to will complete it. Don't think of groups as load balancing; whilst they will to an extent distribute the jobs, this isn't so much designed for load balancing as for high availability. Just to note, the failover isn't as smooth and seamless as it could be; if a worker does fail, it may take some time for everything to sort itself out. The main driver for worker groups is to ensure that we always have a worker available to complete our automation tasks. Workers in a group do not need to be in the same data centre; they could be geographically dispersed systems at multiple locations for availability.
Workers run jobs under the same execution context, also called a run-as account. No matter which runbook job is sent to the group, they are all executed as the same account.
This time, why don't we start a bunch of jobs and see what happens. I have some PowerShell code here that will spin up a number of jobs for us, and then read the output back.
# Start several instances of the Get-Hostname runbook on the DomainControllers group
for ($a = 0; $a -le 10; $a++)
{
    "Starting Job $a"
    $null = Start-AzureRmAutomationRunbook -Name 'Get-Hostname' -RunOn 'DomainControllers' -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa'
}

# Fetch the most recent jobs and display each job's output
$Jobs = Get-AzureRmAutomationJob -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa' | Select-Object -First 10
foreach ($Job in $Jobs)
{
    (Get-AzureRmAutomationJobOutput -Id $Job.Id -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa').Text
}
We should see that some ran on DC01 and others ran on DC02. Pretty neat Eh?
Now let's take a look at changing the account that these runbook jobs are running as.
So I have another runbook, Get-RunningUser, which simply returns as output the user account that we are running as. Let's run it and see what it returns, selecting to run on the hybrid worker. We can see that it returns that the runbook was running as NT AUTHORITY\SYSTEM.
Now, before we change the account jobs will be run as, we need to ensure we have a credential asset defined with the appropriate settings. If I go to Assets, and then Credentials, you can see I have one called AutomationAccount. These are the domain credentials that we want to use to run our jobs.
Now, if I go back into the group settings and select "Hybrid worker group settings", you can see that the run as is set to "Default". Let's select Custom; next we will be asked to select a credential, and we select AutomationAccount.
I am going to save, go back to Runbooks, and run Get-RunningUser. If we look at the output, we see that the account is core\azureautomation, which is the user it was configured for.
Who here is sick of all the jumping around in the portal yet? I know I am.
Unfortunately, all this comes with some limitations. Most of these might not be show stoppers for you; they might not even be an issue, but it is still best that you are aware of them.
Modules are not automatically deployed to hybrid workers. Unlike with Azure workers, modules installed as assets will not be deployed automatically. Either script the prerequisite module installs or use DSC. If you have come this far, why not use Azure Automation DSC?
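If you script the prerequisite installs rather than use DSC, a minimal sketch per worker might look like this; the module list is an example, not from the deck, and it assumes PowerShellGet (WMF 5) with access to the PowerShell Gallery:

```powershell
# Run on each hybrid worker; install whatever modules your runbooks import.
# The module names here are examples only.
$RequiredModules = @('AzureRM.Profile', 'AzureRM.Automation')

foreach ($Module in $RequiredModules)
{
    # Skip modules already present so the script is safe to re-run
    if (-not (Get-Module -ListAvailable -Name $Module))
    {
        Install-Module -Name $Module -Scope AllUsers -Force
    }
}
```

Installing with -Scope AllUsers matters here, because jobs run as SYSTEM or a configured run-as account, not as the admin who ran the script.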
Execution context, as I mentioned earlier, is tied to the worker group. For most people, you probably don't care about executing one runbook as a different user account from another. Thankfully there are some easy solutions to this one.
Now, I for one would like to see file-close triggers, and I would love it if the story for triggering from event logs was much simpler. You certainly can trigger jobs from Windows events, but it is a lot of work.
One thing that would be nice to see is weighting or prioritization within the worker groups. It would be nice to be able to say, run the runbooks here on this worker, unless it is dead. Each hybrid worker in a group has the same chance to perform the job as the others. Whilst this might not cause issues to most people, there are probably situations where this could be an issue.
Now for the big one: documentation. Right now, there is quite a bit in Azure that either isn't documented, has minimal documentation, or has documentation that contains errors; and this goes well and truly for Automation, and particularly for hybrid workers. The documentation actually says that you cannot change the execution context; it also says that web hooks cannot trigger jobs on hybrid workers. These things will get fixed, but I recommend that you don't trust the documentation; just because it says one thing doesn't mean that is actually the case.
Who here has heard of the Azure Automation Authoring Toolkit? It is also called the Azure Automation ISE Add-On.
The toolkit makes working with Azure Automation incredibly easy by bringing all of the elements of Automation into the ISE. We can manage automation activities, create and edit runbooks and assets locally, push changes to our Automation account, and test PowerShell workflows and scripts locally, in Azure workers and in hybrid workers as well. You can even synchronize the automation account with your GitHub repos right from the ISE.
I was only put onto this about 3 or so weeks ago, and I have been amazed how useful this has been to me. It reduces the amount of time spent randomly clicking around the new Azure Portal.
There are a few limitations: you can't set up webhooks or schedules on runbooks, and you can only modify some assets, namely connections, credentials and variables.
Let’s take a look at a quick demo of the toolkit.
So here I have the PowerShell ISE. As you can see over to the right, I have the add-on visible.
On the first tab, you can see the base path where the add-on will store runbooks and assets; you can see that I have signed in to Azure, selected a subscription and an Azure Automation account.
On the next tab, you can see my runbooks in the account. From here I can download them locally, create new runbooks and delete runbooks. If I make changes to a runbook, I can upload a draft back to the automation account, test the draft, and finally publish the draft. I can also synchronize the Azure Account with the associated source control repository.
On the Assets tab, I can work with items like credentials and variables.
Let's run a runbook from here. If I click on the Get-Hostname runbook, the runbook we have been using earlier, and then select "Test", I will be presented with this test screen. From here, if I select Start New Job, we should see the execution. It is going to ask us where we want to execute the job; let's select the DomainControllers hybrid worker group. We can see that the job has been created and its status is New. It will go to the Running status, and we should eventually see it complete and get the output.
Web hooks are a surprisingly useful way to trigger runbook execution. From a single HTTP request we can start a configured runbook's execution.
Web Hooks are suited for integrating Azure Automation into things like your build and deployment pipelines, VSTS, GitHub, Slack, SharePoint or pretty much anything else you can think of.
They also provide us with an alternative way to trigger runbooks when we don’t have the Azure cmdlets installed, or in situations where we don’t want to maintain large execution workflows with third-party applications. I am currently looking at web hooks as a simpler way for team members who don’t have the Azure cmdlet stack installed to trigger off chunks of automation. The idea would be that they simply run a cmdlet, invoke-stuff, which performs the JSON call to start the process off. They don’t need to have the unstable Azure cmdlets installed, nor wear the performance hit of trying to log in to Azure; they execute a tidy bit of code, and then Azure Automation does the rest. The other advantage of this approach is that my team wouldn’t need to maintain large Git repos locally and keep them updated.
One limitation with web hooks is that they currently do not integrate with the normal parameter mechanism. Runbooks triggered by web hooks need to be configured to receive a single web hook parameter, which contains things like the web hook name, the request headers and the request body. The Automation environment doesn’t provide any assistance with this: if you are sending information to a web hook, say as JSON, you will need to convert it from JSON in the runbook.
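To make that concrete, here is a minimal sketch of what a webhook-triggered runbook looks like in Azure Automation: the service passes everything in through a single $WebhookData parameter, and the runbook pulls the raw body out and converts it from JSON itself. The SomeValue property is a hypothetical field, standing in for whatever your caller actually sends.

```powershell
# Minimal sketch of a webhook-triggered runbook.
# $WebhookData is the single parameter Azure Automation supplies when
# the runbook is started via a webhook.
param (
    [object] $WebhookData
)

if ($WebhookData -ne $null)
{
    # The request body arrives as a raw string; we convert it from JSON ourselves.
    $Body = ConvertFrom-Json -InputObject $WebhookData.RequestBody

    Write-Output "Webhook name: $($WebhookData.WebhookName)"

    # 'SomeValue' is a hypothetical property; use whatever your caller sends.
    Write-Output "Received value: $($Body.SomeValue)"
}
else
{
    Write-Output "This runbook was not started from a webhook."
}
```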
I don’t want to scare you off web hooks; they are extremely powerful, and extremely useful in our automation life cycle.
So I am going to show you two demos of integrating with web hooks. We will start by creating our own web hook and calling it from PowerShell.
Firstly, we go to the runbook that we want to run and select Webhook. We then customize the settings, entering a name and an expiry, and make sure you copy the URL!
Now, this runbook doesn't need parameters, but don't forget to set it to run on a Hybrid Worker. Then we hit Create.
Now that it is created, we go to PowerShell and call Invoke-RestMethod with the URL we specified and the method Post. When this returns, it will return the job ID for the job we just kicked off.
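The call itself is a one-liner; this is roughly what it looks like, assuming a placeholder webhook URL in place of the one you copied earlier.

```powershell
# Trigger the runbook from any machine with PowerShell; no Azure cmdlets needed.
# The URL below is a placeholder; paste in the one copied when the webhook was created.
$WebhookURL = 'https://s1events.azure-automation.net/webhooks?token=<your-token>'

$Response = Invoke-RestMethod -Uri $WebhookURL -Method Post

# The response contains the ID(s) of the job that was queued.
$Response.JobIds
```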
And if I switch back to the Azure portal, we can see a job has been queued up and has executed successfully.
Now let’s look at a more interesting example; let’s take a look at it in the ISE. New-ADUser is a runbook that accepts data via a web hook, specifically a first name and a last name. It will then create an Active Directory user based upon that information. After it creates the account, it is going to send me a message in Slack with the account's password.
I have also created a little function to kick the whole process off. So let’s paste that into a PowerShell window, execute it, and then switch over to Slack.
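A wrapper function along these lines is all that's needed; this is a hypothetical sketch rather than the exact function from the demo. The webhook URL is a placeholder, and the FirstName/LastName property names are assumptions that would need to match whatever the New-ADUser runbook reads out of its webhook data.

```powershell
# Hypothetical wrapper, similar in spirit to the one used in the demo.
# The webhook URL is a placeholder; the FirstName/LastName property names
# must match what the New-ADUser runbook expects in the request body.
function Invoke-NewADUser
{
    param (
        [Parameter(Mandatory = $true)] [string] $FirstName,
        [Parameter(Mandatory = $true)] [string] $LastName
    )

    $WebhookURL = 'https://s1events.azure-automation.net/webhooks?token=<your-token>'

    # Package the parameters as JSON; the runbook converts them back
    # out of $WebhookData.RequestBody on the other side.
    $Body = @{
        FirstName = $FirstName
        LastName  = $LastName
    } | ConvertTo-Json

    Invoke-RestMethod -Uri $WebhookURL -Method Post -Body $Body
}

# Usage:
# Invoke-NewADUser -FirstName 'Jane' -LastName 'Citizen'
```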
And there we have the password, and if I look in AD, we can see the account has been created.
So that is everything I have for tonight. I want to thank you all for coming and listening to me.
I will be posting up the slide deck on my blog, PoshSecurity.com. I write heavily about automation and security, and often security automation. You can follow me on Twitter at @kjacobsen.
The runbooks from tonight are on GitHub; you can find them at that address.
I have included some links, I recommend the link on web hooks, it covers quite a bit of the basics. Finally, I recommend that you look at the Azure Automation Authoring Toolkit.
Once again, thank you. Does anyone have any questions?