Azure Automation wants you to automate everything, everywhere. Hybrid Workers allow Azure Automation to reach new places within your infrastructure, allowing for more automation and less complexity. Learn how to deploy Hybrid Workers, balance automation workloads across groups of workers, trigger jobs via webhooks, monitor jobs, remove scheduled tasks and much more.
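Triggering an Azure Automation runbook via a webhook boils down to an HTTP POST whose body is handed to the runbook as webhook data. A minimal sketch in Python; the URL is a placeholder and the parameter names are invented for illustration:

```python
import json

# Placeholder: Azure Automation generates a unique, tokenised URL per webhook.
WEBHOOK_URL = "https://example.azure-automation.net/webhooks?token=REDACTED"

def build_payload(runbook_params: dict) -> bytes:
    """The POST body is passed to the runbook as its webhook data parameter."""
    return json.dumps(runbook_params).encode("utf-8")

payload = build_payload({"VMName": "web-01", "Action": "Restart"})

# Actually starting the job needs the real webhook URL:
# import urllib.request
# req = urllib.request.Request(WEBHOOK_URL, data=payload, method="POST")
# urllib.request.urlopen(req)
```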
Flynn Bundy - 60 micro-services in 6 months WinOps Conf
In this talk, I want to take the audience on a journey of how we (Coolblue) migrated 60 .NET micro-services to the AWS Cloud. This talk covers the highs, lows and everything in between when working in a multi-disciplinary Developer/Operations cloud team. It will cover the evolution of our processes and toolsets to align with Chaos Engineering best practices. Most importantly, I want to highlight how we changed the way we thought about services and servers in general.
The key takeaways from this talk would be related to:
Continuous Inspection (TeamCity)
Continuous Deployment (Octopus Deploy)
Infrastructure as Code (CloudFormation)
Chaos Engineering (Chaos Monkey)
Monitoring and Logging (Datadog and Splunk)
.NET and .NET Core (on Windows Server 2016)
Automation in AWS Cloud
This document describes building a serverless log analytics platform. It discusses the challenges with conventional logging architectures that require managing servers and have scalability issues. The document then introduces a serverless approach using AWS services like Kinesis, S3, Elasticsearch, and Kibana that allows logging infrastructure to scale infinitely with no server management. Code examples show how to set up logging pipelines to stream logs in real time to storage and analytics using this serverless architecture.
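The pipeline shape described above can be sketched without any AWS dependencies; here an in-memory list stands in for a Kinesis stream, and the records mirror the `Data`/`PartitionKey` structure a Kinesis producer uses (the field names inside `Data` are invented):

```python
import json
import time

class InMemoryStream:
    """Local stand-in for a Kinesis stream, so the pipeline shape is
    testable without AWS credentials."""
    def __init__(self):
        self.records = []
    def put_records(self, records):
        self.records.extend(records)

def ship_logs(stream, log_lines, source="app-server"):
    """Batch raw log lines into structured records, the way a Kinesis
    producer batches them; Data/PartitionKey mirror the real API shape."""
    records = [
        {
            "Data": json.dumps({"ts": time.time(), "source": source, "line": line}),
            "PartitionKey": source,  # keeps one source's logs together in a shard
        }
        for line in log_lines
    ]
    stream.put_records(records)
    return len(records)

stream = InMemoryStream()
shipped = ship_logs(stream, ["GET /health 200", "POST /login 401"])
```

From here the real pipeline would fan the stream out to S3 for storage and Elasticsearch/Kibana for analytics, with no servers to manage at any stage.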
Speed and agility are what we most expect of today’s analytics tools. The quicker you get from idea to insights, the more you can innovate and perform ad-hoc data analysis. I will be talking about how we can use AWS serverless architecture to stream IoT data, managed with Python. We can be up and running in minutes―starting small, but able to easily grow to millions of devices and billions of messages.
Infrastructure Automation on AWS using a Real-World Customer Example - API Talent
This technical session focuses on a customer use case and how using the AWS Cloud together with automation has enabled them to standardise and automate their systems.
This talk will describe how this is achieved with two tools: CloudFormation and Puppet. CloudFormation is a declarative templating language that enables the deployment of environments in a standardised way. Puppet is a configuration management tool that installs and configures software on instances, allowing ongoing software deployments and maintenance to be automated with low overhead. Taken together, a complete system can be built from the ground up.
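As a sketch of the declarative side, a minimal CloudFormation template can be built as plain data and serialised to JSON; the logical name, instance type and AMI ID below are placeholders, and the Puppet hand-off is only indicated in a comment:

```python
import json

# Minimal CloudFormation template expressed as a Python dict.
# "AppServer" and the property values are illustrative placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t3.micro",
                "ImageId": "ami-00000000",
                # After boot, Puppet would take over ongoing configuration,
                # e.g. via UserData that installs the agent.
            },
        }
    },
}

template_body = json.dumps(template, indent=2)
```

The template says *what* should exist; a configuration management tool like Puppet then keeps *what runs on it* converged over time.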
This document provides an overview of serverless computing using Azure Functions. It discusses the benefits of serverless such as increased server utilization, instant scaling, and reduced time to market. Serverless allows developers to focus on business logic rather than managing servers. Azure Functions is introduced as a way to develop serverless applications using triggers and bindings in languages like C#, Node.js, Python and more. Common serverless patterns are also presented.
Cleaning out your IT Closet - Offloading Infrastructure and Headaches to Windows Azure IaaS. SharePoint Saturday Redmond Presentation. Learn how an Azure Virtual Private Network can help you move your servers into the cloud, including entire SharePoint farms.
Serverless architectures rely on third-party services and remote procedure calls rather than maintaining servers. Azure Functions is a serverless computing service that allows developers to write code without managing infrastructure. Functions can be triggered by events and connected to other Azure services through bindings. Functions scale automatically based on demand and only charge for execution time and resources used.
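The trigger-and-binding model can be illustrated in plain Python (this sketches the concept only, not the Azure Functions SDK; every name here is invented):

```python
outbox = []  # stands in for an output binding, e.g. a storage queue

def handle_order(message: dict) -> dict:
    """Business logic only; the platform owns scaling and plumbing."""
    return {"order_id": message["id"], "status": "processed"}

def dispatch(event: dict) -> None:
    """Sketch of what the runtime does: a trigger fires, the handler runs,
    and its return value flows to the output binding."""
    outbox.append(handle_order(event))

dispatch({"id": 42})
```

The developer writes only `handle_order`; triggers and bindings replace the boilerplate of polling queues and writing results out by hand.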
NGINX Amplify: Monitoring NGINX with Advanced Filters and Custom Dashboards - NGINX, Inc.
On-demand recording: https://nginx.webex.com/nginx/lsr.php?RCID=4bcbaff57fd6a02e4b3ca249917d3a1f
NGINX Amplify is a new diagnostic tool that gives engineers and DevOps professionals visibility and control of NGINX instances and NGINX-delivered applications.
Our new product provides insights to help you quickly troubleshoot application health and performance issues within a highly customizable interface. In addition to NGINX metrics, NGINX Amplify provides configuration analysis and reports, configurable alerts, and system-level metrics.
Join us in this webinar to learn:
* How to quickly install the NGINX Amplify agent on your server or in a container
* How to build custom dashboards of metrics gathered from your NGINX instances
* How to use advanced filters to pinpoint performance issues
This document summarizes a presentation on monitoring best practices. It recommends starting monitoring early in the development process during testing and staging rather than waiting until production. Monitoring should be integrated into continuous integration to gather metrics and detect issues early. The development team should be involved in setting up monitoring to ensure the right metrics are collected. The goal is to get a comprehensive view of system behavior and performance from an early stage using monitoring.
Dissection of the arguments against using public cloud providers from the Chef Compliance event in Dallas April 25, 2016. Compared and contrasted benefits of AWS vs. Azure vs. GCP.
What does Serverless mean for DevOps, in practical terms? While Serverless does reduce the need for server-centric DevOps, it poses new challenges in many areas including security, app deployment and cloud resource provisioning, partly due to an explosion of "nanoservices". Based on a current project using AWS, we cover relevant tools, techniques and tips to deliver a smooth serverless experience for development through to production.
Delivered at Bristol DevOps meetup, 27 Jun 2018. To see detailed notes covering extra points not on slides, click the Notes link just below (or download the Powerpoint).
Update: here's the correct link for Gojko Adzic talk on the Backendless slide - https://www.youtube.com/watch?v=w7X4gAQTk2E
Getting Started with Infrastructure as Code (IaC) - Noor Basha
Are you looking to automate your infrastructure but not sure where to start? View this presentation on Getting started with Infrastructure as code to learn how to leverage IaC to deploy and manage resources on Azure. You will learn:
• Introduction to IaC
• Develop a simple IaC using Terraform
• Manage the deployed infrastructure using Terraform
Microservices, Spring Cloud & Cloud Foundry - Emilio Garcia
The document discusses microservices architecture, distributed system patterns, Spring Boot, Spring Cloud, and Cloud Foundry. It defines microservices and compares monolithic vs microservices styles. Key advantages of microservices include using the right tool for each job and easier scaling. Challenges include complexity and coordination. Distributed patterns like centralized configuration, service registry, dynamic routing, and circuit breakers help address challenges. Spring Boot and Spring Cloud simplify building microservices and provide tools that implement common patterns. Cloud Foundry is a PaaS that makes deploying microservices applications easy.
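Of the distributed patterns listed, the circuit breaker is the easiest to show in miniature; a sketch of the idea (not Spring Cloud's implementation, and the thresholds are arbitrary):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive failures the
    circuit opens, and calls fail fast until reset_timeout seconds elapse."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: the timeout elapsed, allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

Wrapping calls to a flaky downstream service this way stops one failing dependency from tying up every caller's threads, which is the coordination problem the pattern addresses.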
Trent Hornibrook gave a recent talk at the Infracoders meet-up, running a thought experiment with the audience: what would your tech decisions be if you were given a blank cheque at a startup?
Trent, who had recently been working for a start-up, then shared what decisions he made, and why.
Scaling on Amazon AWS: from the perspective of AWS and of the application stack. Covers the options available on AWS, and the architecture of a scalable application.
The document discusses serverless computing and introduces Microsoft Azure Functions as a serverless platform, highlighting how Functions allows developers to write code that runs in response to events using triggers and bindings to integrate with other Azure services, and provides examples of common serverless patterns that can be implemented using Functions.
Meetup#7: AWS LightSail - The Simplicity of VPS - The Power of AWS - AWS Vietnam Community
This document provides an overview of Amazon Lightsail, including what it is, when to use it, available plans, key features, and a demo. Lightsail offers simple virtual private servers with bundled compute, storage and networking starting at $5 per month. It provides an easy way to launch fully configured servers in seconds and manage them through an intuitive console. Lightsail can be used to host simple websites, apps, or testing environments and allows access to additional AWS services.
AWS Meetup - Nordstrom Data Lab and the AWS CloudNordstromDataLab
The document discusses Nordstrom's development of a recommendations API and service called Recommendo using AWS services like DynamoDB, Elastic Beanstalk, and Node.js. Some key points:
- Recommendo provides product recommendations to Nordstrom's website and emails, serving over 4 billion recommendations after just 105 days of development.
- It was built on AWS using services like DynamoDB for storage, Elastic Beanstalk for deployment, and Node.js for the backend. This allowed a small team to build and deploy it quickly.
- Performance was improved through tuning, and the system now handles the load with an average latency of 90ms from a few auto-scaling servers.
- Lessons learned
Beyond Heroku: Hosting Your Rails App Yourself - stcarpenter
This document discusses hosting a Rails application yourself as an alternative to Heroku. It recommends using Unicorn as the application server, nginx as a reverse proxy, and deploying with Capistrano. Affordable hosting options mentioned include Amazon, Windows Azure, Digital Ocean, and Linode. Configuration management software and the features of Unicorn, nginx, and Capistrano are also outlined.
This document summarizes serverless design patterns and tools. It begins with a brief history of cloud computing and an introduction to serverless computing. Common serverless use cases like event-driven applications and stream processing are described. Several serverless patterns are then outlined, such as hosting a static website or REST API using AWS Lambda and API Gateway. Finally, the document demonstrates a serverless application and discusses future directions for serverless technologies.
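The REST-API pattern the document outlines pairs API Gateway with a Lambda function; the handler shape below follows the proxy-integration contract (the event carries the HTTP request, the returned dict becomes the response), with placeholder greeting logic:

```python
import json

def handler(event, context):
    """Shape of an AWS Lambda handler behind API Gateway (proxy
    integration). The greeting logic is just a placeholder."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoked locally with a synthetic API Gateway event:
response = handler({"queryStringParameters": {"name": "serverless"}}, None)
```

Because the handler is a plain function of an event, it can be exercised locally with synthetic events long before it is deployed.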
3 Ways to Automate App Deployments with NGINX - NGINX, Inc.
Watch on demand: www.nginx.com/resources/webinars/three-ways-to-automate-with-nginx-and-nginx-plus
The process of deploying applications in many organizations today is slowed down by manual processes. These manual processes create extra work for developers and operations teams, cause unnecessary delays, and increase the time it takes to get new features and critical bug and security fixes into the hands of customers. Automating common tasks – using tools, scripts, and other techniques – is a great way to improve operational efficiency and accelerate the rollout of new features and apps.
The potential improvements of automation are impressive. With the proper components in place, some companies have been able to deploy new code to production more than 50 times per day, creating a more stable application and increasing customer satisfaction.
High-performing DevOps teams turn to open source NGINX and NGINX Plus to build fully automated, self-service pipelines that developers use to effortlessly push out new features, security patches, bug fixes, and whole new applications to production with no manual intervention. Watch this webinar to learn:
* Best practices for continuous delivery and automated deployments
* How to quickly and easily deploy new features or bug fixes into production with the push of a button
* Techniques to orchestrate and manage your NGINX-powered infrastructure using tools like Ansible, Chef, and Puppet
* How to use Jenkins to modify your NGINX configuration
* Methods to automate the discovery of new services using NGINX Plus
The document summarizes Recommendo, a RESTful product recommendations API built by Nordstrom and hosted on AWS. Some key details:
- Recommendo serves over 2 billion recommendations to Nordstrom.com customers via API and emails.
- It was built by 2 developers and 2 data scientists and deployed to production on AWS in just 105 days.
- The API sees over 3 million hits per day and scales automatically on AWS with average request latency of 70ms.
- Lessons learned include the difficulty of zero downtime deployments and importance of health checks and error handling.
The document discusses multi-tenancy on Windows Azure cloud. It covers multi-tenant architecture, Windows Azure Active Directory, ASP.NET multi-tenant applications, SQL database federations, deployment models, and auto-scaling. The session aims to build a multi-tenant ASP.NET web application on Windows Azure with prerequisites of Visual Studio 2013 Express and a Windows Azure account.
This document summarizes a presentation about playing with PHP on Azure using the Zend Framework. It discusses:
- Using the Zend Framework 2 with Azure Web Sites to build scalable PHP applications in the cloud.
- Key Azure services like Web Sites, Storage, and Mobile that can be used to deploy and scale PHP applications.
- Steps to create a new Zend Framework 2 application on an Azure Web Site and connect it to Azure SQL and Storage.
- Ensuring applications can be moved back from the cloud to on-premises environments through configuration.
- Monitoring tools for cloud applications like New Relic and Application Insights.
Disaster Recovery to the Cloud with Microsoft Azure - Lai Yoong Seng
In this session, we will look at a DR planning scenario to protect your workloads with one solution across different infrastructure, whether Hyper-V, VMware, storage arrays or physical servers.
This document provides a list of 7 URL links related to security topics. The URLs cover various security domains and appear to point to blog posts or articles on securing systems and networks. In summary, the document shares several online resources for information about cybersecurity issues and defenses.
Join me for a presentation where a blue screen of death is the desired result! MS15-034 was a particularly interesting vulnerability that turned out to have more bark than bite. Using PowerShell to test for MS15-034 presents a number of unique challenges; the solution is to look at a lower level, at TCP connections. This presentation will discuss MS15-034, what the vulnerability was, and how we can exploit it. Learn about working directly with TCP connections in PowerShell and the ins and outs you need to know.
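The low-level probe the talk describes can be sketched outside PowerShell as well; the request below is the widely published MS15-034 check (a Range header with an enormous upper bound), with the actual send left commented out and the host name a placeholder:

```python
# Raw HTTP request used to probe for MS15-034 (the HTTP.sys Range header
# vulnerability). HOST is a placeholder; only construction is shown here.
HOST = "target.example.com"

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Range: bytes=0-18446744073709551615\r\n"
    "Connection: close\r\n\r\n"
).encode("ascii")

# To actually probe (against systems you are authorised to test):
# import socket
# with socket.create_connection((HOST, 80), timeout=5) as s:
#     s.sendall(request)
#     reply = s.recv(4096)
```

Working at the raw-socket level like this sidesteps HTTP client libraries that would normalise or reject the unusual header before it ever reached the server.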
The problem concerns the distribution of staple foodstuffs from several cities on Java to cities outside Java using Vogel's transportation method. In summary:
1. Staple foods are distributed from Bandung, Semarang, Surabaya, and Tegal to Maluku, Malang, Yogyakarta, and Lampung, totalling 2,200 tons at a cost of Rp 6.2 million
2. This method is used to m
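Vogel's approximation method mentioned above fits in a few dozen lines; a sketch, exercised on a small made-up instance rather than the 2,200-ton problem from the document:

```python
def vogel(cost, supply, demand):
    """Vogel's approximation method for a balanced transportation problem.

    cost[i][j] is the unit shipping cost from source i to destination j;
    supply and demand list the amounts available and required. Returns the
    allocation matrix and the total cost of the resulting plan.
    """
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    rows, cols = set(range(m)), set(range(n))

    def penalty(vals):
        # Difference between the two cheapest remaining costs.
        s = sorted(vals)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        # Pick the active row or column with the largest penalty...
        cands = [(penalty([cost[i][j] for j in cols]), False, i) for i in rows]
        cands += [(penalty([cost[i][j] for i in rows]), True, j) for j in cols]
        _, is_col, k = max(cands)
        # ...and allocate as much as possible to its cheapest cell.
        if is_col:
            j, i = k, min(rows, key=lambda r: cost[r][k])
        else:
            i, j = k, min(cols, key=lambda c: cost[k][c])
        qty = min(supply[i], demand[j])
        alloc[i][j] += qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)

    total = sum(alloc[i][j] * cost[i][j] for i in range(m) for j in range(n))
    return alloc, total
```

The penalty heuristic steers allocation toward cells where skipping the cheapest option would be most expensive, which is why Vogel's method usually starts much closer to the optimum than the north-west-corner rule.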
Learn about the advances in Windows 8.1 and Windows Server 2012 R2 that allow your users to work from anywhere in the world. Kieran Jacobsen will cover seamless corporate connectivity with DirectAccess, managing BitLocker with MBAM, user document synchronization with Work Folders, and addressing the needs of enterprise security and any performance requirements you might have.
Deployment Automation for Hybrid Cloud and Multi-Platform Environments - IBM UrbanCode Products
This document discusses how IBM's UrbanCode Deploy product can be used to automate application deployments across hybrid cloud and multi-platform environments. It provides examples of how UrbanCode Deploy supports deploying applications to systems like IBM z/OS, distributed systems, private clouds, public clouds and PaaS platforms in an automated and unified manner using patterns and templates. The document also discusses reference architectures and case studies for implementing continuous delivery pipelines spanning both on-premise and cloud infrastructures.
Are you considering deploying DirectAccess? DirectAccess is Microsoft’s next generation remote access solution providing a seamless corporate network connectivity experience. The session will cover a number of issues that IT professionals deploying DirectAccess should be aware of including load balancing, certificates, and IP Infrastructure requirements.
Infrastructure Saturday 2011 - Understanding PKI and Certificate Services - kieranjacobsen
In every organization there is a growing need for a strong, well-designed public key infrastructure solution, and in many of these, Active Directory Certificate Services will be used. This session will guide you through a solution based on best practice, shed some light on common issues encountered, and share some shortcuts to assist in management with PowerShell.
The IT industry has experienced rapid change and consolidation. The introduction of Cloud, Agile, DevOps and shortages in skilled staff have created immense pressure on enterprise IT teams. Organisations are concerned about the costs of data breaches, and need to act to ensure they do not become the next Yahoo, OPM or Target.
DevSecOps (or SecDevOps) integrates development, security and operations teams together to encourage faster decision making and reduce issue resolution times.
This session will cover the current state of DevOps, and how DevSecOps can help integrate pathways between teams to reduce fear, uncertainty and doubt. We will look at how to move to security as code, and integrate security into our infrastructure and software deployment processes.
Our company underwent a DevOps transformation, moving from a waterfall process to agile methodologies and practices like sprints, continuous delivery, and monitoring. This allowed us to accelerate delivery, improve repeatability, and optimize resources. We also transitioned our on-premises box product to a cloud service hosted on Microsoft Azure.
The cloud is all the rage. Does it live up to its hype? What are the benefits of the cloud? Join me as I discuss the reasons so many companies are moving to the cloud and demo how to get up and running with a VM (IaaS) and a database (PaaS) in Azure. See why the ability to scale easily, the speed with which you can create a VM, and the built-in redundancy are just some of the reasons that make moving to the cloud a “no brainer”. And if you have an on-prem datacenter, learn how to get out of the air-conditioning business!
Cmdlets, scripts, functions, methods and modules all make PowerShell sound very complicated; however, with some simple guidelines you too can become a PowerShell automation pro!
PowerShell, the must-have tool for administrators, and the long-overlooked security challenge. See Kieran Jacobsen present how PowerShell, with its deep Microsoft platform integration, can be utilised by an attacker as a powerful attack tool. Learn how an attacker can move from a compromised workstation to a domain controller using PowerShell and WinRM, whilst learning how to defend against these attacks.
Evolving your automation with hybrid workers - kieranjacobsen
Azure Automation wants you to automate everything, everywhere. Hybrid Workers allow Azure Automation to reach new places within your infrastructure, allowing for more automation and less complexity. This session covers the basics of Hybrid Workers before looking at balancing workloads, managing resource dependencies, integrating with webhooks and monitoring job execution. This is a great session for anyone who is automating infrastructure or cloud resources.
PowerShell, the must-have tool and the long-overlooked security challenge. Learn how PowerShell’s deep integration with the Microsoft platform can be utilized as a powerful attack platform within the enterprise space. Watch as a malicious actor moves from a compromised end user PC to the domain controllers, and learn how we can begin to defend against these types of attacks.
This document provides information about Azure Site Recovery including contact information for Asaf Nakash and Yaruslav Minialov, key features of Azure Site Recovery, a high-level overview of how it works including replication of VMs to Azure, and pricing information for Azure Site Recovery suites. It aims to educate readers on disaster recovery and migration capabilities between on-premises and Azure using Azure Site Recovery.
DevSecOps, or SecDevOps has the ambitious goal of integrating development, security and operations teams together, encouraging faster decision making and reducing issue resolution times. This session will cover the current state of DevOps, how DevSecOps can help, integration pathways between teams and how to reduce fear, uncertainty and doubt. We will look at how to move to security as code, and integrating security into our infrastructure and software deployment processes.
Windows Azure is an open and flexible cloud computing platform that allows users to build, deploy, and manage applications and services through Microsoft's global network of datacenters. It provides compute, network, storage and application services that allow users to build applications using any language, tool or framework. The platform offers advantages of speed, scale and lower costs compared to traditional application development models. Key services include virtual machines, web sites, cloud services, SQL and NoSQL data storage, media services and more.
This document provides an overview of Microsoft Azure cloud computing services. It highlights Azure's core infrastructure services including compute, storage, networking and security. It also lists advanced workloads like web/mobile development, IoT, microservices, serverless computing and more. The document shares case studies on how companies have used Azure services like Azure Storage, SQL Data Warehouse, Site Recovery and SAP on Azure to improve performance, reduce costs and gain operational efficiencies. It promotes Azure's tools for DevOps, analytics, cognitive services and high performance computing. Finally, it provides links to learn more about specific Azure services and capabilities.
This document discusses Microsoft automation tools including Service Management Automation, PowerShell workflows, Azure Automation, and PowerShell Desired State Configuration. It provides an overview of each tool's architecture and capabilities. The document demonstrates how to author PowerShell workflows using tools like the Azure Automation Authoring Toolkit. It also demonstrates PowerShell DSC and how to configure systems using a pull server model both on-premises and with Azure Automation DSC in the cloud. The key takeaway is that Microsoft provides a comprehensive set of automation tools to configure, manage, and automate hybrid cloud environments.
This document discusses high performance computing (HPC) on Microsoft Azure. It begins with an overview of the HPC opportunity in the cloud, highlighting how the cloud provides elasticity and scale to accommodate variable computing demands. It then outlines Azure's value proposition for HPC, including its productive, trusted and hybrid capabilities. The document reviews the various HPC resources available on Azure like VMs, GPUs, and Cray supercomputers. It also discusses solutions for HPC like Azure Batch, Azure Machine Learning Compute, Azure CycleCloud and Avere vFXT. Example industry use cases are provided for automotive, financial services, manufacturing, media/entertainment and oil/gas. The summary reiterates that Azure is uniquely positioned
This document discusses setting up System Center Configuration Manager (SCCM) on Microsoft Azure. It begins with an overview of cloud computing benefits and Microsoft Azure features. It then reviews the System Center suite and describes the SCCM on Azure architecture with a SQL database, IIS, and load balancer. Steps are provided for deploying the base configuration in Azure. The document demonstrates SCCM functionality and concludes with notes on additional configuration topics.
Udai introduces Service Fabric Mesh, a new managed service from Microsoft for deploying containerized microservices applications. Some key points about SF Mesh: it focuses on the application rather than infrastructure; provides a fully managed cluster; applications can be deployed from CI/CD pipelines; and it includes auto-scaling and blue/green deployments. The presentation provides an overview of SF Mesh and its resource model, how it works, things to know in preview, and a demo of deploying an application using the CLI commands.
Azure Database Services for MySQL, PostgreSQL and MariaDB - Nicholas Vossburg
This document summarizes the Azure Database platform for relational databases. It discusses the different service tiers for databases including Basic, General Purpose, and Memory Optimized. It covers security features, high availability, scaling capabilities, backups and monitoring. Methods for migrating databases to Azure like native commands, migration wizards, and replication are also summarized. Best practices for achieving performance are outlined related to network latency, storage, and CPU.
This document provides an overview of Microsoft Azure cloud computing services. It defines cloud computing as the delivery of services like data storage, servers, databases, and software over the internet. Azure offers infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). The document discusses how Azure can be used for backup/disaster recovery, hosting/developing web/mobile apps, supplementing Active Directory, innovating with IoT solutions, and more. It emphasizes Azure's flexibility, scalability, and security for a variety of uses.
This document discusses Azure Machine Learning services for data scientists. It provides an overview of Azure Machine Learning Studio for building and deploying machine learning models with over 100 modules. Numbers show hundreds of thousands of deployed models serving billions of requests. It also discusses Azure Batch AI for scalable machine learning training without managing infrastructure, and Azure Databricks for Apache Spark as a managed service on Azure. The document outlines the machine learning development lifecycle supported in Azure and tools for experimentation, model management, and operationalization of models.
This document discusses using PHP on the Azure cloud platform. It provides an overview of Azure's global data center presence and scale. It then covers hosting PHP applications on Azure Web Apps, using Azure services like storage and SQL, and other tools like the PHP SDK and CLI. The document aims to help developers get started with PHP on Azure.
This document provides an overview of Azure core services, including compute, storage, and networking options. It discusses Azure management tools like the portal, PowerShell, and CLI. For compute, it covers virtual machines, containers, App Service, and serverless options. For storage, it discusses SQL Database, Cosmos DB, blob, file, queue, and data lake storage. It also discusses networking concepts like load balancing and traffic management. The document ends with potential exam questions related to Azure services.
The breadth and depth of Azure products that fall under the AI and ML umbrella can be difficult to follow. In this presentation I’ll first define exactly what AI, ML, and deep learning are, and then go over the various Microsoft AI and ML products and their use cases.
Azure Data Lake and Azure Data Lake Analytics - Waqas Idrees
This document provides an overview and introduction to Azure Data Lake Analytics. It begins with defining big data and its characteristics. It then discusses the history and origins of Azure Data Lake in addressing massive data needs. Key components of Azure Data Lake are introduced, including Azure Data Lake Store for storing vast amounts of data and Azure Data Lake Analytics for performing analytics. U-SQL is covered as the query language for Azure Data Lake Analytics. The document also touches on related Azure services like Azure Data Factory for data movement. Overall it aims to give attendees an understanding of Azure Data Lake and how it can be used to store and analyze large, diverse datasets.
Big data journey to the cloud 5.30.18 asher bartch - Cloudera, Inc.
We hope this session was valuable in teaching you more about Cloudera Enterprise on AWS, and how fast and easy it is to deploy a modern data management platform—in your cloud and on your terms.
How should you approach the architecture of a solution in the cloud? How does it differ from traditional hosting?
We will illustrate the key principles of cloud development using the example of a typical web application, building the architecture step by step to make it scalable and let it benefit from the advantages of the cloud.
We will then look at the different implementation options and technology choices possible for this architecture on the Microsoft Azure cloud, considering infrastructure services (VMs, containers, …) as well as higher-level platform services, serverless, managed databases…
We will then zoom in on data acquisition and processing in a Big Data context, and look at the characteristics of a lambda architecture and its possible implementations on Azure (Hadoop, …). We will finish with the different ways of adding intelligence to a solution: from the simplest for a developer to implement, via pre-packaged APIs, to the most elaborate and customisable for the data scientist; and also how to make it more easily accessible to users via a bot on Skype, Facebook, Slack, email, SMS…
Slides from the meetup: https://www.meetup.com/fr-FR/Duchess-France-Meetup/events/238437772/
Adelaide Global Azure Bootcamp 2018 - Azure 101 - Balabiju
The document provides an overview of a Global Azure Bootcamp event in Adelaide that included a Microsoft Azure 101 session. The session was presented by Balasubramanian Murugesan, a Microsoft Cloud Architect with over 15 years of experience across technologies and sectors, including 7+ years experience with Azure and Office 365. The presentation covered topics such as cloud computing, the benefits of Azure, Azure services and platforms, Azure management portals, Azure compute, storage, identity, backup and recovery solutions, and web app services. It included demonstrations of the Azure management portal and a racing game built on Azure.
Security in the cloud Workshop HSTC 2014 - Akash Mahajan
A broad overview of what it takes to be secure. This is more of an introduction, where we introduce the basic terms around cloud computing and how we go about securing our information assets (data, applications and infrastructure).
The workshop was fun because all the slides were paired with real world examples of security breaches and attacks.
2014.10.22 Building Azure Solutions with Office 365 - Marco Parenzan
This document discusses building Azure solutions with Office 365. It provides an overview of Microsoft Azure services including compute, storage, networking and identity services. It also discusses Office 365 APIs for integrating with calendar, mail and contacts. Code samples are shown for accessing these APIs through REST calls and a library that abstracts away the REST requests. The document concludes with a demonstration of an application that integrates Office 365 and Azure services.
The Boring Security Talk - Azure Global Bootcamp Melbourne 2019 - kieranjacobsen
The document discusses common issues with email security such as spam, phishing, spear phishing, whaling and impersonation. It then provides an example of an email header from the Have I Been Pwned service, highlighting the various authentication and security details contained within the header such as SPF, DKIM and DMARC validation. The header is used to demonstrate how threat actors may spoof legitimate email domains and services to conduct phishing attacks.
Troy Hunt and Scott Helme have spoken about all the exciting security things, so let’s talk about the boring bits! When we think about application and infrastructure security, we often think about the big shiny things and forget the boring bits. In this talk, we’ll look at the security of our package dependencies, CI/CD tools, how we send email and even resolve hostnames. Over the last few months, hackers have managed to inject cryptocurrency miners into all these places. Security incidents in these components might not result in an entry in Have I Been Pwned?, but they'll result in a bad day.
This was presented at DDD Melbourne, and is a shortened version of the original presentation.
Microsoft has provided an almost unlimited number of ways for you to securely deploy Azure resources, but people continue to make simple mistakes. In 2017, many organisations had breaches due to poor cloud deployment practices.
In this session, you’ll learn how to use Azure Resource Manager (ARM) templates to deploy resources in a secure manner. This session will look at Azure Storage, App Services, SQL, Virtual Machines and Virtual Networks. I'll discuss the costs, benefits and trade-offs of different design patterns and how you can secure your deployment pipelines.
Ransomware made headlines in 2017, with attacks shutting down the UK's NHS and costing Maersk shipping over $300m in lost revenue. Ransomware is a massive business for cybercriminals, driving the cost of bitcoin from $1200 to over $7000 per coin. We often see ransomware as some unbeatable force, however with some common sense controls and simple tricks, the damage can be reduced or even stopped. Join Kieran to learn some simple, free steps you can do to stop ransomware in its tracks.
The truth is that money can’t buy security just as it cannot buy happiness. Ransomware has become a cybercriminal’s most profitable enterprise, and something that IT professionals and even the general public now fear. Ransomware is actually pretty simple and unsophisticated code, and at times the damage can be stopped with some simple tricks. Best of all, these are FREE!
The document discusses DevSecOps and identifies its four main components as training, communication, integration, and code. It provides examples of tools that can be used to implement DevSecOps, such as backlogs, support ticket tools, linting tools, and automated vulnerability assessment tools. The document emphasizes integrating security into all phases of the development lifecycle from planning through deployment.
Infrastructure Saturday - Level Up to DevSecOps - kieranjacobsen
9. Azure Worker Limitations
■Limited to specifying which Azure region
■Cannot be connected to Azure virtual networks
■No control over IP address
■Limited control over the make-up of the Azure worker
10. Hybrid Workers
■Runbooks running within your DC
■Uses OMS
■Supports script, workflow and graphical runbooks
■No inbound firewall requirements
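Registering a server as a hybrid worker is done on the server itself, using the HybridRegistration module that ships with the agent. A minimal sketch, assuming the Microsoft Monitoring Agent is already installed and connected to the workspace linked to your Automation account; the module path, group name, endpoint and token below are placeholders, and parameter names can vary between agent versions:

```powershell
# Load the registration module installed alongside the Microsoft Monitoring Agent.
# The <version> folder depends on the installed agent build.
Import-Module "$env:ProgramFiles\Microsoft Monitoring Agent\Agent\AzureAutomation\<version>\HybridRegistration\HybridRegistration.psd1"

# Register this machine into a Hybrid Worker Group, using the Automation
# account's URL and primary access key from the portal.
Add-HybridRunbookWorker -GroupName "MelbourneDC" `
                        -EndPoint "https://<automation-account-url>" `
                        -Token "<primary-access-key>"
```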
11.
12. Hybrid Worker Groups
■Collections of workers
■Runbooks are executed against groups
■Ideal for providing HA
■Share “run as” permissions
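Because jobs target a group rather than an individual machine, starting a runbook on-premises is just a matter of naming the group. A sketch using the AzureRM cmdlets, where the account, resource group, runbook and group names are all placeholders:

```powershell
# Queue a job that will be picked up by one of the workers in the group,
# rather than by an Azure-hosted worker.
Start-AzureRmAutomationRunbook -AutomationAccountName "MyAutomation" `
                               -ResourceGroupName "MyResourceGroup" `
                               -Name "New-CorpAdUser" `
                               -RunOn "MelbourneDC"
```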
13.
14. Hybrid Worker Limitations
■Module deployment
■Execution context
■No simple file or event triggers
■No prioritization of workers in a group
■Documentation
■Troubleshooting can be a challenge
15. Webhooks
■Start jobs from HTTPS requests
■Ideal for application and third-party integration
■Great for starting jobs if the Azure cmdlets are not installed
■Runbooks may need modifications to run from webhooks
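Calling a webhook is a plain HTTPS POST, so no Azure modules are needed on the caller. A sketch; the token in the URL and the payload fields are placeholders, and note the full URL (including its token) is only shown once, when the webhook is created:

```powershell
# The webhook URL embeds a security token, so treat it like a credential.
$uri  = "https://s1events.azure-automation.net/webhooks?token=<token>"

# Any payload is passed through to the runbook via its WebhookData parameter.
$body = @{ UserName = "jsmith"; DisplayName = "John Smith" } | ConvertTo-Json

$response = Invoke-RestMethod -Method Post -Uri $uri -Body $body
$response.JobIds   # the ID(s) of the job(s) that were queued
```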
Hi everyone, my name is Kieran Jacobsen, and today I will be talking to you about Microsoft Azure Automation and running automation tasks within your data centre using Hybrid Workers.
I wanted to start by saying how excited and honoured I am to be talking with you all today, this is an amazing community event and we have some astounding speakers and sessions today. I really hope that you all are enjoying the sessions as much as I have been.
Now I want to thank our sponsors, and in particular I want to make a special shout out to my employer, Readify. I just want to quickly talk about Readify. Readify is a company full of amazing, brilliant people who work with our clients to deliver outstanding software with velocity and uncommon sense. We provide a number of services including Software Development, BI, Cloud, Office 365 and SharePoint Consulting. We also have an amazing Managed Services Team.
Readify is currently hiring; we have a number of roles available here in Melbourne and in other cities. If you are interested in what Readify can do to help you, or are interested in working for Readify, please feel free to come up and see me afterwards.
So just quickly, a bit about me. My name is Kieran Jacobsen; I moved from Brisbane to Melbourne about 18 months ago when I joined Readify. I am the Technical Lead for Infrastructure and Security within the Managed Services team. In a few weeks I will have been in the industry for 10 years, specialising in infrastructure, security and automation, with a focus on the Microsoft product stack. I am particularly interested in the automation of infrastructure security operations.
I am the maintainer of the PoshSecurity blog, where I write about a variety of topics, and also the maintainer of a number of open source PowerShell modules.
So the plan for today’s presentation is to start with an introduction to Azure Automation, some basic concepts and the limitations of the Azure worker. We will then look at Hybrid Workers and Hybrid Worker Groups, and cover off their limitations.
I am going to finish off looking at webhooks, including a more real-world demonstration. We will look at a demo where a webhook triggers a job on a hybrid worker. That job will create an Active Directory user and notify a Slack channel of the user’s details.
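A hypothetical sketch of what a demo runbook like that could look like; the parameter handling follows the documented WebhookData pattern, but the property names, asset names and Slack details here are assumptions, not the actual demo code:

```powershell
param (
    [object] $WebhookData
)

# The webhook's JSON payload arrives in the RequestBody property.
$request = $WebhookData.RequestBody | ConvertFrom-Json

# Create the user on-premises; this requires the runbook to execute on a
# hybrid worker with the ActiveDirectory module available.
New-ADUser -Name $request.UserName `
           -DisplayName $request.DisplayName `
           -Enabled $true

# Post the details to Slack via an incoming webhook URL stored as an asset.
$slackUri = Get-AutomationVariable -Name 'SlackWebhookUri'
$message  = @{ text = "Created AD user $($request.UserName)" } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $slackUri -Body $message
```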
Let’s take a look at Azure Automation.
Azure Automation has been around for two years now, and you would think that at this point it would be in fairly common use. From my experience (and even I am a late adopter), there hasn’t been wide adoption of Azure Automation.
Microsoft’s goal was to provide a managed service for automation and scripting, with a big focus on simplifying the management of cloud systems. Microsoft focused heavily in the early days of Azure Automation on public cloud oriented tasks, this is unfortunately where a number of, in my opinion, poor design decisions crept in.
Azure Automation lives and breathes PowerShell. When it was first released, Automation only supported workflows, setting back its adoption considerably. Things changed last year with support for PowerShell scripts being introduced, along with a graphical runbook editor.
The reason I am a massive fan of Azure Automation is its availability. Azure is a highly available platform, and Automation is built upon that availability. In more traditional automation or job scheduling platforms, one of the major risks was the platform itself going down. The loss of our automation systems, even for a few minutes, could be utterly catastrophic, and even with clustering, HA and DR, it is always a scary risk. Moving our automation off to the cloud reduces these risks; they are not eliminated, just dramatically reduced.
Now there are just a few things we need to understand when we start looking at Azure Automation.
We start by creating an Azure Automation account; an account contains everything you need to achieve your automation goals. You can have more than one Automation account, and I use separate accounts to segregate different environments. Consider having a production and a development account at a minimum.
Runbooks are our automation processes or procedures; runbooks are the tasks we want to execute in a repeatable fashion. A great example of a runbook would be a script that creates a virtual machine in Azure: the script accepts a name for the virtual machine, then creates the storage, disks and network connectivity, and even installs extensions. Another example might be a runbook that creates a user account, specifying display properties and configuring Exchange settings, Skype settings and even third-party services.
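A minimal sketch of the virtual machine example as a PowerShell script runbook; the resource names, location and size are placeholders, and most of the configuration steps are elided:

```powershell
param (
    [Parameter(Mandatory = $true)]
    [string] $VMName
)

# Build up the VM configuration; OS, disk and network settings would be
# added here with the other AzureRM cmdlets before creating the VM.
$vmConfig = New-AzureRmVMConfig -VMName $VMName -VMSize "Standard_A1"

New-AzureRmVM -ResourceGroupName "MyResourceGroup" `
              -Location "Australia Southeast" `
              -VM $vmConfig
```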
Assets are reusable components that are shared across runbooks. Assets can be variables, schedules specifying when our runbooks are to be run, PowerShell modules, credentials, certificates for authentication, or connections. Variables can be strings, Boolean values, integers or datetime values.
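Inside a runbook, assets are retrieved with the Automation activity cmdlets. A minimal sketch, assuming asset names that are purely illustrative:

```powershell
# Sketch of a runbook consuming assets (the asset names here are examples,
# not assets that exist in your account).
$Environment = Get-AutomationVariable -Name 'EnvironmentName'
$Credential  = Get-AutomationPSCredential -Name 'AutomationAccount'

"Running against $Environment as $($Credential.UserName)"
```

These cmdlets are only available to code running as an Automation job; they are not something you call from a normal PowerShell session.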
The next thing you need to understand is jobs. When we start a runbook, specifying parameters if required, the result is a job. Jobs are executed by workers, and jobs have a lifecycle or state: new, queued, running, failed, stopped, suspended, or completed.
The last element is workers. We often overlook which system is actually executing our automation tasks, and this is a folly that is often repeated in enterprise environments. Originally there was one type of worker: one fully managed by Azure, hosted independently and separately from our other Azure resources. If you are familiar with VSTS build and release, think of the hosted agent.
So to begin with, let's take a look at Azure Automation. Switching to the Azure portal, we can see here I have an account. There are a number of items within the account, including runbooks, assets, hybrid worker groups, DSC configurations, DSC node configurations and DSC nodes. We can see some job statistics, including how many have completed, failed or were suspended.
Down at the bottom we can see that the source control of choice is GitHub, and we can see some details around the configuration of source control integration. Right now, Azure Automation only supports GitHub; there is a UserVoice item around whether VSTS support should be implemented.
Drilling into runbooks, we can see some of those I will be running tonight. Notice the authoring status. When we create a runbook or make changes to it, they will not take effect until a runbook is published. This allows you to safely make changes whilst keeping your production environment functioning.
Whilst we are here, let’s run our first runbook! Let’s select Get-MyFirstRunBook; all this one does is return a hello world message. Let’s hit start. Now you can see I have the option to execute this on Azure or a hybrid worker; for now let's just run it on Azure. Whilst we wait for the job to complete, we will see it move through a few states: queued, running and then, hopefully, completed. I can also view a list of all of the jobs by selecting the jobs tile under details. Here I can see jobs as they are running, as well as go back and view previous jobs.
Now we can see that this has completed, and we can see the output of the runbook.
Now you might be starting to wonder, well, what is wrong with the Azure worker? It looks ok to me!
The issues arise from the fact that we get very limited control over the make-up of the worker and where it is connected.
When we create our Automation Account, we can select which Azure region it is to be created in. Note that right now, only 5 regions are available, East US 2, South Central US, Japan East, Southeast Asia and West Europe. That doesn’t sound like a huge amount of choice. Your choice of region will control where your Azure workers are located, so choose well.
There currently isn’t an option to attach a worker to a virtual network, so we don’t get any of the connectivity options offered, nor is there any option to specify a static or reserved IP address. There is a chance each job will be executed from a different worker, with a different IP address.
Now you might not think this is a problem, but consider this integration scenario. We need to integrate with a partner organisation who provides our HR system, and they restrict access to their API to authorised IP addresses. How do we achieve this? Without vnet connectivity or reserved IP addressing, our options look limited. Perhaps we could whitelist that specific Azure region's IP addresses? Microsoft does in fact provide a file containing all of the IPv4 subnets allocated to Azure, and I have written a PowerShell module that assists with working with this file, allowing a specific region's addresses to be extracted. There is only one problem: the number of addresses is significant. Consider the Southeast Asia region, one of the smaller regions where Azure Automation is available: it consists of 67 separate ranges, with a total of about 239 thousand addresses! Somehow, I doubt whitelisting every IP address is going to be feasible.
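To give a feel for what extracting a region's ranges involves, here is a rough sketch against the downloaded XML file. The file path, and the exact region name and XML schema, are assumptions based on the published "Azure Datacenter IP Ranges" file, so check them against your copy:

```powershell
# Sketch: pull the subnets for one region out of Microsoft's Azure
# datacenter IP ranges XML file. Download the file manually first;
# the path and region name below are illustrative.
[xml] $AzureRanges = Get-Content -Path 'C:\Temp\PublicIPs.xml'

$Region = $AzureRanges.AzurePublicIpAddresses.Region |
    Where-Object { $_.Name -eq 'asiasoutheast' }

# Each IpRange element carries a Subnet attribute in CIDR notation
$Region.IpRange | ForEach-Object { $_.Subnet }
```

Even with the extraction automated, you are still left with tens of thousands of addresses to whitelist, which is the real problem.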
For those of you wondering, there are over 1700 subnets allocated to Azure, or approximately 5.8 million distinct IP addresses.
What about the make-up of an Azure worker? Microsoft controls almost everything here, from the operating system to patching and even the PowerShell version. The only control we get is the ability to specify additional PowerShell modules, and from experience, that can be quite problematic. If you need something more complex than a PowerShell module, then you are out of luck.
Overall, these limitations can make it hard for Azure Automation to be adopted into larger, more complex enterprise environments.
Enter Hybrid Workers.
Hybrid workers allow us to develop more advanced runbooks than we could previously, allowing for runbooks to access resources within your network, integrate with 3rd party frameworks, and give us finer grained control over the execution environment. They solve many of the limitations with the Azure worker.
To make use of hybrid workers, you will need to implement the Operations Management Suite. Now I haven't tested whether hybrid workers will function if you are using OMS via the SCOM connector, however I have read of this being possible. My production environments, and even this presentation, are directly attached. You will also need to install and configure the OMS Automation solution as well.
Hybrid workers support all three runbook types, and most importantly you don’t need to open any inbound firewall ports; instead the worker agents connect out to Azure over HTTPS and monitor for jobs that they need to perform. I have taken a peek at the internals, and all of this is achieved via Azure Service Bus. I really do wish I could hook PowerShell into custom Azure Service Bus instances as well; if anyone has any neat solutions, please let me know.
Now Microsoft’s documentation here refers a lot to resources within your local data centre; however, I see hybrid workers as being just as useful in IaaS situations as they are on premises.
Let’s take a look at hybrid workers.
So let's run our first job on a hybrid worker. For tonight's demonstrations, I have two Windows Server 2012 R2 servers; they are domain controllers for a domain called CORE.
Firstly, I am going to show you the OMS console. In the OMS console, you can see that I have the Automation solution added, and it is configured to my Azure Automation account, poshsecurity-aa.
Let’s go back to the Azure portal. Whilst I have my hybrid workers already configured and running, if you wanted to set your own up, there are two values you need, and we get both of these from the Key icon here. We need to take a note of one of the access keys, and then the URL endpoint for our Azure Automation account. Adding a hybrid worker is as simple as calling Add-HybridRunbookWorker, specifying these two values and the name of the group to add the worker to. We will talk about groups in a minute.
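On the worker itself, the registration looks roughly like this. The URL and key below are placeholders for the two values copied from the portal, and the module path comes from the Microsoft Monitoring Agent install, so adjust for your agent version:

```powershell
# Sketch: registering a server as a hybrid worker. The HybridRegistration
# module ships with the OMS / Microsoft Monitoring Agent; the exact path
# varies by agent version.
Import-Module 'C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\<version>\HybridRegistration\HybridRegistration.psd1'

Add-HybridRunbookWorker -Url '<automation account URL endpoint>' `
                        -Key '<primary access key>' `
                        -GroupName 'DomainControllers'
```

Running this a second time with the same group name on another server is how you build up a group of workers.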
Let’s take a look at our group. If I go into Hybrid Worker Groups, we can see a single group. Digging into that, we can see there are two hybrid workers, DC01 and DC02.
Now on to running our first hybrid job. I am going to run a job called Get-Hostname. This runbook simply outputs the hostname of the worker it is running on. If we hit start on this runbook, we will be asked once again where we want to run this job; let’s select hybrid and then our DomainControllers group. Now this is going to be queued up and then executed; once it is completed, let's look at the output. As you can see, the hostname of one of our workers, DC01 or DC02, is displayed.
Hybrid Worker Groups are collections of workers, a little bit like a server farm, that can complete our automation activities. There is no reason why we couldn’t have multiple groups, each configured or placed in different places on our network. You might have one group that has access to your internal HR systems; another group might sit near your webserver farm to perform activities there.
When a job is created, one, and only one, worker in the group that job has been assigned to will complete it. Don’t think of groups as load balancing; whilst they will distribute jobs to an extent, this is designed less for load balancing and more for high availability. Now just to note, the failover isn’t as smooth and seamless as it could be. If a worker does fail, it may take some time for everything to sort itself out. The main driver for worker groups is to ensure that we always have a worker available to complete our automation tasks. Workers in a group do not need to be in the same data centre; they could represent geographically dispersed systems at multiple locations for availability.
Workers in a group run jobs under the same execution context, also called a Run As account. No matter what runbook job is sent to the group, they are all executed as the same account.
This time, why don’t we start a bunch of jobs and see what happens? I have some PowerShell code here that will spin up five jobs for us, and then read the output back.
```powershell
# Start 5 jobs on the DomainControllers hybrid worker group
for ($a = 1; $a -le 5; $a++)
{
    "Starting Job $a"
    $null = Start-AzureRmAutomationRunbook -Name 'Get-Hostname' -RunOn 'DomainControllers' -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa'
}

# Fetch the 5 most recent jobs and read their output back
$Jobs = Get-AzureRmAutomationJob -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa' |
    Sort-Object -Property CreationTime -Descending |
    Select-Object -First 5

foreach ($Job in $Jobs)
{
    (Get-AzureRmAutomationJobOutput -Id $Job.JobId -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa').Summary
}
```
We should see that some ran on DC01 and others ran on DC02. Pretty neat, eh?
Now let's take a look at changing the account that these runbook jobs are running as.
So I have another runbook, Get-RunningUser, which simply returns as output the user account that we are running as. Let's run it and see what it returns. So let's select to run on the hybrid worker. And we can see that it returns that the runbook was running as NT AUTHORITY\SYSTEM.
Now before we change the account jobs will be run as, we need to ensure we have a credential asset defined with the appropriate settings. If I go to assets, and then credentials, you can see I have one called AutomationAccount. These are domain credentials that we want to use to run our jobs.
Now if I go back into the group settings and select "hybrid worker group settings", you can see we have the run as set to "default". Let's select custom; next we will be asked to select a credential, so select AutomationAccount.
I am going to save, go back to runbooks, and run Get-RunningUser. And if we look at the output, we see that the account is core\azureautomation, the user it was configured for.
Who here is sick of all the jumping around in the portal yet? I know I am.
Unfortunately, all this comes with some limitations. Now most of these might not be a show stopper for you, they might not even be an issue, it is still best that you are aware of them.
Modules are not automatically deployed to hybrid workers. Unlike with Azure workers, modules installed as assets will not be deployed automatically. Either script the prerequisite module installs or use DSC. If you have come this far, why not use Azure Automation DSC?
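If you go the scripted route, the prerequisite install can be as simple as the sketch below, run once on each worker. The module names are purely examples of what a runbook might depend on:

```powershell
# Sketch: pre-install the modules our runbooks depend on, on each hybrid
# worker, since Azure Automation will not push module assets to them.
# The module names here are illustrative.
$RequiredModules = @('Posh-SSH', 'PSSlack')

foreach ($Module in $RequiredModules)
{
    if (-not (Get-Module -ListAvailable -Name $Module))
    {
        # AllUsers scope so the module is visible to the worker's Run As account
        Install-Module -Name $Module -Scope AllUsers -Force
    }
}
```

A DSC configuration achieves the same end with the advantage of keeping the workers from drifting.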
Execution context, as I mentioned earlier, is tied to the worker group. Now for most people, you probably don’t care about executing one runbook as a different user account than another. Thankfully there are some easy solutions to this one.
Now I for one would like to see file close triggers, and I would love it if the story of triggering from event logs was much simpler. You certainly can trigger jobs from Windows events, but it is a lot of work.
One thing that would be nice to see is weighting or prioritization within the worker groups. It would be nice to be able to say, run the runbooks here on this worker, unless it is dead. Each hybrid worker in a group has the same chance to perform the job as the others. Whilst this might not cause issues to most people, there are probably situations where this could be an issue.
Now documentation has been a big limitation, but has greatly improved over the past few weeks.
Azure Automation doesn’t have a strong logging model when it comes to the hybrid workers, there are some diagnostic logs and traces on each worker, but they are more oriented to assist Microsoft support than for administrators on their own. I am hoping that perhaps OMS may start to fill in the gaps here.
These days one of the most common automation buzzwords is web hooks, and Azure Automation provides us with a method of triggering runbook execution from a single HTTPS request. Web hooks are great for 3rd party integration with things like VSTS, GitHub, Slack, SharePoint, PushOver and so on.
You might want to trigger off jobs based on monitoring alerts, or specific events within a Slack chat room. Webhooks are very effective.
One thing I am looking to use webhooks for is providing a method to trigger automation without the overheads. Users don’t need to have the Azure cmdlets installed, nor do they need large, complex scripts locally available. With a webhook, all they need is PowerShell and the webhook URL.
I have also made use of webhooks with SharePoint, allowing users to launch automation tasks from the click of a button or a workflow.
Now webhooks are not perfect. They don’t integrate with the normal parameter mechanism that exists for runbooks; you will need to modify how your runbook is written to accept a specific webhook parameter object. This object contains a bunch of information, including the name of the webhook that triggered the job, the request headers and the request body. There is no built-in support for handling objects in the webhook request: if you want to send data to the hook, you will need to convert it there and back. I recommend sending any data as a JSON formatted body and then converting back from JSON in your runbook.
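In practice, the runbook side looks roughly like this. The payload property names (FirstName, LastName) are examples of whatever JSON the caller chooses to send:

```powershell
# Sketch of a webhook-triggered runbook. Azure Automation passes a single
# $WebhookData object with WebhookName, RequestHeader and RequestBody
# properties; the body arrives as a raw string.
param
(
    [Parameter(Mandatory = $false)]
    [object] $WebhookData
)

if ($WebhookData)
{
    # Convert the JSON body back into an object (property names here
    # are whatever the caller sent; FirstName/LastName are examples)
    $Payload = $WebhookData.RequestBody | ConvertFrom-Json

    "Triggered by webhook: $($WebhookData.WebhookName)"
    "FirstName: $($Payload.FirstName)"
    "LastName:  $($Payload.LastName)"
}
```

The parameter name must be exactly WebhookData for the portal to populate it.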
I don’t want to scare you off web hooks, they are extremely powerful, and extremely useful in our automation life cycle.
So I am going to show you two demos on integrating with web hooks. We will start by creating our own webhook and calling it from PowerShell.
Firstly we go to the runbook that we want to run, and select webhook. We then customise the settings, entering a name and expiry, and make sure you copy the URL!
Now this runbook doesn't need parameters, but don't forget to select run on a hybrid worker. Then we hit create.
Now that it is created, we go to PowerShell and call Invoke-RestMethod with the URL we copied and the POST method. When this returns, it will give us the ID of the job we just kicked off.
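The call itself is a one-liner. The URL below is a placeholder for the one copied from the portal:

```powershell
# Sketch: trigger the webhook. The URL is a placeholder; use the one you
# copied when creating the webhook (it cannot be retrieved later).
$WebhookURL = 'https://<your-webhook-url-from-the-portal>'

$Response = Invoke-RestMethod -Uri $WebhookURL -Method Post

# The response body identifies the job(s) that were queued
$Response.JobIds
```

Note that the webhook URL contains a security token, so treat it like a credential.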
And If I switch back to the azure portal, we can see a job has been queued up, and has been executed successfully.
Now let’s look at a more interesting example. Let’s take a look at it in the ISE. New-ADUser is a runbook that accepts data, specifically a first name and last name, via a webhook. It will then create an Active Directory user based upon that information. After it creates the account, it is going to send me a message in Slack with the account's password.
I have also created a little function to kick the whole process off. So let's paste that into a PowerShell window, execute it, and then switch over to Slack.
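The client-side function is along these lines. The function name, parameter names and URL are illustrative, not the exact code from my runbook repository:

```powershell
# Hypothetical helper that posts user details to the New-ADUser runbook's
# webhook as a JSON body. Names and URL are examples.
function Start-NewADUserRunbook
{
    param
    (
        [Parameter(Mandatory = $true)] [string] $FirstName,
        [Parameter(Mandatory = $true)] [string] $LastName,
        [Parameter(Mandatory = $true)] [string] $WebhookURL
    )

    # Serialise the input as JSON; the runbook converts it back with
    # ConvertFrom-Json on $WebhookData.RequestBody
    $Body = @{ FirstName = $FirstName; LastName = $LastName } | ConvertTo-Json

    Invoke-RestMethod -Uri $WebhookURL -Method Post -Body $Body
}

Start-NewADUserRunbook -FirstName 'Jane' -LastName 'Doe' -WebhookURL 'https://<your-webhook-url>'
```

All the caller needs is PowerShell and the URL, which is exactly the low-overhead trigger story mentioned earlier.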
And there we have the password, and if I look in AD, we can see the account has been created.
So some quick links. I will be posting up the slide deck on my site at poshsecurity.com.
The runbooks are all available up on github.
I have also included a good reference on hybrid workers and one on webhooks. The webhook article goes into a great amount of detail on setting up a webhook and handling objects being passed in.
The last link is for the Azure Automation Authoring Toolkit, often called the Azure Automation ISE plugin. This provides you with an ISE integrated environment for working on runbooks and assets. The toolkit can make working with Azure Automation much easier; now I only wish there was a similar plugin for Visual Studio Code.
So that is all for me today. I want to thank you all for listening to me today.
Does anyone have any questions?