Lucy Davinhart presented on how her team at Sky Betting & Gaming accelerated their adoption of Vault using Terraform. They initially managed Vault configurations manually, which was time-consuming and lacked auditability. They developed a Terraform pipeline that imports existing Vault configurations, allows editing configurations through pull requests, and applies approved changes to Vault. This provided time savings, visibility, auditability, and reduced their own permissions. They continue improving the pipeline and plan to add more resources and validation.
Application monitoring is being talked about a lot these days; it provides key information that helps teams develop better software and make key business decisions. Datadog offers monitoring as a service.
In this Meetup, Arik Lerner – LivePerson team lead of Java Automation, Performance & Resilience – will talk about how LivePerson measures its services through end-to-end testing, which has become one of the most critical monitoring tools at LP.
Over 200K test runs per day provide statistics and insight into problems as they happen.
Arik will go through different topics and stages of the journey and share details that led to the current results.
Topics on the menu include: The Awakening of the End2End Insights
• How we measure our services using synthetic user experience
• Measuring through analytics & insights
• How we collect our data
• How we debug our services? Hint: video recordings, HAR (HTTP Archive), Kibana, dashboard analytics & insights
• Future: correlating app logs with End2End data
• Our tools: Selenium, Jenkins, and cutting-edge technologies such as Kafka & ELK (Elasticsearch, Logstash, and Kibana)
In this Meetup, Arik will host Ali AbuAli, NOC Team Leader, who will talk about how he uses end-to-end testing in his day-to-day work.
Our customers often need a simple routine that runs on a regular basis, without an elaborate solution or the trouble of setting up servers and other infrastructure. Serverless computing abstracts away servers, infrastructure, and operating systems, making it much quicker and cheaper to deliver solutions to your customers' needs. During this session we will look at how Azure Functions enables you to run code on demand without having to explicitly provision or manage infrastructure.
Service Discovery and Registration in a Microservices Architecture – PLUMgrid
Microservices, service discovery, and registration have been heading towards the peak of inflated expectations on the Gartner Hype Cycle for the last year or so, but there has often been a lack of clarity as to what these are, why they are needed, or how to implement them well.
Service discovery and registration are key components of most distributed systems and service-oriented architectures. In this session we will talk about the what, why, and how of service registration and discovery in distributed systems in general, and OpenStack in particular.
We will talk about some of the technologies that address this challenge, such as ZooKeeper, etcd, Consul, Mesos-DNS, Minuteman, SkyDNS, SmartStack, and Eureka. We will also address how these technologies, as well as existing OpenStack projects, can be used to solve this problem inside OpenStack environments.
From Dev to Ops: Delivering an API to Production with Splunk – Brian Ritchie
Dive into the design, implementation, and operation of web APIs. As your API moves into operations, we will explore how you can use the Splunk platform to give your DevOps or ops teams the operational insight they need. We will demonstrate how Splunk can be used to provide historical and real-time visibility into your API applications, and much more.
Originally given at Code on the Beach 2015, Jacksonville, FL.
https://www.codeonthebeach.com
Brian Ritchie, Chief Information Officer, Payspan
Kinjal Mehta, Manager of Systems Development, Peak 10
Docker in Production: How RightScale Delivers Cloud Applications – RightScale
Combining Docker, cloud infrastructure, and continuous integration and delivery practices can create a highly automated and efficient way to get new applications and features to market. The RightScale development team has been using Docker from development to continuous integration, and now the operations team has taken Docker into the production environment.
The Docker in Production: How RightScale Delivers Cloud Applications webinar will cover:
• Approach and use case for adopting Docker
• How RightScale has adopted Docker for development, CI, and production
• Overcoming technical and process challenges
• The RightScale process before and after Docker
• Benefits for both developers and operations teams
DevOps on AWS: Accelerating Software Delivery with the AWS Developer Tools – Amazon Web Services
Learn more about the processes followed by Amazon engineers and discuss how you can bring them to your company by using AWS CodePipeline and AWS CodeDeploy, services inspired by Amazon's internal developer tools and DevOps culture.
API and App Ecosystems - Build The Best: a deep dive – Cisco DevNet
A session in the DevNet Zone at Cisco Live, Berlin. This presentation shares our perspective and guidance on full life-cycle management and governance of APIs: defining with the customer in mind, building, publishing on a single platform, supporting, and retiring APIs for the business outcomes you're driving.
Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real-time? The answer is stream processing, and the technology that has since become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and AirBnB, but also established players such as Goldman Sachs, Cisco, and Oracle.
Unfortunately, today’s common architectures for real-time data processing at scale suffer from complexity: there are many technologies that need to be stitched and operated together, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work vs. how we actually end up working in practice.
In this session we talk about how Apache Kafka helps you to radically simplify your data processing architectures. We cover how you can now build normal applications to serve your real-time processing needs — rather than building clusters or similar special-purpose infrastructure — and still benefit from properties such as high scalability, distributed computing, and fault-tolerance, which are typically associated exclusively with cluster technologies. Notably, we introduce Kafka’s Streams API, its abstractions for streams and tables, and its recently introduced Interactive Queries functionality. As we will see, Kafka makes such architectures equally viable for small, medium, and large scale use cases.
Protecting your data at rest with Apache Kafka by Confluent and Vormetric – Confluent
Learn how data in motion is secure within Apache Kafka and the broader Confluent Platform, while data at rest can be secured by solutions like Vormetric Data Security Manager.
OpenStack: Toward a More Resilient Cloud – Mark Voelker
Since its inception over four years ago, OpenStack has become the most popular open source software for building many types of clouds, in part due to the flexibility it provides. As adoption increases, so has interest in building OpenStack clouds on a highly available control-plane infrastructure. In this talk we will provide an introduction to today's OpenStack community and software, then dive deeper into how to build more highly available, scalable OpenStack architectures. See more at: http://www.percona.com/news-and-events/percona-university-smart-data-raleigh/openstack-toward-more-resilient-cloud
Scale your application to new heights with NGINX and AWS – NGINX, Inc.
On-demand Link:
https://www.nginx.com/resources/webinars/scale-application-new-heights-nginx-aws/
In this webinar we will discuss how AWS and NGINX can complement each other to create highly scalable, high-performance, and secure web applications. We will cover the different ways that NGINX can integrate with AWS services such as NLB, Route 53, and PrivateLink to add new layers of security and functionality to your high-traffic website, streaming service, or IoT system.
Scala Security: Eliminate 200+ Code-Level Threats With Fortify SCA For Scala – Lightbend
Join Jeremy Daggett, Solutions Architect at Lightbend, to see how Fortify SCA for Scala works differently from existing Static Code Analysis tools to help you uncover security issues early in the SDLC of your mission-critical applications.
This slide deck covers spinning up a demo of ELK using Vagrant, and focuses on why aggregated logging is important, how it can add value, help enable collaboration, and enhance 'Continual Service Improvement'.
Shared Security Responsibility Model of AWS – Akshay Mathur
Many people say that they need not worry about the security of their application (or that it is automatically PCI compliant) just because the application is hosted in AWS EC2.
This was presented at an AWS meetup to make it clear to the audience that security is a shared responsibility. While AWS takes care of security at L1 and L2 and provides tools for L3 and L4, we need to take care of security at L7 (the application layer).
OSMC 2021 | Use Open Source monitoring for an Enterprise-Grade Platform – NETWAYS
There are many tools and frameworks for monitoring. Usually when you think of an open source solution, you don't think of implementing it in a COTS product. Nevertheless, this session will show how you can implement tools such as Prometheus, Grafana, and ELK in such an enterprise application platform. Monitoring performance, throughput, and error rate is important for staying in control of your transactions. If you use a Service Bus or SOA/BPM suite product, there are a lot of out-of-the-box diagnostics waiting for you; the puzzle is how to get them out in a useful way. Besides the many commercial solutions, open source tools can help you out here: you can export runtime diagnostics from the Diagnostics Framework, monitor your SOA composites, and trace Service Bus statistics using Prometheus and Grafana. The session will elaborate on how to set up proper monitoring with these tools, in a proactive way, since automated monitoring is a must for every application environment.
Using Apache Camel for microservices and integration, then deploying and managing on Docker and Kubernetes. When we need to make changes to our app, we can use Fabric8 continuous delivery, built on top of Kubernetes and OpenShift.
In this talk, Marco and Shashi go in depth on the Kong Mesosphere DC/OS integration and how it enables developers to deploy Kong on a Mesosphere DC/OS cluster to simplify operations and achieve higher resource utilization.
APIs: Intelligent Routing, Security, & Management – NGINX, Inc.
Kevin Jones, Global Consulting Engineer at NGINX San Francisco, presents how to accelerate your journey to microservices with a modernised full API lifecycle management solution. Learn how to cut costs, improve performance, and reduce load on API endpoints. This presentation covers:
• All elements of full lifecycle management, including API creation, securing your backend infrastructure, managing traffic, and ongoing monitoring
• Innovative architecture that doesn't involve additional microgateways to process API calls
• Differentiated pricing model that does not penalize API adoption
In the cloud generation era, the constant activity around workloads and containers creates more vulnerabilities than an organization can keep up with. Relying on legacy security vendors doesn't set you up for success in the cloud. You're likely spending undue hours chasing, triaging, and patching a countless stream of cloud vulnerabilities with little prioritization.
Join us for this live webinar as we detail how to streamline host and container vulnerability workflows for your software teams wanting to build fast in the cloud. We'll be covering how to:
• Get visibility into active packages and associated vulnerabilities
• Reduce false positives by 98%
• Reduce investigation time by 30%
• Spot a legacy vendor looking to do some cloud washing
An introduction to using HashiCorp Vault with your Node.js application: how to store your secrets when building a cloud application in Node.js. Meetup in Austin, Texas, May 2019 (https://www.meetup.com/austinnodejs/events/srwjzqyzhbtb/)
Over the past decade and a half our industry has tried to adopt ever more infrastructure automation. We called it Configuration Management, Infrastructure as Code, Infrastructure as Software, Provisioning, Orchestration. We learned about Desired State, Idempotence, etc. We have seen a number of tools become popular; we have seen a number of tools disappear. But over the years we have seen a number of patterns appear and reappear: patterns that lead to actually getting great benefits out of automation, or to just wasting time while missing out on goals. This talk will explain a number of these patterns, which we have frequently encountered in the wild, with their benefits and caveats. We will try to keep this tool-agnostic. Your vision might be Clouded, and you might have to take this with a grain of Salt while you play the Chef from the Muppet Show. The story, all names, characters, and incidents portrayed in this production are fictitious. No identification with actual persons (living or deceased), places, buildings, or products is intended or should be inferred.
Presentation done at the November meeting of the Sudoers Barcelona group (https://www.meetup.com/sudoersbcn/).
HashiCorp Vault (https://www.vaultproject.io/)
"Vault és una eina per emmagatzemar i gestionar secrets. Veurem què ofereix, com instal·lar-la, utilitzar-la i operar-la, i la nostra experiència."
A presentation on why (or why not) microservices, why a platform is important, how to break down a monolith, and some of the challenges you'll face (data, transactions, boundaries, etc.). The last section introduces Istio and service mesh. Follow @christianposta on Twitter for updates and more details.
Trent Hornibrook gave a recent talk at the Infracoders meetup, playing a thought experiment with the audience: 'what would your tech decisions be if you were given a blank cheque at a startup?'
Trent, who recently worked for a start-up, then shared what decisions he made, and why.
InVision is a collaborative design company that’s growing into Golang. That being said, when we started doing web services, we looked at using one of the middleware libraries out there such as Alice and Negroni. We found them all interesting but decided to tackle it on our own. As we did that we realized that our library was pretty cool so we broke it out and open sourced it as Rye. I’ll present on the approach we took and some of the benefits of using Rye including integration with Statsd, Context and custom middleware handlers we’ve added such as CIDR validation and JWT validation.
Design Patterns are not only cool but also bring years of collective wisdom to developers of every level. Since the GoF, many books have been written and much ink spilled, and many new concepts, such as Enterprise and Domain design patterns, have extended the coverage of the Design Patterns originally shared by the famous Gang of Four. Unlike in the J2EE 1.4 era, Java EE provides easy, out-of-the-box implementations of many well-known design patterns such as Singleton, Façade, Observer, Factory, Dependency Injection, Decorator, Data Access patterns, MVC, and more. Many classical design patterns are actually just one annotation away from your project.
The slides from the talk I gave in Java.IL's Apr 2019 session.
These slides describe Keycloak, OAuth 2.0, OpenID and SparkBeyond's integration with Keycloak
Pentest Apocalypse: that's when you hire a pentester and they walk all over your network. To avoid this, organizations need to be prepared before the first packet is sent, in order to get the most value from the tester. There is no excuse for pentesters to find critical six-year-old vulnerabilities on an assessment. And who needs a zero-day when employees leave credentials on wide-open shares? Just like Doomsday Preppers helps you prepare for the apocalypse, this presentation will help you prepare for, and avoid, a pentest apocalypse by describing common vulnerabilities found on many assessments. Being prepared for common pentester activities will not only help add value to a pentest but will also help prevent attackers from using the same tactics to compromise your organization.
For more information please visit: http://bsidestampa.net
http://www.irongeek.com/i.php?page=videos/bsidestampa2015/104-pentest-apocalypse-beau-bullock
Kubernetes is an open source container cluster orchestration platform founded by Google. This presentation covers an overview of its main concepts, plus how it fits into Google Cloud Platform. It was delivered by Kit Merker at DevNexus 2015 in Atlanta.
Anyone who has tried integrating search into their application knows how good and powerful Solr is, but has always wished it were simpler to get started and to take to production.
I will talk about the features recently added to Solr that make it easier for users, and some of the changes we plan to add soon to make the experience even better.
The ColdBox cbsecurity module is a collection of modules to help secure your ColdBox applications. In this session, we will explore all the features behind CBSecurity 3. We will build an application using the module to showcase authentication, authorization, and JWT authentication.
https://coldbox-security.ortusbooks.com/
https://intothebox.org
https://cfcasts.com/
Consul is a Service Networking tool designed to connect applications and services across a multi-cloud world. With Consul, organizations can manage service discovery and health monitoring, automate their middleware and leverage service mesh to connect virtual machine environments and Kubernetes clusters.
See what deploying across polycloud environments using cross-workloads looks like in HashiCorp Nomad. And See Consul tie these workloads together with secure routing.
An important use-case for Vault is to provide short lived and least privileged Cloud credentials. In this webinar we will review specifically how Vault's Azure Secrets Engine can provide dynamic Azure credentials. We will cover details on how to configure the Azure Secrets Engine in Vault and use it in an application. If you are using Azure now or in the near future, join us for some patterns on maintaining a high security posture with Vault's dynamic credentials model!
Migrating from VMs to Kubernetes using HashiCorp Consul Service on Azure – Mitchell Pronschinske
DevOps tools became very popular with the adoption of public cloud, but Operational teams now realize that their benefits can be extended to enterprise data centers. In reality, cloud native tools can help bridge public clouds and private data centers by enabling a common framework to manage applications and their underlying infrastructure components.
In this session you’ll learn about the latest Cisco ACI integrations with Hashicorp Terraform and Consul to deliver a powerful solution for end-to-end on-prem and cloud infrastructure deployments.
Empowering developers and operators through GitLab and HashiCorp – Mitchell Pronschinske
Companies digitally transforming themselves into modern, software-defined businesses are building their foundation on cloud native solutions like GitLab and HashiCorp. Together, GitLab, Terraform, and Vault are empowering organizations to be more iterative, flexible, and secure. Join us in this session to learn more about how GitLab and HashiCorp are lowering the barrier of entry to industrializing the application development and delivery process across the entire application lifecycle.
Automate and simplify multi-cloud complexity with F5 and HashiCorp – Mitchell Pronschinske
In this session, Lori Mac Vittie, principal technology evangelist at F5, discusses digital transformation and how F5 and HashiCorp are working together to unlock the full potential of the cloud.
In this webinar we will cover the new features in Vault 1.5. This release introduces several new improvements along with new features around the following areas: Usage Quotas for Request Rate Limiting, OpenShift Helm Support (beta), Telemetry and Monitoring Enhancements, and much more. Join Vault technical marketer Justin Weissig as he demos Vault 1.5's new features.
Integrated Storage, a key feature now available in Vault 1.4, can streamline your Vault architecture and improve performance. See demos and documentation of its use cases and migration process.
Learn how Cisco ACI and HashiCorp Terraform can help you increase productivity while reducing risks for your organization by managing infrastructure as code.
HashiCorp Nomad is an easy-to-use and flexible workload orchestrator that enables organizations to automate the deployment of any applications on any infrastructure at any scale across multiple clouds. While Kubernetes gets a lot of attention, Nomad is an attractive alternative that is easy to use, more flexible, and natively integrated with HashiCorp Vault and Consul. In addition to running Docker containers, Nomad can also run non-containerized, legacy applications on both Linux and Windows servers.
Terraform allows you to define your infrastructure as code. Variables and modules empower you to extend and reuse your Infrastructure as Code. With the Consul provider for Terraform, you can also let your Consul KV data drive your Terraform runs.
Learn from HashiCorp Vault engineer Nick Cabatoff how you can ensure that you actually use Vault effectively to allow no potential leaks of secret credentials, apis, or certs.
Watch this succinct guide to the benefits of modern scheduling and how HashiCorp Nomad can help you move your organization toward more modern deployment patterns.
See a demo of HashiCorp Consul Service (HCS) on Azure and learn how it could be used to migrate from monolithic, VM-based apps to microservices running on Kubernetes.
May Marketo Masterclass, London MUG, May 22 2024 – Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Artificial Intelligence and XPath Extension Functions – Octavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️ – Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
An Enterprise Resource Planning system includes various modules that reduce a business's workload and organize its workflows, driving enhanced productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
To know more, see: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Navigating the Metaverse: A Journey into Virtual Evolution – Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
OpenMetadata Community Meeting - 5th June 2024 – OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed the data quality capabilities that are integrated with the Incident Manager, providing a complete solution for your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge to organize and improve your code review process.
A Study of Variable-Role-based Feature Enrichment in Neural Models of Code – Aftab Hussain
Understanding variable roles in code has been found to help students learn programming -- could variable roles also help deep neural models perform coding tasks? We do an exploratory study.
These are slides of the talk given at InteNSE'23: the 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering (ICSE 2023), Melbourne, Australia.
Transform Your Communication with Cloud-Based IVR Solutions – TheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Quarkus Hidden and Forbidden Extensions – Max Andersen
Quarkus has a vast extension ecosystem and is known for its supersonic, subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus, and some of the lesser-known features, extensions, and development techniques.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... – Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. This is where custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
GraphSummit Paris - The art of the possible with Graph Technology – Neo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
How we accelerated our Vault adoption with Terraform
1. Lucy Davinhart – Sky Betting & Gaming
How we accelerated our Vault adoption, with Terraform
2. 👋 Who am I?
• Senior Automation Engineer
• @ Sky Betting & Gaming
• Part of The Stars Group
• Delivery Engineering Squad
• Part of the Infrastructure & Platforms Tribe
• Among other things we…
• Look after our Vault clusters
• Maintain Vault integrations & tooling
• Control access to AWS (via Vault!)
• Support internal customers
@LUCYDAVINHART @SBGTECHTEAM
3. 🔐 What do we use Vault for?
• Across the company, our Vault users are:
• > 4000 Virtual Machines
• > 500 humans
• > 250 various AppRoles
• And a few more for Kubernetes Auth and AWS Auth
• Main features we use:
• K/V Secrets
• PKI
• AWS Credentials
@LUCYDAVINHART @SBGTECHTEAM
4. 💬 This Talk
• Our problems managing Vault and onboarding people
• How we went about solving them
• Our initial Terraform solution
• How we have improved it over time
• The Future
@LUCYDAVINHART @SBGTECHTEAM
6. ✍️ Everything Manual
• Time consuming for us to make changes
• Making the changes
• Comparing policies, AppRoles, LDAP groups, etc.
• Time consuming to see what was in Vault already
• We were regularly asked to troubleshoot why User A doesn’t have access to Secret B
• Lack of standards / best practices
• (and we didn’t really know what we were doing initially)
• Automating Stuff is Cool 😎
@LUCYDAVINHART @SBGTECHTEAM
7. 🧞♀️ We were too powerful
• We started out with full admin rights and access to everything
• Configure all the auth and secret mounts
• Read and write to all the secrets
• Give ourselves any policies we needed
• But at least none of us had root tokens, right? 😱
@LUCYDAVINHART @SBGTECHTEAM
8. 🙈 Lack of Audit Trail
• What was changed?
• When was it changed?
• Who changed it?
• How did it change?
• Why did they change it?
@LUCYDAVINHART @SBGTECHTEAM
10. Vault Config Ruby Gem
• Downloads Vault config (policies, AppRoles, LDAP groups, etc) and saves in git repo
• Jenkins job to run this on a schedule
• We now have configuration backups, so we can see what has changed and when
• But not necessarily who or why
• Written very quickly:
• Was useful very quickly
• Was not particularly maintainable
@LUCYDAVINHART @SBGTECHTEAM
11. Goldfish Vault UI
• A Vault UI, before one was available in Open Source Vault
• Policy Request feature
• Users edited policies in the UI, and submitted for approval
• Vault admins review changes and apply
@LUCYDAVINHART @SBGTECHTEAM
13. Terraform
• Codifies APIs into declarative configuration files
• Reproducible Infrastructure as Code
@LUCYDAVINHART @SBGTECHTEAM
Terraform Code
resource "vault_policy" "ravenclaw" { … }
resource "vault_policy" "hufflepuff" { … }
Terraform State
vault_policy.ravenclaw
vault_policy.slytherin
Terraform Plan
+ vault_policy.hufflepuff
- vault_policy.slytherin
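To make that model concrete, here is a minimal sketch of the two policy resources above, with invented policy bodies (the house names are the slide's own placeholders). With vault_policy.slytherin still in the state but absent from the code, terraform plan would propose creating hufflepuff and deleting slytherin:

resource "vault_policy" "ravenclaw" {
  name   = "ravenclaw"
  policy = <<EOT
path "secret/ravenclaw/*" {
  capabilities = ["read", "list"]
}
EOT
}

resource "vault_policy" "hufflepuff" {
  name   = "hufflepuff"
  policy = <<EOT
path "secret/hufflepuff/*" {
  capabilities = ["read", "list"]
}
EOT
}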
14. 🧞 Terraform Pipeline Design Decisions
• Look like the Vault API as much as possible
• Files which match the Vault API, e.g. sys/policy/foo.json
@LUCYDAVINHART @SBGTECHTEAM
16. 🧞 Terraform Pipeline Design Decisions
• Look like the Vault API as much as possible
• Files which match the Vault API, e.g. sys/policy/foo.json
• (Initially) Take output from Ruby Gem as input
• Pull Requests to make changes
• Start with Policies, our most common request
• Everything in the repo in Vault
Nothing in Vault that was not in the repo
@LUCYDAVINHART @SBGTECHTEAM
[Venn diagram: config only in Vault gets deleted; config only in the repo gets created; config in both Vault and the repo is kept in sync]
28. Init
• Ensures we have valid AWS credentials
• We store Terraform State in S3
• Dynamic AWS credentials from Vault
• terraform init
• Accesses remote Terraform State
• Downloads dependencies
• terraform workspace select test/prod
• Allows us to maintain separate Terraform State for different Vault clusters
@LUCYDAVINHART @SBGTECHTEAM
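As an illustration, a state backend along these lines would give the behaviour described above; the bucket name and region are hypothetical, not the team's actual configuration:

terraform {
  backend "s3" {
    bucket = "example-vault-terraform-state" # hypothetical bucket name
    key    = "vault-config.tfstate"
    region = "eu-west-1"
  }
}

With the S3 backend, each Terraform workspace stores its own state object in the same bucket, which is what lets one repo drive both the test and prod clusters.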
29. Import
• Lists resources in Vault
• Lists resources in Terraform State
• Imports resources not in Terraform State
@LUCYDAVINHART @SBGTECHTEAM
[Diagram: resources that exist in Vault but not yet in the Terraform state get imported]
30. Generate
• Converts from files representing the Vault API into Terraform code
@LUCYDAVINHART @SBGTECHTEAM
resource "vault_policy" "example"
{
name = "dev-team"
policy = <<EOT
path "secret/my_app" {
capabilities = [”read”]
}
EOT
}
path "secret/my_app" {
capabilities = [”read”]
}
31. Validate
• terraform validate
• Ensures all generated Terraform code is syntactically correct
• Resource-specific checks
• Check for common human errors e.g.
• Types of certain resources (e.g. LDAP groups, AD users)
• Some case sensitivity issues
• Most of these are actually done in the Generate phase
@LUCYDAVINHART @SBGTECHTEAM
37. LDAP Groups
• One of the most common requests, after policies
• Initially: vault_generic_secret
• Resource to manage arbitrary Vault paths
• Later: vault_ldap_auth_backend_group
• Dedicated LDAP group resource
• LDAP Restructure: Only allow certain LDAP groups to be mapped to policies
• ✅ PG-Vault-Foo
• 🚫 SG-MyTeam
@LUCYDAVINHART @SBGTECHTEAM
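For illustration, a minimal sketch of that dedicated resource, with hypothetical policy names and the group naming convention from the slide:

resource "vault_ldap_auth_backend_group" "pg_vault_foo" {
  backend   = "ldap"
  groupname = "PG-Vault-Foo"
  policies  = ["foo-read"] # hypothetical policy
}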
38. AppRoles
• Another of the most common requests, after policies
• We introduced Terraform variables for CIDR ranges:
@LUCYDAVINHART @SBGTECHTEAM
variable "cidr_range_prod_jenkins_agents" {
type = "list"
default = [
”1.2.3.4/30", # Production Site A Jenkins Agents
”2.3.4.5/30", # Production Site B Jenkins Agents
...
]
}
39. AppRoles
• Another of the most common requests, after policies
• We introduced Terraform variables for CIDR ranges:
@LUCYDAVINHART @SBGTECHTEAM
{
  "token_bound_cidrs": "${var.cidr_range_prod_jenkins_agents}",
  "policies": [
    "default",
    "terraform_vault-readonly"
  ],
  "token_max_ttl": 120
}
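The pipeline turns JSON files like the one above into Terraform; the generated code might resemble this sketch (the role name is invented, and argument names vary between provider versions — older releases used policies and bound_cidr_list rather than the token_-prefixed forms):

resource "vault_approle_auth_backend_role" "jenkins" {
  backend           = "approle"
  role_name         = "jenkins" # hypothetical role name
  token_policies    = ["default", "terraform_vault-readonly"]
  token_bound_cidrs = var.cidr_range_prod_jenkins_agents
  token_max_ttl     = 120
}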
40. Kubernetes Auth Roles
• The team managing the k8s clusters wrote this one for us!
• Effort needed by them:
• Write Import Script, based on existing scripts
• Write Generate Script, based on existing scripts
• Effort needed by us:
• Review their scripts
@LUCYDAVINHART @SBGTECHTEAM
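A hypothetical example of what one of those generated Kubernetes auth roles might look like (all names and TTLs invented; older provider versions used policies and ttl instead of the token_-prefixed arguments):

resource "vault_kubernetes_auth_backend_role" "my_app" {
  backend                          = "kubernetes"
  role_name                        = "my-app"
  bound_service_account_names      = ["my-app"]
  bound_service_account_namespaces = ["my-team"]
  token_policies                   = ["my-app-read"]
  token_ttl                        = 3600
}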
41. AWS Auth Roles
• Some auto-generation of resources
• Get all AWS Account IDs with:
aws organizations list-accounts
• Generate resources:
@LUCYDAVINHART @SBGTECHTEAM
resource "vault_aws_auth_backend_sts_role" "role" {
backend = ”aws"
account_id = "1234567890"
sts_role = "arn:aws:iam::1234567890:role/my-role"
}
42. Active Directory Users
• ad/roles/:role_name
• has a few fields you can’t write to
@LUCYDAVINHART @SBGTECHTEAM
{
"last_vault_rotation": "2018-05-24T17:14:38.677370855Z",
"password_last_set": "2018-05-24T17:14:38.677370855Z",
"service_account_name": "my-application@example.com",
"ttl": 100
}
43. Active Directory Users
• vault_generic_endpoint resource
@LUCYDAVINHART @SBGTECHTEAM
resource "vault_generic_endpoint" "ad_role-vaulttest" {
path = "ad/roles/vaulttest”
data_json = ‘{"service_account_name": ”VaultTest@fancycorp.net"}’
# When reading, the secret contains keys that cannot be written:
# password_last_set (when did the password last get updated)
# last_vault_rotation (when did Vault last update the password)
ignore_absent_fields = true
}
45. 🎉 What Did All This Give Us?
• Time
• Individual changes take less of our time, so we can handle more requests
• Visibility
• Easier to see what’s in Vault
• Easier to debug
• Auditability
• Who, What, When, How, Why
• grep-ability / Searchability
• Find common patterns
• Identify issues before they become problems
• Reducing our own permissions
• Lots of configuration can no longer be done by humans
@LUCYDAVINHART @SBGTECHTEAM
47. 🆕 New Resources
• PKI
• dynamic X.509 certificates
• Sentinel Policies
• Richer access control functionality than ACL policies
• Namespaces
• Self-managed sub-Vaults
@LUCYDAVINHART @SBGTECHTEAM
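The slide only names these as planned resources; purely as a sketch, Terraforming a PKI mount and an issuing role could look like this (paths, domains, and TTLs are hypothetical):

resource "vault_mount" "pki" {
  path                  = "pki"
  type                  = "pki"
  max_lease_ttl_seconds = 86400
}

resource "vault_pki_secret_backend_role" "web" {
  backend          = vault_mount.pki.path
  name             = "web"
  allowed_domains  = ["fancycorp.net"] # hypothetical domain
  allow_subdomains = true
}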
48. 🧞 Auto Generation
• AWS accounts all have standard permissions, which correspond to at least…
• 2x Vault Policies per account
• 2x LDAP Groups per account
• Auto-Generated PRs for common functionality
• Service Discovery for AppRole CIDR ranges
@LUCYDAVINHART @SBGTECHTEAM
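The deck doesn't show the generation mechanism (it emits auto-generated PRs rather than Terraform loops), but purely as an illustration, the per-account boilerplate has a shape that for_each captures well (account names, IDs, and paths hypothetical):

variable "aws_accounts" {
  type    = map(string) # account name => account ID
  default = { sandbox = "1234567890" }
}

resource "vault_policy" "aws_read" {
  for_each = var.aws_accounts
  name     = "aws-${each.key}-read"
  policy   = <<EOT
path "aws/creds/${each.key}-read" {
  capabilities = ["read"]
}
EOT
}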
49. 🧞🧞 Review Security Trade-Offs
• 2FA to apply changes
• e.g. require 2 Factor Auth before a human can grant Jenkins read/write access
@LUCYDAVINHART @SBGTECHTEAM
[Flow diagrams: with 2FA, Jenkins requests read/write → a human runs a command → 2FA prompt → the human pastes the token into Jenkins. With Control Groups, Jenkins requests read/write → a first human runs a command → a second human runs a command → the first human pastes the token into Jenkins.]
• Enterprise Control Groups
• e.g. require multiple humans to grant Jenkins read/write access
50. 🧞🧞♀️ More validation before a PR can be merged
• Check resources for sensible parameters
• e.g. TTLs, num_uses, etc.
• Check if Vault has required permissions before approving PRs
• e.g. check if AWS account is in Organization
• e.g. check if AD user is in correct Organizational Unit
• Case sensitivity check on LDAP groups
• We have a script to manually check this
• Deploy to a local dev Vault
• For testing new features in the pipeline
@LUCYDAVINHART @SBGTECHTEAM
52. 👤 Make it Generic
• Allow pipeline to be run against child namespaces
• Config for each namespace stored in different repos
• Delegate permissions to other teams
@LUCYDAVINHART @SBGTECHTEAM
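Pointing the pipeline at a child namespace could be as simple as setting the Vault provider's namespace argument; a minimal sketch, assuming Vault Enterprise namespaces and hypothetical values:

provider "vault" {
  address   = "https://vault.example.com" # hypothetical cluster address
  namespace = "team-a"                    # child namespace delegated to another team
}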
Morning!
Say you’re a small team of a couple of people. In charge of managing the company Vault cluster.
Say hundreds of people across the company want to make use of Vault, across thousands of systems, each with their own granular level of access.
So maybe you expect to get a dozen requests daily.
Your team also manages several other services, so you can’t dedicate all your time to Vault. How are you going to manage that?
You don’t want to hand out too many admin permissions, because that inevitably leads to the too-many-admins problem. Let’s imagine it’s also 2017, so you’ve not convinced your finance department to pay for Enterprise Vault, not that Namespaces exist to help yet anyway.
What are you gonna do? Well I’ll tell you how we did it.
We started our Vault journey back in late 2016.
I’m going to touch on some of the problems we had back then, how we approached solving them initially, and how things improved for us as a result of using Terraform.
And as this is a journey that never ends, I’m going to talk about some things we haven’t done yet.
=== 2m / -33m ===
So, some of the problems we had early on, which were blockers to us using Vault in production
We’re pretty good at config management at SBG, so actually installing Vault was automated from the get-go.
But actual configuration of the service once it was running? That was manual.
We were new to Vault, so naturally doing things took a while, but even as we gained more experience with the product there were still some things which took time. Not just in terms of configuring Vault, but also helping people figure out what access they had and why.
Writing or making changes to policies for example, especially when that involved comparing to existing policies, took a while.
And when all of that config isn’t stored anywhere except Vault itself, it was often too time consuming to properly compare things. So we ended up with many similar things being done in very different ways.
We also just like automating stuff. It’s in our job titles, after all.
Because we had to do everything ourselves manually, we for the most part felt we had to have access to everything.
With everything done by hand, we had to be able to actually do everything.
Vault's audit logs are great, and we were shipping those off to our Elastic Stack from very early on.
But they can only answer so many questions.
[click] For example, you can reasonably easily find out that somebody has written to a particular policy, who they are, and when they did it.
[click] But the audit logs don’t tell you what changes they made, or why.
And we can’t have infinite retention on those logs, so anything older than the retention period is gone
=== 4m 30s / - 30m 30s ===
So on our way to using Vault in production, we needed to put something in place to solve these problems, even if it was only going to be temporary.
First problem we looked at was keeping track of changes over time. We wrote a Ruby gem which we ran as a scheduled Jenkins job
It iterated through paths in Vault and read them, saving the content in a git repo.
This included:
Policies
LDAP Groups
AppRoles
Specifically, we do not back up secrets!
We put this together pretty quickly and, as a result, we now had a better ability to see what changed and when, but still no visibility on who or why. And were still making changes manually.
But it wasn’t very good. I’m allowed to say that, because I wrote it.
It was originally going to be used to make changes in Vault, but it was too clunky and we didn't really have confidence in it to grant it write access.
Next problem we tackled was the time it took for us to make some changes.
We deployed a tool called Goldfish, which was primarily a Vault UI before Open Source Vault had one, which was useful at this point as we had not yet migrated to the Enterprise version.
Our justification for spinning it up was its policy request feature, which made it much simpler for people to request changes to policies or add new policies.
Users edited directly in the UI, were given a policy approval token, which they then sent to us for review. We’d approve it and apply it.
We still had to map those policies to LDAP groups and AppRoles, but this made things a little easier for us for a while.
=== 6m 45s / - 28m 15s ===
With those two things in place we had enough to assure people Vault was ready for Production, and we had more time to focus on doing things better.
If you’ve decided to come to my talk, then I’m assuming you know what Vault is.
But you may not know what Terraform is.
If that’s you… in simple terms, it allows you to write code to define your resources in a declarative way.
Typically this is things like cloud infrastructure, but it can be anything with an API.
[click]
You write your code to define what you want your stuff to look like [click], Terraform keeps track of the state of your resources, which lets it [click] figure out how to get it from the state it’s in now to the state you want it to be in.
We’d been using it for a while for other things, and discovered that there was a Vault provider.
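So, in the same way you'd manage cloud resources, you can write something like this. A minimal sketch of that loop (names invented; assumes VAULT_ADDR and VAULT_TOKEN are set in the environment):

```
# Declare what should exist: here, a KV secrets engine mounted at kv-example/.
cat > main.tf <<'EOF'
provider "vault" {}

resource "vault_mount" "example" {
  path = "kv-example"
  type = "kv"
}
EOF

terraform init   # fetch the Vault provider
terraform plan   # diff the desired state against Terraform's state file
terraform apply  # make Vault match what the code declares
```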
We didn’t want to just create a repo with raw Terraform code.
For a start, that would mean our users would have to learn Terraform at the same time as learning Vault.
So we wanted it to resemble the Vault API as much as possible, on disk.
So files in the right directories, parameters matching what you'd get with the API, etc.
Partly this was to allow our users to learn more about how Vault works than would be the case if we abstracted things away.
Particularly useful for when users wanted to request multiple interacting resources.
And partly it was to reduce the learning curve on our users. Compare the file on the left to the file on the right. The left is some Terraform code to write a Vault policy. The right is just the policy.
While this example is fairly simple, we don’t want to have to make our users learn the syntax on the left when all they really care about is what’s on the right.
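For anyone reading this without the slides, the two files looked roughly like this (names and paths invented):

```
# Left-hand file: the raw Terraform a user would otherwise have to write.
cat > policy-as-terraform.tf <<'EOF'
resource "vault_policy" "my-app" {
  name   = "my-app"
  policy = <<EOT
path "secret/my-app/*" {
  capabilities = ["read"]
}
EOT
}
EOF

# Right-hand file: what actually lives in our repo -- just the policy itself,
# in a directory layout that mirrors the Vault API.
mkdir -p policies
cat > policies/my-app.hcl <<'EOF'
path "secret/my-app/*" {
  capabilities = ["read"]
}
EOF
```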
Initially, there would be some overlap between us configuring Vault using this Terraform and configuring things manually. So we wanted the pipeline to take the output of the Ruby gem as input, so Terraform didn't try to delete anything we hadn't yet written code for. Making the repo resemble the Vault API was also useful for that.
[click] We wanted people to be able to raise Pull Requests to make changes, so we could track who has made changes, who has approved them, what JIRA tickets they’re linked to, etc.
[click] While we wanted to Terraform as much as possible, we were only going to start with policies to begin with, as those were our most common request. There were about 300 of them by the time we started this project in May 2018. We now have over 1000.
[click] Finally, we wanted to make sure that anything that was not in the repo got deleted from Vault. The idea behind this being that there should be no unauthorized or unexplained changes to Vault. In Prod, we have restricted permissions, so we don’t even have the ability to do that. But in test, the rules are more relaxed, so it’s useful to be able to reset Vault to a known state.
=== 10m 15s / - 24m 45s ===
=== 12m 45s / - 22m 15s ===
The pipeline is run as a Jenkins job, and with the exception of a few things, each stage corresponds to a makefile phony target.
The idea being, you should be able to run the whole thing locally, which helps a lot when making changes to it.
If you run make help, you see all the stages of the pipeline. A few of them have dependencies on the init stage, and a few can be run standalone.
The init stage makes sure we have credentials to access our Terraform state.
We store this in Amazon S3, so naturally we get our AWS creds out of Vault.
We do a Terraform init to ensure we have all the necessary dependencies.
And we make use of Terraform workspaces, which allows us to maintain separate Terraform state files for each of our Vault clusters.
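Roughly, the init stage does something like this (the secrets engine path and role name are invented, and it assumes jq is on the PATH):

```
# Get short-lived AWS credentials for the state bucket out of Vault.
creds=$(vault read -format=json aws/creds/terraform-state)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r .data.access_key)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r .data.secret_key)

# Pull down the providers and wire up the S3 backend.
terraform init

# One state file per Vault cluster, via workspaces.
terraform workspace select "$VAULT_CLUSTER" || terraform workspace new "$VAULT_CLUSTER"
```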
The import phase is our fail-secure mechanism, to make sure nothing is in Vault which shouldn’t be. Thinking back to the Venn Diagram, we’re checking for things which are in the left half of the diagram.
That shouldn't happen too often, as we don't have the permissions to make manual changes, but if it does then we can investigate.
For each of the resources we support, we have a script which:
Lists all resources of that type in Vault
Lists all the resources in the Terraform state, i.e. those which Terraform knows about
Imports into the Terraform state whatever is in Vault that Terraform doesn’t know about.
The idea being, if we’ve told Terraform that it exists, but we haven’t written code to say that it's supposed to exist, then Terraform will delete it.
I’ve simplified it a little, but an example looks a bit like this
List all policies in Vault
List all policies Terraform knows about
Import anything Terraform doesn’t know about
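Reconstructed for the write-up, the policy version is approximately this (not our exact script):

```
# List all policies in Vault, skipping the built-in ones.
vault_policies=$(vault policy list | grep -v -e '^root$' -e '^default$')

# List all the policies Terraform already knows about.
state_policies=$(terraform state list | grep '^vault_policy\.' || true)

# Import anything that's in Vault but missing from the Terraform state.
for policy in $vault_policies; do
  if ! echo "$state_policies" | grep -qxF "vault_policy.${policy}"; then
    terraform import "vault_policy.${policy}" "$policy"
  fi
done
```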
We have a script like this for each of the resources we support.
We skip this stage when validating a pull request. It’s not really needed at that point.
Then we come to the generate phase, where we actually make Terraform code
Using policies as an example…
The generate scripts look similar to this. Again, I've slightly simplified.
We iterate over all policy files in the repo
Get the name of the policy by stripping the file extension, and converting to lowercase
Then, for each policy file, we create two resources:
A template file resource, so we can use the content of the policy file
A Vault policy, which uses that template
This gets saved to an ephemeral .tf file
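A reconstruction of that script, with an illustrative repo layout and resource names:

```
for policy_file in policies/*.hcl; do
  # Policy name = filename minus extension, lowercased.
  name=$(basename "$policy_file" .hcl | tr '[:upper:]' '[:lower:]')

  # Append the resource pair to an ephemeral .tf file.
  cat >> generated-policies.tf <<EOF
data "template_file" "policy_${name}" {
  template = file("${policy_file}")
}

resource "vault_policy" "${name}" {
  name   = "${name}"
  policy = data.template_file.policy_${name}.rendered
}
EOF
done
```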
Validate doesn’t actually do much beyond verifying that the Terraform code we have generated is syntactically correct.
There is some validation done during the generate phase, which I’ll touch on later
At this point, we have a Terraform State which corresponds to everything in Vault
We have generated Terraform code which corresponds to everything we want to be in Vault
We run terraform plan, which compares the two and determines whether it needs to make any changes. It saves those to a planfile, so we can guarantee that Terraform won't try anything unexpected later.
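Mechanically, that guarantee comes from Terraform's planfile workflow, something like:

```
terraform plan -out=vault.tfplan   # compute and save the set of changes

# ...review / approval happens in between...

terraform apply vault.tfplan      # the later apply stage executes exactly that saved plan
```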
At this point, if we’re validating a pull request, we finish and mark the commit as successful.
At this point, the entire Jenkins job has been running with read-only capabilities on the resources it’s been looking at.
We’re comfortable allowing it to do this without human supervision, as we don’t really consider anything it’s reading from Vault to be secret, and the Terraform state can be regenerated from nothing with the import stage.
When a plan is ready to apply, the Slack notification provides the Vault CLI command we need to generate the secret-id that grants the job write access, so we don't need to remember it.
It's prefixed with pscli, our Pretty Snazzy Command Line Interface, a tool my team looks after. It does many things, but all you need to know is that it makes sure everyone is using the same versions of the Vault CLI and Terraform, and it handles all the Vault auth automatically.
This is the part where Terraform actually goes ahead and makes the changes to Vault it said it would
At this point, all that remains is to commit any generated terraform code to the repo, and merge our release branch to master.
We don’t really need to do this, but it’s sometimes useful to see the raw terraform code that the job came up with.
=== 18m / - 17m ===
So we had a minimal viable product, something which solved part of the problem for us.
We started making incremental improvements over time.
I’m going to go over each new resource we added, because there’s something interesting for each of them
LDAP groups, i.e. what policies should specific groups of human users have access to. We have about 250 of these in Vault at the moment, but there were only 90 when we added this in July 2018.
When we first added LDAP groups in our pipeline, there was no dedicated resource for them in the Terraform provider, so we had to improvise.
Fortunately, there was a resource called vault_generic_secret, which allows you to read and write to arbitrary Vault paths. This is very useful, but if you’re not careful with it you could end up revealing secrets. So treat it with care.
But in our case, we do not consider LDAP groups and their policy mappings to be secrets, so we’re not too worried.
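As a sketch, the improvised version our generate step might have emitted looked something like this (group and policy names invented):

```
cat >> generated-ldap-groups.tf <<'EOF'
resource "vault_generic_secret" "ldap_group_platform" {
  path = "auth/ldap/groups/platform-team"

  data_json = <<EOT
{
  "policies": "platform-team-base,platform-team-secrets"
}
EOT
}
EOF
```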
Later on, there was a dedicated set of resources for the LDAP auth backend, so we switched over to that, completely invisibly to our users.
Recently, we’ve had a restructure of all our Active Directory groups, and now only certain types of groups are allowed to grant permissions within systems like Vault. So we added a check in the script to make sure nobody accidentally added the wrong kind of group (as happened a couple of times).
AppRoles, another common authentication mechanism for Vault. We had 160 ish when we added these to the pipeline, about a year ago in September 2018. That’s since doubled.
The majority of these (about 2/3rds) are used to grant Jenkins jobs access to Vault, but the IP addresses used to reference these varied. So we let our users define Terraform variables, which they could then use when requesting AppRoles via this pipeline. This meant that whenever the team that looked after Jenkins added any new agents, it’s just one file that needs updating, and all the AppRoles get updated.
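As a sketch with invented names and addresses, the shared variable plus a generated AppRole might look like:

```
# One shared variable for the Jenkins agent CIDRs...
cat > variables.tf <<'EOF'
variable "jenkins_agent_cidrs" {
  type    = list(string)
  default = ["10.0.1.0/24", "10.0.2.0/24"]
}
EOF

# ...which each generated AppRole references, so adding a Jenkins agent
# means updating one file rather than every AppRole.
cat >> generated-approles.tf <<'EOF'
resource "vault_approle_auth_backend_role" "my_jenkins_job" {
  backend               = "approle"
  role_name             = "my-jenkins-job"
  token_policies        = ["my-jenkins-job-policy"]
  secret_id_bound_cidrs = var.jenkins_agent_cidrs
}
EOF
```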
The team managing our Kubernetes clusters wanted to use k8s as an authentication mechanism for Vault.
That seemed like a good idea, but as we didn’t manage the clusters, we didn’t know how it worked.
Fortunately, we’d developed this repo in such a way that meant adding additional resources was just a case of writing an Import and a Generate script, and adding them in to the relevant Makefile phases.
Vault allows you to use AWS as an authentication mechanism, which a few teams were asking us to enable.
We have over 100 AWS accounts across the company, with access to those accounts managed by Vault.
That’s using one set of AWS credentials per account.
But for reasons I’m not going to go into here, there's something we need to configure for every AWS account in Vault if users want resources in those accounts to access Vault.
Fortunately, it’s the same thing for every account, so our pipeline now runs an AWS CLI command to list all the accounts in our organization and automatically generates a Terraform resource for that relevant bit of config. And this just happens automatically whenever we create a new AWS account, so we don’t even have to think about it.
This is another case where we had to get creative, because there isn’t a dedicated resource for these within the Terraform provider.
That's similar to the situation with LDAP groups when we first added those, but we can't re-use that solution here.
We can’t use generic_secret, because there are certain keys [click] which we either don’t know, or which change frequently, so using generic secret would result in Terraform getting stuck in a loop.
Fortunately, there’s another resource, generic_endpoint, which is a bit more flexible.
There’s a parameter which lets you specify which keys you care about, and as long as those remain unchanged within Vault, Terraform is happy.
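Putting the two together, a rough sketch of the per-account generation. The talk deliberately doesn't name the actual config, so the path and payload here are placeholders, and ignore_absent_fields is the provider argument as I understand it:

```
for account_id in $(aws organizations list-accounts --query 'Accounts[].Id' --output text); do
  cat >> generated-aws-accounts.tf <<EOF
resource "vault_generic_endpoint" "aws_${account_id}" {
  path      = "auth/aws/config/placeholder/${account_id}"    # placeholder path
  data_json = jsonencode({ placeholder = "value" })          # placeholder body

  # Only the keys in data_json are compared; keys Vault adds or rotates
  # on its own are ignored, so Terraform doesn't get stuck in a loop.
  ignore_absent_fields = true
}
EOF
done
```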
How has this improved things for us?
Firstly, time. Individual changes take less of our time, which means we can handle more requests.
We've got increased visibility of what's going on in Vault, as it happens. We've also made it easier for people to self-serve and debug their own access without needing to ask us.
It’s now much easier to see the who, what, when, how and why of changes over time, allowing us to more easily audit historical access.
It’s now possible to search through the code and find common patterns, and potentially identify issues before they become a problem.
And as a result of our automation, we’ve been able to reduce our own default permissions within Vault.
So, some numbers, for those interested
Slack requests / MonkeyBot tickets…
Since we started tracking these in JIRA, in January 2018, we’ve had over 2000 requests relating to Vault alone.
Over 1000 of these include at least one Pull Request, for a total of over 1300 PRs
The Jenkins job has run over 25,000 times since May 2018, running at a rate of about 750 every two weeks at the moment, which equates to about 40 unscheduled builds a day.
i.e. builds triggered by pull requests being raised or merged.
=== 26m / - 9m ===
So what does the future look like for us?
While I can’t tell you for certain, as priorities are constantly shifting, I do have a few ideas for things which we could do.
First thing, obviously: there are several more resources we use in Vault which aren't terraformed yet.
PKI mounts and roles, we have quite a few of these.
Sentinel policies, we're not using these much yet, but as and when our users want to use them in anger, they'd be a natural fit in our pipeline.
Namespaces, a neat Enterprise feature we’re going to be making use of very soon, will need quite a few resources set up for each of these.
We have over 100 AWS accounts, all managed through Vault, and all are configured very similarly.
So with just the name of the account, we could theoretically generate the relevant policies and LDAP group mappings.
I'm calling this The Future, but just in the past few weeks, the team managing our Kubernetes clusters have started auto-generating Pull Requests for their users.
We could also do some service discovery for CIDR ranges, e.g. in the case of Jenkins agents, so those don't need to be manually updated whenever IP ranges change.
Though anywhere that rapidly scales, or where IP addresses are dynamic, realistically you’d want to use a different auth method altogether.
We’re pretty happy with the tradeoff we’ve made between security and convenience, but if we ever needed to, we could add some additional safeguards.
Two-factor auth to grant the Jenkins job write access, for example. That's something we've discussed, but decided wasn't worth the tradeoff yet.
Control Groups are an Enterprise feature which looks super exciting, but which we’ve not yet found a use for.
(It’s on my personal backlog of features to play with)
If we ever decided we needed more than one human to approve the Jenkins job, or if we wanted to remove the necessity for a human to paste a secret-id into the job, we could make use of this.
Our pipeline doesn’t do a great deal of validation at the moment.
It’s mostly just checking if the syntax is correct, and a few resource-specific tests.
We could do more, but in our case we've found that the sort of issues we could write checks for don't actually happen often enough to be worth our time.
Both in terms of how long it would take to implement, and how much time it would add to the build.
This is also the sort of thing where Terraform Enterprise Sentinel Policies would come in handy.
There are quite a few inefficiencies in the pipeline.
It’s not really a problem right now, but as we grow, it’ll take longer.
I have some ideas to make it faster which I'm going to look into.
Once we have Vault namespaces in place, admins of those namespaces will be in a similar position to where we were when we first started using Vault.
So I have ideas for how we could make our Jenkins job generic enough that it could run against any namespace
=== 32m / - 3m ===
So, should you go away and create a Terraform pipeline that looks like ours?
Probably not. Ours was the result of initial experimentation and incremental improvements, and the way we’ve chosen to implement things works well for us, but may not work well for you. If I was writing it from scratch now, there are things I'd do differently.
But hopefully I’ve given you some insights into how we tackled the problems we had, and the inspiration to try it yourselves.
Thank you all for listening!
If you wanna find me and ask any questions, or if you want some stickers, my Twitter's on screen. DMs are open.
I’m also on the HashiConf slack, and should be easy to find by name.