Chef InSpec can be used to test for system security and compliance by creating profiles of InSpec tests. Profiles allow complex compliance requirements to be tested across different teams and environments. The document demonstrates running the open source linux-baseline profile against a CentOS system using InSpec, remediating any failures using the corresponding Chef cookbook, and then wrapping the linux-baseline profile in a custom profile to skip a specific test.
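The wrapper-profile step described above can be sketched roughly as follows. The control ID being skipped is illustrative, and the dependency URL is the usual location of the dev-sec linux-baseline profile; check the IDs reported by your own scan before skipping anything.

```ruby
# The wrapper profile's inspec.yml declares the dependency, e.g.:
#   depends:
#     - name: linux-baseline
#       url: https://github.com/dev-sec/linux-baseline/archive/master.tar.gz

# controls/wrapper.rb -- pull in every control from linux-baseline,
# then skip one control by ID (the ID shown here is illustrative)
include_controls 'linux-baseline' do
  skip_control 'os-05'
end
```

Running `inspec exec` on the wrapper then reports all linux-baseline controls except the skipped one, without modifying the upstream profile.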
Using Chef InSpec for Infrastructure Security - Mandi Walls
This document provides an overview of Chef InSpec and how it can be used for infrastructure security assurance. Chef InSpec allows users to create tests for security and compliance related to infrastructure and then run those tests on systems locally or remotely. The document demonstrates how to use Chef InSpec to check for compliance with a security baseline, remediate any issues found using Chef infrastructure automation, and then re-check compliance.
Morgan Roman presented on how to standardize the use of secure patterns in frameworks to prevent common vulnerabilities. The approach is to first identify unsafe patterns, make the safe pattern the default, train developers, and use tools to enforce the safe pattern. Examples given were preventing regex denial of service by wrapping regex in a timeout class, preventing XML external entity injection in Java by making a safe XML parser factory the default, and preventing open redirects by using a safe redirect method. The goal is to make security the easiest option for developers.
Automating Compliance with InSpec - Chef Singapore Meetup - Matt Ray
July 24, 2017 slides and demo for Automating Compliance with InSpec. The associated GitHub repository is here: https://github.com/mattray/inspec-workshop
Adding Security and Compliance to Your Workflow with InSpec - Mandi Walls
This document provides an overview of InSpec, which is a tool for creating automated tests for compliance and security. InSpec allows users to write tests in a human-readable language to check systems for vulnerabilities or configuration issues. It can test infrastructure locally or remotely. Profiles can be created to package and share test suites. InSpec integrates with tools like Test Kitchen and can be included in development workflows to continuously test systems.
InSpec Workflow for DevOpsDays Riga 2017 - Mandi Walls
This document discusses how to build security into your workflow using InSpec. InSpec is a human-readable specification language for testing security and compliance across infrastructure. It can be used to test configurations and identify issues. The document provides an example of using an InSpec profile to test that SSH is configured securely on a system before and after applying a security hardening cookbook. It emphasizes how InSpec helps automate security testing and ensures compliance is maintained over time as systems change.
DevOpsDays Riga 2017: Mandi Walls - Building security into your workflow with ... - DevOpsDays Riga
This document discusses using InSpec to build security checks into development workflows. It provides an example of using InSpec to check that an SSH configuration is using version 2. InSpec makes it possible to write tests against system configurations and services in a human-readable format. Tests can be packaged into shareable profiles and run during development and deployment to automate compliance checking.
InSpec is a tool that allows users to write security and compliance tests as human-readable code (or "profiles") that can be run on systems to check configurations and identify issues. Profiles can test for things like required SSH settings, file permissions, and package/patch levels. Profiles are run using the InSpec command line tool and can test local systems or remote targets like Linux servers. When profiles detect failures, they return non-zero exit codes to fail automation jobs. This allows InSpec to integrate with configuration management and infrastructure as code tools for continuous compliance monitoring.
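The exit-code behavior described above is what lets a CI job gate on compliance. A minimal shell sketch, with an illustrative host, user, and key path:

```shell
# Run a profile against a remote host; inspec exits non-zero when any
# control fails, so the pipeline aborts before promoting a bad node.
# (target host, user, and key path are illustrative)
inspec exec linux-baseline -t ssh://ec2-user@203.0.113.10 -i ~/.ssh/ci_key \
  || { echo "compliance scan failed"; exit 1; }
```

In most CI systems the explicit `|| { ...; exit 1; }` is redundant, since a non-zero exit status fails the step on its own; it is shown here to make the gating explicit.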
InSpec: Turn your compliance, security, and other policy requirements into au... - Kangaroot
This document discusses InSpec, an open-source testing framework for infrastructure and compliance. It can be used to test security configurations and compliance across operating systems, platforms, and cloud providers. InSpec allows users to write tests in a human-readable language and execute them locally or remotely. Tests can be packaged into reusable profiles that ensure configurations meet security and compliance requirements throughout the development lifecycle.
This document discusses InSpec, an open-source testing framework for infrastructure and compliance. It can be used to test configurations and ensure security best practices are followed. InSpec uses human-readable tests and comes with built-in resources to test common infrastructure components. It can test locally or remotely on Linux, Windows, and cloud platforms. Profiles allow packaging tests for reuse across environments. InSpec integrates with DevOps tools like Chef and Test Kitchen to enable compliance testing in development workflows.
OSDC 2017 | Building Security Into Your Workflow with InSpec by Mandi Walls - NETWAYS
InSpec is a tool for defining infrastructure and security compliance tests in human-readable code. It can test systems locally or remotely. Profiles allow packaging and sharing InSpec test code. When used with configuration management tools like Chef, InSpec helps enforce security compliance and reduces risk of failures or vulnerabilities over time.
Adding Security to Your Workflow with InSpec (May 2017) - Mandi Walls
An introduction to InSpec and its motivations for teams looking for a security and compliance tool for their organizations. May 2017 edition. Atmosphere.pl Krakow and Netways OSDC Berlin.
Compliance Automation with InSpec - Chef NYC Meetup - April 2017 - adamleff
Presented at the Chef NYC meetup on April 20, 2017, this presentation reviews how to automate compliance scanning and reporting with InSpec by Chef and wrapped up with a hands-on workshop.
This is an approximately 90-minute InSpec workshop covering basic InSpec resources and profiles and applying them to Linux Hardening. Delivered at DevSecCon 2017 in London, October 20, 2017
DevOpsDays Austin 2016 talk. Compliance and security are the next steps after Infrastructure as Code and Test-Driven Infrastructure in expanding your DevOps workflow. Chef's open-source InSpec and audit cookbooks provide an accessible pattern for building compliance into your continuous delivery pipelines.
Drupal Continuous Integration with Jenkins - The Basics - John Smith
Please check out our new SlideShow of setting up and configuring a Jenkins Continuous Integration server for use within a Drupal development environment. We walk you through the steps of installing Ubuntu 10.04 LTS, Jenkins, Drush and several other PHP coding tools and Drupal Modules to help check your code against current Drupal standards. Then we walk you through creating a git post-receive script, and Jenkins job to pull it all together.
The document discusses automated infrastructure testing. It explains that infrastructure testing involves automating tests of application code, infrastructure as code, and deployed infrastructure through unit, functional, integration, and monitoring tests. The document recommends collaborating with operations and building thorough monitoring and analytics; automated tests help ensure battle-tested code and healthy infrastructure. Cloud infrastructure also requires additional testing across providers. Lessons include starting with the most time-consuming tasks and understanding domain concepts.
Building Security into Your Workflow with InSpec - Mandi Walls
InSpec is a tool that allows users to write security and compliance tests in human-readable code. The tests can be run locally or against remote systems. InSpec includes built-in resources that make it easy to test common files, services, and configurations. Profiles allow users to package and share sets of InSpec tests. Custom resources can also be created to test proprietary configurations. Over time, a comprehensive set of InSpec tests can be built and run regularly to check for configuration drift.
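The custom-resource mechanism mentioned above lets a team test file formats InSpec has no built-in resource for. A minimal sketch, in which the resource name, config path, file format, and setting names are all illustrative:

```ruby
# libraries/app_config.rb -- a sketch of a custom InSpec resource for a
# hypothetical key=value config file (all names here are illustrative)
class AppConfig < Inspec.resource(1)
  name 'app_config'
  desc 'Parses a proprietary key=value application config file'

  def initialize(path = '/etc/myapp/app.conf')
    # Read the file through the transport so it works locally and remotely
    @params = inspec.file(path).content.to_s.lines
                    .map { |line| line.split('=', 2).map(&:strip) }
                    .select { |pair| pair.length == 2 }
                    .to_h
  end

  # Expose each key as a matcher-friendly property
  def method_missing(key)
    @params[key.to_s]
  end
end
```

A control in the same profile could then write `describe app_config do; its('max_connections') { should cmp 100 }; end`, keeping the parsing logic out of the tests themselves.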
Testing for infra code using test-kitchen, docker, chef - kamalikamj
This document discusses using Test-Kitchen, Docker, and Chef-Zero to test infrastructure code. It begins with an introduction of the speaker and their background in infrastructure automation. The topics to be covered are then outlined: why test-driven development is important for infrastructure code; what Test-Kitchen is; how to provision instances on demand using Test-Kitchen and Docker; how to configure those instances using Chef-Zero; and how to test infrastructure code with Test-Kitchen. Common problems with infrastructure and proposed solutions using infrastructure as code are also briefly discussed.
DevOpsDays Singapore - Continuous Auditing with Compliance as Code - Matt Ray
This document discusses using Chef Automate to enable continuous compliance through a three-step process of detecting issues, correcting problems, and automating compliance. It notes that many organizations currently assess compliance inconsistently or only after deploying code to production. Chef Automate allows detecting and correcting issues across infrastructure in a single platform, using the same language for both DevOps and InfoSec teams. This enables deploying applications with confidence while maintaining security and compliance.
The document discusses remediating compliance issues by writing a remediation recipe on the target node to update the SSH version. It describes testing the recipe locally using Kitchen, verifying compliance with InSpec from the CLI, converging the recipe, and rescanning the node to ensure compliance. Key steps include generating a cookbook and server recipe for SSH, creating an SSH config template, updating the template, deploying locally, and re-running the compliance scan to show the issue is now resolved.
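The remediation recipe described above typically boils down to a Chef template resource that renders a hardened `sshd_config` and restarts the service. A sketch, with an illustrative cookbook name and template source:

```ruby
# cookbooks/ssh-hardening-demo/recipes/server.rb (names are illustrative)
# Render a hardened sshd_config from an ERB template in the cookbook;
# any change to the rendered file triggers a (delayed) sshd restart.
template '/etc/ssh/sshd_config' do
  source 'sshd_config.erb'
  owner  'root'
  group  'root'
  mode   '0600'
  notifies :restart, 'service[sshd]'
end

service 'sshd' do
  action [:enable, :start]
end
```

With Test Kitchen, `kitchen converge` applies this recipe to a local instance and `kitchen verify` re-runs the InSpec profile against it, closing the detect-correct-rescan loop locally before anything ships.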
Introduction to Infrastructure as Code & Automation / Introduction to Chef - Nathen Harvey
The document provides an introduction to infrastructure as code using Chef. It begins with an introduction by Nathen Harvey and outlines the sys admin journey from manually managing servers to using automation and policy-driven configuration management. It then discusses how infrastructure as code with Chef allows treating infrastructure like code by programmatically provisioning and configuring components. The document demonstrates configuring resources like packages, services, files and more using Chef.
Jumpstart your education on learning Chef InSpec to turn your DevOps into DevSecOps, by automating your integration testing and compliance/security scanning.
Jenkins and Chef: Infrastructure CI and Automated Deployment - Dan Stine
This presentation discusses two key components of our deployment pipeline: Continuous integration of Chef code and automated deployment of Java applications. CI jobs for Chef code run static analysis and then provision, configure and test EC2 instances. Release jobs publish new cookbook versions to the Chef server. Deployment jobs identify target EC2 and VMware nodes and orchestrate Chef client runs. The flexibility of Jenkins is essential to our overall delivery architecture.
OSDC 2017 - Mandi Walls - Building security into your workflow with InSpec - NETWAYS
InSpec is an open source testing framework for infrastructure with a human- and machine-readable language for specifying compliance, security, and policy requirements. Using a combination of command-line and remote-execution tools, InSpec can help you keep your infrastructure aligned with security and compliance guidelines on an ongoing basis, rather than waiting for and then remediating from arduous annual audits. InSpec’s flexibility makes it a key tool choice for incorporating security into a complete continuous delivery workflow, reducing the risk of new features and releases breaking established host-based security guidelines.
InSpec is an open-source testing framework that allows users to write security and compliance tests. Tests can be written to check configurations, files, and other infrastructure attributes. InSpec includes built-in resources that make it easy to test common services and configurations. Tests are written in a human-readable format and can be executed locally or remotely on servers. InSpec integrates with tools like Chef and Test Kitchen to allow testing as part of development and deployment workflows. The document provides examples of using InSpec to test SSH configuration and other attributes based on security requirements.
DevSecCon London 2017: InSpec workshop by Mandi Walls - DevSecCon
This document discusses using InSpec to build security into workflows. InSpec is a human-readable specification language for testing security and compliance requirements. It includes resources for common services, files, and configurations that can be used to verify requirements. InSpec profiles allow packaging and sharing test sets and can be run locally or against remote targets. The document demonstrates writing an InSpec test, running it against targets, and integrating InSpec tests with configuration management using Chef to remediate failures.
InSpec Workshop at Velocity London 2018 - Mandi Walls
InSpec is an open-source testing framework that allows users to test and enforce security configurations and compliance for infrastructure code. It uses human-readable tests and resources to check configurations and generate reports. Users can write InSpec tests and profiles to test systems locally or remotely, address security issues, and integrate testing into development workflows using tools like Test Kitchen.
The document discusses using InSpec to build security into workflows by creating tests to check for compliance. InSpec allows writing tests in a human-readable format to test security configurations and ensure compliance with policies. Tests can be run locally or remotely on servers to check configurations and are integrated with DevOps workflows through profiles and controls.
This document describes eBay's use of Fluo for continuous integration and deployment using OpenStack. Fluo provides a single interface for configuring, building, testing, and deploying code changes. It provisions instances on OpenStack to run tasks defined in a configuration file like running tests, building packages, and deploying code. Fluo replicates code, packages, and configuration management code across regions and datacenters. It supports common workflows from code review through integration testing, releases, and periodic jobs. Fluo aims to provide a fully automated and scalable continuous delivery system to deploy code changes to eBay's global infrastructure on OpenStack.
Splunk forwarders were used to gain initial access to a network by exploiting their default credentials and REST API. This allowed deploying a malicious app that provided a shell. The shell was then used to pillage other systems by abusing credentials and data found in Chef scripts and GitHub repositories. Mitigations include changing default credentials, disabling the REST API on forwarders, improving logging and monitoring for unusual app deployments, using TLS for deployment server communications, and running Splunk in a less privileged manner.
How to improve the daily work of PHP developers? - AFUP_Limoges
Talk presented at the AFUP summer meetup in Limoges on June 19, 2018. Its goal is to present several tools that quickly improve day-to-day efficiency.
Thursday, June 12th 2014
Discussing strategies in Rails development for keeping multiple application environments as consistent as possible for the best development, testing, and deployment experience.
Chef Automate provides a full-stack collaboration platform to help organizations achieve DevOps success by managing infrastructure, containers, applications, and compliance through automation. It addresses barriers to DevOps adoption like disparate tooling and lack of skills/cultural adoption. New capabilities in Chef Automate and Compliance accelerate and de-risk adoption by providing automation, governance, and compliance as code.
The document discusses the author's approach to setting up a development environment for Django projects. It describes establishing a project layout with separate folders for source code, virtual environments, requirements files, and more. It also covers tools and practices for tasks like dependency management, testing, debugging, deployment, and overall software development philosophy.
Today’s cutting edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share best practices (including ones followed internally at Amazon) and how you can bring them to your company by using open source and AWS services.
Speaker: Raghuraman Balachandran, Solutions Architect, Amazon India
This document provides a quick introduction to InSpec, which is a human-readable specification language for defining security and compliance tests. It can be used to create, share, and reuse test profiles to verify characteristics of systems and applications. The document demonstrates writing InSpec tests and profiles to check configuration settings like SSH protocol version. InSpec integrates with tools like Test Kitchen and can test any target, including local systems, remote hosts over SSH/WinRM, Docker containers, and cloud resources. Profiles allow packaging and sharing sets of InSpec tests.
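The target flexibility described above is driven entirely by the `-t` (target) option of `inspec exec`; the same profile runs unchanged against any backend. Host names, user names, and IDs below are illustrative:

```shell
# Same profile, different targets (names and IDs are illustrative)
inspec exec my-profile                                        # local system
inspec exec my-profile -t ssh://admin@203.0.113.10 -i key.pem # remote host over SSH
inspec exec my-profile -t winrm://Admin@203.0.113.11          # Windows host over WinRM
inspec exec my-profile -t docker://3cc8837bb6a8               # running Docker container
```

Because the tests describe the desired state rather than how to reach the machine, the transport is purely a runtime concern.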
This document discusses deploying software at scale through automation. It advocates treating infrastructure as code and using version control, continuous integration, and packaging tools. The key steps are to automate deployments, make them reproducible, and deploy changes frequently and consistently through a pipeline that checks code, runs tests, builds packages, and deploys to testing and production environments. This allows deploying changes safely and quickly while improving collaboration between developers and operations teams.
Melbourne Chef Meetup: Automating Azure Compliance with InSpec - Matt Ray
June 26, 2017 presentation. With the move to infrastructure as code and continuous integration/continuous delivery pipelines, it looked like releases would become more frequent and less problematic. Then the auditors showed up and made everyone stop what they were doing. How could this have been prevented? What if the audits were part of the process instead of a roadblock? What sort of visibility do we have into the state of our Azure infrastructure compliance? This talk will provide an overview of Chef's open-source InSpec project (https://inspec.io) and how you can build "Compliance as Code" into your Azure-based infrastructure.
DevSec Delight with Compliance as Code - Matt Ray - AgileNZ 2017 - AgileNZ Conference
For too long, audits and security reviews have been seen as resistant to the frequent release of software. Auditors require access to static systems and environments, which would seem to make continuous delivery impossible. Too frequently audits are a fire drill sampling of the current state and temporary fixes are put in place to appease the compliance audit without being integrated into future releases.
About Matt Ray:
Matt Ray is the Manager and Solutions Architect for Asia Pacific and Japan for Chef. He has worked in large enterprise software companies and founded his own startups in a wide variety of industries including banking, retail and government.
He has been active in open source communities for over two decades and has spoken at, and helped organise, many conferences and Meetups. He currently resides in Sydney, Australia after relocating from Austin, Texas. He podcasts at SoftwareDefinedTalk.com, blogs at LeastResistance.net and is @mattray on Twitter, IRC, GitHub and too many Slacks.
This document discusses techniques for automating software deployment processes. It advocates treating infrastructure configurations as code that is version controlled. It introduces Configizer, a tool that helps manage configurations. It argues that manual deployment processes are antipatterns that lead to long cycles and broken productions. The goal is to deploy software often through incremental automated processes like database migrations in order to get faster feedback.
Continuous Integration & Development with GitlabAyush Sharma
GitLab CI is a part of GitLab, a web application with an API that stores its state in a database. It manages projects/builds and provides a nice user interface, besides all the features of GitLab. GitLab Runner is an application which processes builds.
The document discusses continuous feature development. It defines a feature as a set of expected functional behaviors from a client. Continuous feature development involves incrementally building these expected behaviors. This approach is needed because clients' expectations, business needs, user perceptions, and competitive advantages are continually changing. Managing continuous feature development presents challenges like integrating new features, maintaining stability, seamless integration, and managing trust. The document recommends practices like acceptance test-driven development, test-driven development, behavior-driven development, continuous integration, coding in feature branches, code reviews, maintaining a production branch, using staging servers, and continuous integration to help address these challenges.
Similar to Prescriptive Security with InSpec - All Things Open 2019 (20)
DOD Raleigh Gamedays with Chaos Engineering.pdfMandi Walls
My talk from DevOpsDays Raleigh 2022: Plan for Unplanned Work; Game Days with Chaos Engineering.
How do you plan for unplanned incidents? You practice with Chaos Engineering. Strong incident response doesn"t just happen, you have to build the skills and train your team. Practicing for major incidents gives your team insight into how your applications will behave when something goes wrong as well as how the team will interact to solve problems. Combining your Incident Response practices with Chaos Engineering roots your response practice in real-world scenarios, helping your team build confidence.
Addo reducing trauma in organizations with SLOs and chaos engineeringMandi Walls
This document discusses establishing service level objectives (SLOs) and indicators (SLIs) to quantify user experience and prioritize work. It recommends using chaos engineering to validate SLOs and dependencies by injecting failures. Key points:
- SLOs quantify goals for SLIs to measure user experience quality like load times and errors
- Error budgets set thresholds for acceptable failures to meet SLOs
- Chaos engineering tests new features and validates SLOs and dependencies by inducing failures
- Incidents provide opportunities to revisit SLOs and prioritize work to improve experience
PagerDuty: Best Practices for On Call TeamsMandi Walls
The document outlines best practices for establishing effective on-call teams including formalizing on-call schedules, ensuring team members have the proper equipment, access, and training. It emphasizes the importance of building an empathetic on-call culture through practices like shadow rotations, avoiding burnout, and establishing clear responsibilities and expectations for on-call staff.
Habitat is an open source project that provides tools for building, deploying, and managing applications across platforms. It allows developers to build applications once and run them anywhere by ignoring the underlying platform and packaging applications with all of their dependencies. Habitat provides tools for building applications locally, managing packages in a private registry, and running applications as managed services that can be updated in a zero-downtime way.
This document summarizes Habitat, a tool for building, deploying, and managing applications. Habitat aims to reduce complexity by providing repeatable builds, configuration management, and service discovery. It allows building applications from source or using pre-built binaries. The Habitat Builder service can build applications and store artifacts, including integrating with GitHub and Docker Hub. Habitat packages applications in a platform-agnostic way and allows updating configurations at runtime. Users are encouraged to try out Habitat on Slack, with online tutorials, or by contributing to projects on GitHub.
Habitat Workshop at Velocity London 2017Mandi Walls
Mandi Walls is the Technical Community Manager for EMEA at Chef and the Habitat Community lead is Ian Henry. The document discusses how modern applications are trending toward immutability, platform agnosticism, complexity reduction, and scalability. It provides an overview of ways to work with Habitat, including using artifacts that run themselves via the supervisor, exporting to Docker, and building plans from scratch or using scaffolding.
Mandi Walls introduces Habitat, a tool for building and running applications. Habitat aims to reduce complexity by making applications platform agnostic and immutable. It uses Habitat Studio to build applications in a clean room environment with explicit dependencies. Applications are packaged into harts - compressed packages with signatures - that can run on any infrastructure. The runtime manages services, configuration, updates, and more to help modern applications scale. Users are encouraged to try Habitat on Slack, online tutorials, and at Chef Summits in October.
This document provides an overview of Habitat, a tool for building, deploying, and managing applications. It discusses how Habitat aims to reduce complexity by providing immutable, platform-agnostic packages and managing dependencies and configurations. A demo of building and running a sample Ruby application in Habitat is also shown. Key features highlighted include Habitat plans for defining builds, hooks for controlling application startup, and configuration management at runtime. The document encourages attendees to try out Habitat and get involved in the community.
Configuration Management is Old and BoringMandi Walls
This document discusses the history and evolution of configuration management (CM) tools over time as technology and business needs have changed. It outlines several eras from mainframe computers requiring manual configuration to today's cloud, DevOps, and container-based environments that require automated CM. It argues that while CM tools have existed for decades, they continue to be important for modern practices like continuous delivery, infrastructure as code, and treating environments as code. CM helps speed up processes like experimentation, deployment, and failure recovery in a way that reduces waste and adds business value.
Habitat is a tool for building and running distributed applications. It aims to standardize packaging and running applications across different environments. With Habitat, applications are packaged into "harts" which contain all their dependencies and can be run on any system. Habitat handles configuration, service discovery, and updates to provide a uniform way to deploy applications. Plans are used to define how to build harts in a reproducible way. The Habitat runtime then manages running applications as services.
This document provides key lessons learned from cloud migrations. It discusses technical lessons like using automation, revision control, and automated testing in cloud migrations. It also covers cultural lessons such as moving from a "gate keeping" to open access model, improving collaboration, and adopting Lean principles. Additionally, it addresses business considerations like managing risk, failure, skills, and contractors when moving to the cloud. The overall message is that cloud migrations require changes to both tools and culture to fully realize the benefits.
Lessons Learned from Continuous DeliveryMandi Walls
This document discusses lessons learned from continuous delivery practices. Key points include: automating infrastructure provisioning and application deployment; treating infrastructure as code; adopting a dynamic "cattle not pets" approach to infrastructure; implementing revision control and automated testing for infrastructure and applications; overcoming cultural challenges through collaboration, Lean practices, and a blameless culture; managing risk through experimentation; and shifting activities earlier in the process. Containerization and tools that facilitate team collaboration are also trends in DevOps.
Role of Pipelines in Continuous DeliveryMandi Walls
This document discusses pipelines in continuous delivery environments. It defines a pipeline as the workflow teams use to get changes created and published through various stages like development, testing, staging, etc. Pipelines are important because they ensure all changes pass the same requirements before being promoted. The document recommends that pipelines be configurable, portable between projects, and have simple entry points. It also suggests including security reviews and human approval gates. Finally, it provides an example of Chef's automated pipeline that incorporates peer review and configurable testing steps.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
8. Chef InSpec
• Human-readable language for tests related to security and compliance
• Create, share, and reuse complex profiles
• Extensible language - build your own rules
• Command-line tools plug into your existing workflow, build, deploy
• Test early, test often!
• InSpec is an open source project with a commercialized distribution
The community distribution is part of the CINC project
https://gitlab.com/cinc-project/auditor
9. Create and Consume
• Complex compliance requirements can slow you down
• Different information and expertise live in different teams, but need to be used by many
• Security and compliance personnel can work with operations and development to create comprehensive profiles
10. Chef InSpec is Code
• Check it into repos, publish as artifacts
• Include InSpec steps before code checkin
• Include InSpec steps in integration and pre-production
• Continue InSpec checks in production to guard against new threats
11. Network Services
• If your security team sends you a directive:
Ensure that no legacy network services are installed on all versions of Linux, including inetd, xinetd, telnet, rsh, tftp, and ypserv.
12. How Do You Go About Checking and Fixing?
• Identify the package names on your systems
• Remove all packages
• Does someone fix your source images?
Rebuild?
Remediate at launch?
• Ensure it doesn't get re-installed by accident at some point in the future
13. Check for inetd and xinetd
control 'package-01' do
  impact 1.0
  title 'Do not run deprecated inetd or xinetd'
  desc 'rhel5-guide-i731.pdf, Chapter 3.2.1'
  describe package('inetd') do
    it { should_not be_installed }
  end
  describe package('xinetd') do
    it { should_not be_installed }
  end
end
14. Lifecycle – How Often Do You Check Security?
• Single big scan, report mailed out with a “due date”?
Considered done, not checked again
• Yearly or twice-yearly massive scans with remediation fire drills?
Common audit cycles, large projects around fixing found issues
• Part of the software development lifecycle?
"Shift left"
Regularly part of what is included in builds
15. Test Regularly with Chef InSpec
• Use InSpec before code checkin
No changes to allowed services
• Use InSpec during integration testing
Check transport protocols, port numbers, service configurations
• Use InSpec in production
Ensure no drift occurs in approved configurations over application lifetime
16. Chef InSpec Components
• Resources
• Resource Characteristics
• Profiles
• Command Line Interface
17. Resources
• Chef InSpec includes built-in resources for common services, system files, and configurations
• Built-in resources work across many Linux platforms.
There are also Windows-specific resources like registry_key
• A resource has characteristics that can be verified against your requirements, and matchers that work with those characteristics
20. Characteristic Tests
• it { should exist } – files, directories, groups that are present
• it { should be_installed } – packages that should be installed
• it { should be_enabled } – services that should be running
• its('max_log_file') { should cmp 6 } – rotate auditd logs
Check inside a config file for a specific setting
• its('exit_status') { should eq 0 } – run any arbitrary checks
Remediation scripts from upstream and OS vendors often come as shell scripts
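Put together, matchers like these form a control. A hypothetical sshd example (not part of the deck) might look like:

```ruby
# Hypothetical control combining service and config-file matchers
control 'ssh-01' do
  impact 0.8
  title 'sshd runs and denies root login'
  describe service('sshd') do
    it { should be_enabled }
    it { should be_running }
  end
  # sshd_config is a built-in InSpec resource for /etc/ssh/sshd_config
  describe sshd_config do
    its('PermitRootLogin') { should cmp 'no' }
  end
end
```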
21. Run Chef InSpec
• InSpec is a command-line tool
Installs on your workstation as a Ruby gem or as part of Chef Workstation
• Can be run locally, to test the machine it is executing on
• Or remotely
InSpec will log into the target and run the tests for you
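A remote run takes a target URI with the -t flag; for example (hostname and key file here are hypothetical):

```
inspec exec linux-baseline -t ssh://centos@my-target-host -i ~/.ssh/demo-key.pem
```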
22. Execute InSpec
$ inspec exec ./test.rb
Profile: tests from ./test.rb
Version: (not specified)
Target: local://
File /tmp
✔ should exist
✔ should be directory
✔ should be owned by "root"
✔ mode should cmp == "01777"
Test Summary: 4 successful, 0 failures, 0 skipped
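The output above corresponds to a small control file along these lines (a reconstruction of test.rb, which the slide does not show):

```ruby
# test.rb – assumed contents behind the output above
describe file('/tmp') do
  it { should exist }
  it { should be_directory }
  it { should be_owned_by 'root' }
  its('mode') { should cmp '01777' }
end
```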
24. Profiles
• Collections of InSpec tests
Group by team, by application, by platform
• Each profile can have multiple test files included
• Flexible!
Create your own profiles for specific software you use
Use included matcher libraries or write your own – they live in the profile
• https://dev-sec.io/ for samples
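Concretely, a profile is a directory with metadata and one or more control files; a minimal layout (hypothetical name) is:

```
my-profile/
├── inspec.yml       # profile metadata: name, version, dependencies
└── controls/
    └── example.rb   # any number of .rb control files live here
```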
25. Sample Profile: linux-baseline
control 'os-02' do
  impact 1.0
  title 'Check owner and permissions for /etc/shadow'
  desc 'Check periodically the owner and permissions for /etc/shadow'
  describe file('/etc/shadow') do
    it { should exist }
    it { should be_file }
    it { should be_owned_by 'root' }
    its('group') { should eq shadow_group }
    it { should_not be_executable }
    it { should be_writable.by('owner') }
    ...
26. Demo
• Basic off-the-shelf CentOS system on AWS
• Install Chef Workstation and git
• Download and run the linux-baseline profile
• Remediate with the corresponding Chef cookbook from https://dev-sec.io
27. Resources
• https://inspec.io
• https://blog.chef.io/category/inspec
• https://learn.chef.io/
• http://www.anniehedgie.com/inspec-basics-1
• Whitepaper featuring Danske Bank:
https://www.chef.io/customers/danske-bank/
• Community Distros https://gitlab.com/cinc-project
• Join us on Slack: http://community-slack.chef.io/
30. Select CentOS 7 from the Marketplace
• Use a small instance - .micro should be fine for this
• Tag X-Contact with your name and X-Customer with something like "InSpec Talk Delete after 7/15/19" or similar
31. Security Group
• I use the default all-open security group, as there's nothing running but ssh on this machine. If you have another security group that is more locked down, that's fine, too.
33. Demo stage 1 – Detect with the linux-baseline profile
git clone https://github.com/dev-sec/linux-baseline.git
sudo inspec exec linux-baseline/
<<accept the product license here>>
You'll have some number of errors; the default installs will always have too
many things installed. This version:
Profile Summary: 26 successful controls, 27 control
failures, 1 control skipped
Test Summary: 80 successful, 45 failures, 1 skipped
34. Demo Stage 2 – Correct with Chef Infrastructure
Download the Chef cookbook that matches the linux-baseline profile via a Policyfile workflow
chef generate policyfile fix-security
<<accept the license>>
edit fix-security.rb to set run_list 'os-hardening::default'
chef install fix-security.rb
chef export fix-security.rb harden-linux
cd harden-linux
sudo chef-client -z
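After the edit, fix-security.rb is a Policyfile along these lines (the default_source line is the generator's default, assumed here):

```ruby
# fix-security.rb (Policyfile)
name 'fix-security'
default_source :supermarket
run_list 'os-hardening::default'
```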
35. Correct with Chef, cont'd
...things happening...
Recipe: os-hardening::auditd
* yum_package[audit] action install (up to date)
Running handlers:
Running handlers complete
Chef Infra Client finished, 141/206 resources updated in 07 seconds
36. Demo Stage 3 – Re-check with InSpec
cd ..
sudo inspec exec linux-baseline
...
Profile Summary: 52 successful controls, 1 control failure, 1 control skipped
Test Summary: 124 successful, 1 failure, 1 skipped
There's almost always at least one failure. Depending on the time you have left, you can work through the next part, creating a wrapper profile and skipping this step, or, conversely, if your audience is already Chef-aware, adding an additional recipe to fix whatever it is.
37. The error in this example:
× package-08: Install auditd (1 failed)
✔ System Package audit should be installed
✔ Audit Daemon Config log_file should cmp == "/var/log/audit/audit.log"
✔ Audit Daemon Config log_format should cmp == "raw"
✔ Audit Daemon Config flush should match /^incremental|INCREMENTAL|incremental_async|INCREMENTAL_ASYNC$/
× Audit Daemon Config max_log_file_action should cmp == "keep_logs"
expected: "keep_logs"
got: "ROTATE"
(compared using `cmp` matcher)
38. Demo stage 4 – prepare for Automate with wrapper profile
• Create a wrapper profile:
inspec init profile my-hardening
• Edit my-hardening/inspec.yml
depends:
  - name: linux-baseline
    git: https://github.com/dev-sec/linux-baseline
• Remove the example
rm -f my-hardening/controls/example.rb
39. Stage 4
Create a new control file:
$ vi my-hardening/controls/skip-auditd.rb
include_controls 'linux-baseline' do
skip_control 'package-08'
end
40. Demo Stage 5 – run the wrapper profile
sudo inspec exec my-hardening
...
Profile Summary: 52 successful controls, 0 control failures, 1 control skipped
Test Summary: 113 successful, 0 failures, 1 skipped
41. Additional Note – --no-distinct-exit
When using InSpec in a build process that relies on non-zero return codes, any intentionally skipped control will generate a return code of 101:
$ sudo inspec exec my-hardening/
...
$ echo $?
101
If you want skips to generate 0 so the flow continues, use the flag --no-distinct-exit
$ sudo inspec exec my-hardening/ --no-distinct-exit
...
$ echo $?
0
42. Exit Codes For InSpec in Pipelines
https://github.com/inspec/inspec/issues/1825#issuecomment-382650911
0 – run okay + all passed
100 – run okay + failures
101 – run okay + skipped only
Additional discussion around whether to specify in the run output the reason for a skip is also happening. For example, skipping due to platform, skipping due to not-needed, skipping due to fix-coming.
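A pipeline script can branch on these distinct exit codes explicitly; here is a minimal sketch in plain Ruby (the helper name is hypothetical, the code values are the documented ones above):

```ruby
# Hypothetical pipeline helper: map InSpec's exit codes to outcomes,
# so a build can tolerate intentional skips but fail on real findings.
def inspec_outcome(exit_code)
  case exit_code
  when 0   then :passed       # run okay + all controls passed
  when 100 then :failed       # run okay + at least one control failed
  when 101 then :skips_only   # run okay + only skipped controls
  else          :runtime_error # InSpec itself failed to run
  end
end

puts inspec_outcome(101)  # prints "skips_only"
```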
Honda shut down an automobile production plant because of WannaCry. This story is particularly interesting in an InSpec context not just because a virus with remediation available from the upstream vendor was present, but because further down the story describes coordinating several teams – IT and plant automation, for example – who have different needs, risk profiles, and resources available to work on something like WannaCry. It still has to be dealt with; this is real money coming off the line, disrupted by security shortcomings.
Text of article:
In an example of just how persistent modern cyberthreats can be, automaker Honda Motors had to temporarily stop production at its Sayama plant in Japan this week after being hit by WannaCry, a malware threat the company thought it had mitigated just one month ago.
The nearly 48-hour shutdown impacted production of about 1,000 vehicles at the facility, which does engine production and assembly for a line of vehicles including the Odyssey minivan and the Accord.
A statement from Honda North America said the interruption at the Sayama Auto Plant was caused by the shutdown of several older production-line computers infected with the WannaCry virus.
Systems at multiple Honda plants in Asia, North America, Europe, and China were found similarly infected with WannaCry, according to a different Honda statement quoted by Reuters and other outlets.
WannaCry infected hundreds of thousands of computers worldwide last month using a Windows exploit dubbed EternalBlue that the US National Security Agency (NSA) originally developed for use against adversaries. Threat group Shadow Brokers publicly leaked the exploit earlier this year.
Honda has not said if the infection only impacted its industrial control system (ICS) network or its IT network as well, or both. Neither has the automaker so far explained why it decided to shut down operations only in Sayama and not at any of the other locations where WannaCry was reportedly spotted.
Honda first discovered the outbreak Sunday and began recovery work immediately. But it wasn't until Tuesday morning that the company resumed production at Sayama. The infection occurred despite Honda's implementation of new measures to mitigate WannaCry when news of the malware first broke. But Honda's efforts apparently were insufficient for several older computers installed at the Sayama Honda plant, some media outlets have quoted the company as saying.
The incident highlights how difficult it is for large organizations to secure every system on their network, especially against self-propagating malware such as WannaCry, says Paul Norris, senior systems engineer at Tripwire.
"Organizations will generally secure the systems they know about," he says. "But most will have assets that are not managed or secured and are old legacy systems that haven’t been decommissioned," and remain vulnerable, Norris says.
"It's harder for larger organizations to secure every asset within their environment, due to the size and complexity of corporate networks," he says.
The challenges are exacerbated in an industrial control system environment where IT and cybersecurity organizations often have little visibility into all the assets that might be in place.
In fact, up to 80% of all cyber assets in a plant can sometimes be invisible to cybersecurity personnel and often there is an incomplete inventory of IT-based assets as well, making them hard to protect, says David Zahn, general manager at ICS security vendor PAS. "If you can't see it, you can't protect it," he says.
It is possible also that Honda may have known about the underlying vulnerabilities to WannaCry in its plant floor environment but decided not to patch right away because it did not want to disrupt operations. "Risk mitigation within an industrial process facility moves at industry pace – not hacker speed," Zahn says.
Hopefully, incidents such as this will prompt organizations into answering basic cybersecurity questions for plant environments, he notes. "What are my cyber assets, where are my vulnerabilities, did an unauthorized change occur, and can I recover quickly if the worst case scenario happens."
More details are needed to know how Honda got breached. But the incident shows the need for organizations to pay more attention to securing plant floors against cybersecurity threats, adds John Bambenek, threat intelligence manager at Fidelis Cybersecurity.
"Large organizations have devices in low security environments that are necessary for their operations and in many cases, rely on factory employees not to take actions that undermine the security of those environments," Bambenek says. That is a mistake, he adds.
"These attacks can cause real impact and a factory not producing parts for a day has a large monetary impact to the organization."
A second example of lax security resulting in real dollars being lost
Text of article:
A Catholic health care system has agreed to pay $2.14 million to settle claims it failed to change the default settings after installing a new server, allowing public access to the private health records of 31,800 patients.
St. Joseph Health – which operates hospitals, community clinics, nursing facilities and provides a range of other health care services – agreed it was in potential violation of security rules of the Health Insurance Portability and Accountability Act (HIPAA).
The U.S. Department of Health and Human Services’ Office of Civil Rights (OCR) opened an investigation on Feb. 14, 2012, after St. Joseph Health reported that files containing electronic protected health information had been publicly accessible via Google and other browsers during the entire preceding year.
“The server SJH purchased to store the files included a file sharing application whose default settings allowed anyone with an Internet connection to access them,” OCR said in an Oct. 17 statement announcing the settlement.
“Upon implementation of this server and the file sharing application, SJH did not examine or modify it,” the statement continued. “As a result, the public had unrestricted access to PDF files containing the ePHI of 31,800 individuals, including patient names, health statuses, diagnoses, and demographic information.”
Federal investigators determined the health care nonprofit failed to conduct a thorough evaluation of the environmental and operational implications of installing the new server.
Also, multiple contractors hired by St. Joseph to assess risks and vulnerabilities of ePHI were brought on in a patchwork fashion that did not result in the enterprise-wide risk analysis required by HIPAA.
“Entities must not only conduct a comprehensive risk analysis, but must also evaluate and address potential security risks when implementing enterprise changes impacting ePHI,” OCR Director Jocelyn Samuels said in a statement. “The HIPAA Security Rule’s specific requirements to address environmental and operational changes are critical for the protection of patient information.”
In addition to the financial payment, St. Joseph Health agreed to a corrective action plan that includes a thorough risk analysis, implementation of a risk management plan and staff training.
The $2.14 million penalty brings the total amount of settlements for HIPAA security violations to $22.84 million this year, up sharply from $6.2 million in all of 2015.
Compliance requirements are often set out in flat documents. Sometimes PDFs, sometimes other formats, but they have a tendency to be a huge list of characteristics and checkboxes to be investigated and potentially remediated. They often come from industry standards bodies or governments. Security tools may be somewhat more flexible, encoded into a set of shell scripts that check and verify the systems after they are built. These are often shipped by upstream software providers when a breach or bug is found. Operational tools deal with the day-to-day building and management of systems, and might include components that are homegrown and some that come from vendors. These various sources and requirements play into the overall security picture of technical infrastructure.
For the purposes of compliance, what we really want is a common language, in code, that all audiences – compliance, security, and devops – can collaborate on, and that can then act on systems.
This is why InSpec was developed.
Removing legacy network services helps prevent unwanted access to systems. Some of these services have their uses, but many have modern replacements that were built with more security in mind. You may still find these services included in full-distribution installs from various vendors.
I've replaced the original SSH example; it no longer works on/applies to current releases of Linux; openssh no longer supports protocol 1, and the check for versions is not universal. http://undeadly.org/cgi?action=article&sid=20170501005206
The old version is here:
SSH supports two different protocol versions. The original version, SSHv1, was subject to a number of security issues. All systems must use SSHv2 instead to avoid these issues.
This directive is fairly common; it’s included in the security benchmarks published by CIS for a number of Linux and Unix systems that include SSH as a connection protocol. Many modern versions of these operating systems have version 2 as the default but include legacy support for version 1. It’s still a good idea to ensure that your systems are set to only use version 2.
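For historical reference, a check of this directive looked roughly like the following InSpec control (a sketch, not the original slide code; the `sshd_config` resource reads `/etc/ssh/sshd_config`). As noted above, modern OpenSSH drops protocol 1 entirely, so this control is unnecessary on current releases:

```ruby
# Historical sketch: verify sshd only accepts protocol version 2.
control 'ssh-protocol-2' do
  impact 1.0
  title 'Only use SSH Protocol Version 2'
  desc 'SSHv1 is subject to a number of security issues; all systems must use SSHv2.'

  describe sshd_config do
    its('Protocol') { should cmp 2 }
  end
end
```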
This test will tell you, on Red Hat-related Linux hosts, that the two packages xinetd and inetd are not installed on the system. The full InSpec control has detailed information about what the requirement is – do not run these deprecated services – as well as the location of the documentation (truncated here to fit on the slide). This example uses the NSA guidelines, and the configuration follows the headers included in the documentation, so only xinetd and inetd are included in this control. The other services would have similar controls that can be traced back to guidelines, or could carry CVE numbers, ticket numbers from a ticketing system, or other notes on where the requirements were adopted from: "Steve in security sent an email on June 20, 2019 telling us to clean this up".
The control gives you the ability to group security requirements together – I could also include the other services here if I wanted to, but can also make them their own controls.
The impact tells me this is a requirement – impacts range from 0.0 to 1.0, with higher values indicating greater severity.
The InSpec resources, the two "describe package" directives, tell InSpec to go looking on the target systems for those packages. InSpec figures out the correct tools to do this with.
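Put together, the control described above looks something like this sketch (the control ID, `ref` text, and URL placement are illustrative; see the actual profile for the real values):

```ruby
# Sketch of a control grouping two deprecated-service checks.
control 'deprecated-network-services' do
  impact 1.0
  title 'Do not install legacy network service super-servers'
  desc 'xinetd and inetd are deprecated; modern systems should not run them.'
  ref 'NSA Guide to the Secure Configuration of Red Hat Enterprise Linux'

  # InSpec picks the right package manager (rpm, dpkg, etc.) for the target.
  describe package('xinetd') do
    it { should_not be_installed }
  end

  describe package('inetd') do
    it { should_not be_installed }
  end
end
```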
Deprecated SSHv1 example
For bits like the ssh configurations, or network services, or other components that are considered more infrastructure than application, these practices are common, changes are periodically rolled into the source images for new hosts (or containers) and the old configurations are eventually purged from production. It’s a herd-immunity approach.
But what happens if the thing to be tested is affected by a continuously developed application? Like run time configurations for java, or your databases. Can you count on every team to always know all of the requirements? When the requirements change – we're moving all of our databases to a new port – how does that information get out to all teams, how is it rolled out across systems, and who ensures that nothing gets reverted in the future, even inadvertently?
The point here is that running a twice-yearly audit and then spending six months remediating issues is a dead-end task. With InSpec, applications can be shipped and deployed on hosts that you know meet your standards, and InSpec can then be used to make sure nothing drifts over time. Keep an eye on your systems regularly rather than just at audit.
InSpec's resources have powerful libraries for matching and checking the characteristics of individual atomic resources, like files or services. They also support more sophisticated verifications on the system. Individual configuration files can be interrogated for settings, as in the example here for auditd. Additionally, the ability to run arbitrary commands is useful when an upstream vendor ships a fix for a vulnerability as a shell script rather than a new package. These are fairly common for kernel-level issues that require multiple checks and changes in settings files, plus verification in the running kernel filesystem.
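Both styles can be sketched as follows (the expected values here are illustrative, not requirements from a specific benchmark):

```ruby
# Interrogate a specific configuration file with a purpose-built resource.
describe auditd_conf do
  its('max_log_file_action') { should cmp 'keep_logs' }
end

# Or run an arbitrary command and match on its output, e.g. verifying a
# kernel setting after applying a vendor-supplied mitigation script.
describe command('sysctl kernel.randomize_va_space') do
  its('exit_status') { should eq 0 }
  its('stdout') { should match(/=\s*2/) }
end
```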
A simple example that checks for settings on the /tmp directory.
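A /tmp check along those lines might look like this (the specific mount options are illustrative; many hardening baselines require some combination of nodev, nosuid, and noexec):

```ruby
# Verify /tmp is a separate mount with hardened options.
describe mount('/tmp') do
  it { should be_mounted }
  its('options') { should include 'nodev' }
  its('options') { should include 'nosuid' }
end
```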
If you have time, walk through the layout of a profile on github. The dev-sec.io ones are pretty complex, but have a lot of important stuff in them.
Large platform-focused profiles like linux-baseline include tests collected into subtopics for easier management and understanding. The os-* tests are in a separate file from tests that are looking at specific packages. This also shows again the amount of user-friendly information that can be included in the title and description of a control. This particular test I chose because it has a lot of tests for a single resource, giving a comprehensive set of checks for an important file.
Add upcoming events, webinars, etc to this slide
Using AWS. If you prefer some other cloud that provides CentOS 7, this example should still work fine!
Nothing special. This is the default ami available.
I'm going to do the next part with a wrapper profile example. Depending on the current version of Linux, the default install, and the state of linux-baseline, this error changes, but this setting for auditd turns up pretty often when using CentOS 7 on AWS. Other common findings include settings for the pseudo-random number generator (PRNG).
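The wrapper profile pulls in all of linux-baseline and skips the one control we've decided to accept. A minimal sketch (the control ID `os-xx` is a placeholder; check the baseline's source for the real ID of the failing auditd control):

```ruby
# controls/wrapper.rb in the wrapper profile.
# Assumes the wrapper's inspec.yml declares a dependency on linux-baseline, e.g.:
#   depends:
#     - name: linux-baseline
#       url: https://github.com/dev-sec/linux-baseline/archive/master.tar.gz

include_controls 'linux-baseline' do
  # Skip the auditd control that fails on stock CentOS 7 cloud images.
  skip_control 'os-xx'
end
```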
0 failures! Yay! A few minutes from a baseline off-the-shelf random image in the cloud to a system that meets our security needs!
You can use this diagram to show the relationship between the baseline profile and the wrapper profile