Here are the slides from Rick Sherman's PuppetConf 2016 presentation called Why Network Automation Matters, and What You Can Do About It. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
The Rules of Network Automation - Interop/NYC 2014 - Jeremy Schulman
The document discusses network automation, noting that while network operations are currently very painful, automation can provide business benefits like velocity, agility, stability and lower costs. It evaluates options for automation, from vendor products to building from scratch, and advocates learning from other fields like DevOps that have successfully adopted automation. The document concludes by urging readers to start planning their network automation initiatives now while keeping in mind that culture change and seeing results will take time.
This document provides an overview of the Open Network Automation Platform (ONAP), including what it is, its architecture and components, scope, and use cases. Some key points:
- ONAP is an open source platform that automates lifecycle management of virtual network functions (VNFs) and services using an ETSI NFV framework.
- It manages the full lifecycle of VNFs and network services from design to deployment to monitoring using a model-driven approach.
- ONAP's architecture includes design-time and run-time components to onboard, deploy, and assure VNFs and end-to-end services across multi-cloud environments.
- Its initial Amsterdam release
This document provides an agenda and details for a Manila MuleSoft Meetup event on AWS integration with MuleSoft. The agenda includes introductions, presentations on running Mule runtime on AWS containers and setting up Runtime Fabric on EKS, and demos of integrating DynamoDB and S3 with MuleSoft. Housekeeping rules are outlined and there will be a quiz at the end for a training voucher prize. Speaker bios are provided for the two technical presenters.
This document provides an overview of OpenDataPlane (ODP), an open source framework for portable high performance data plane applications. ODP aims to provide common APIs and optimized implementations of those APIs for different hardware platforms to allow applications to be portable. It addresses the problem of vendor-specific networking SDKs by providing a shared design that abstracts the underlying hardware and allows applications to run on any platform. The document describes the key components of ODP including the APIs, implementations for different platforms, design principles, and typical packet processing flows using ODP.
Setting up an ONAP development environment is not easy. Development tools and practices are not collected in a single place. This project aims to collect and standardize that process.
Uwe Richter, Juniper Networks
Juniper Day, Prague, 13 May 2015
Network Automation Journey, A systems engineer NetOps perspective - Walid Shaari
Network devices play a crucial role; they are not just in the data center. They are in the Wi-Fi, VoIP, and WAN, and more recently in underlays and overlays. Network teams are essential for operations. It's about time we highlight to the configuration management community the importance of network teams and include them in our discussions. This talk describes the personal experience of a systems engineer kickstarting a network team into automation: most importantly, how and where to start, challenges faced, and progress made. The network team in question uses multi-vendor network devices in a large traditional enterprise.
NetDevOps: we do not hear that term as frequently as we should. Every time we hear about automation or configuration management, it is usually about the application, or else the systems that host the applications. What about the network systems and devices that interconnect and protect our services? This talk describes the journey a systems engineer took as part of an automation assignment with the network management team. Building on lessons learned and challenges faced with systems automation, how can one kickstart an automation project and gain small wins quickly? Where and how to start the journey? What to avoid? What to prioritise? How to overcome the automation engineer's lack of network skills, and the network engineers' lack of automation and Linux/Unix skills? What challenges were faced, and how were they overcome? Which fights should one give up? Where do I see network automation and configuration management as a systems engineer? What are the status quo and future expectations?
The document provides an overview of the various components that make up the BigBlueButton platform, including:
- Client-side components like the HTML5 client and server-side components running without a front-end.
- Programming languages and frameworks used include Node.js, Java, Scala, Groovy, Ruby, and more.
- Key components are the HTML5 client built with Meteor.js, the Etherpad collaborative editor, Nginx web server, MongoDB database, WebRTC SFU for media handling, FreeSwitch for audio streams, and Akka apps for managing meeting state.
- Other components discussed include Kurento for webcams and screensharing, recording and playback utilities, conversion
Riyadh Meetup 4 - SonarQube for Mule 4 Code Review - satyasekhar123
This document summarizes a virtual meetup about Mule 4 code review using SonarQube. The meetup agenda included introductions, a discussion of continuous inspection and SonarQube, and a demo. Continuous inspection is part of the software development lifecycle and provides continuous feedback on code quality. SonarQube is a tool that can analyze source code without execution to generate software metrics and identify issues. It was demonstrated at the meetup and supports code review in multiple languages. There was also an open discussion period for questions and suggestions for future meetup topics.
This presentation from the I Love APIs conference makes the case for why Node and Docker are great together for implementing a microservice architecture. It also provides a quick orientation for getting started with Docker Machine, Node, and Mongo with container linking and data volume containers.
This document provides the agenda and guidelines for the Mumbai MuleSoft Meetup #17 on GraphQL in Mule 4. The meetup will include introductions by organizers, a presentation on GraphQL and how to implement it in Mule 4, a demo, and a networking session. Attendees are asked to keep their videos on and write questions in the chat. The meetup aims to educate the community on GraphQL and encourage continued engagement through surveys and social media.
An overview of Docker and the container technology behind it. Lastly, we discuss a few tools that come in handy when managing a large number of containers.
Mike Weber - Nagios and Group Deployment of Service Checks - Nagios
This presentation will show how you can create groups of checks like CPU metrics, Oracle metrics or IIS metrics and push them to all of the hosts that require them. The presentation will provide a script that will allow you to select and implement hundreds of groups of checks that have been developed for NRPE, NCPA, WMI, NSClient++, NRDP and NRDS.
How to build continuously processing for 24/7 real-time data streaming platform? - GetInData
You can read our blog post about it here: https://getindata.com/blog/how-to-build-continuously-processing-for-24-7-real-time-data-streaming-platform/
The Art and Zen of Managing Nagios With Puppet - Mike Merideth
The document discusses using Puppet to manage Nagios configurations. It describes key Puppet features like exported resources, Hiera for separating code and data, and templates. It also discusses building Nagios configs using these features, provisioning new hosts, removing decommissioned hosts, and monitoring Puppet processes. The presenter then demonstrates these techniques in a Vagrant environment on GitHub.
MuleSoft Deployment Strategies (RTF vs Hybrid vs CloudHub) - Prashanth Kurimella
Differences between MuleSoft Deployment Strategies (RTF vs Hybrid vs CloudHub)
For additional information, read https://www.linkedin.com/pulse/mulesoft-deployment-strategies-rtf-vs-hybrid-cloudhub-kurimella/
One tool, two fabrics: Ansible and Nexus 9000 - Joel W. King
Ansible can be used to automate configuration of Cisco Nexus 9000 series switches running either NX-OS or Application Centric Infrastructure (ACI). It allows using YAML files, Jinja templates, and Python modules to provision and manage network infrastructure without relying on CLI commands. The presentation demonstrated using Ansible roles to configure NTP servers and backup settings for an ACI fabric by specifying variables in a CSV file and generating XML configuration files from templates.
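The workflow described above renders device configuration from variables rather than typing CLI commands. As a rough, stdlib-only Python sketch of that render-from-variables idea (the talk itself used Ansible roles with Jinja templates; the XML element and field names here are hypothetical illustrations, and real ACI configuration is pushed through the APIC API rather than printed):

```python
import csv
import io
from string import Template

# Hypothetical template standing in for the talk's Jinja-to-XML templates:
# one NTP provider element is rendered per row of a variables CSV.
NTP_TEMPLATE = Template('<ntpProvider name="$server" preferred="$preferred"/>')

def render_ntp_config(csv_text):
    """Render one XML snippet per CSV variable row (illustrative only)."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [NTP_TEMPLATE.substitute(row) for row in rows]

csv_vars = "server,preferred\n10.0.0.1,yes\n10.0.0.2,no\n"
for line in render_ntp_config(csv_vars):
    print(line)
```

The design point is the same as in the talk: the variables (CSV) are kept separate from the rendering logic (template), so changing the fabric means editing data, not code.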
Clash of Titans in SDN: OpenDaylight vs ONOS - Elisa Rojas - OpenNebula Project
OpenDaylight and ONOS are two leading open-source SDN controller platforms. OpenDaylight is a modular, extensible framework developed by a large community including many vendors. ONOS is focused on the needs of service providers and has quickly matured features for production use. Both use Java and OSGi and support OpenFlow and other southbound protocols, but have different architectures, communities, and goals.
The document provides an agenda for a MuleSoft Meetup Group meeting in Moscow on May 13, 2021. The agenda includes introductions, MuleSoft updates, a demo and discussion on building secure financial APIs, a networking break, and a demo and discussion on revealing OData capabilities with Mulesoft and connecting it to Salesforce and mobile apps.
This document discusses software-defined networking (SDN) and network automation using DevOps tools. It defines SDN as a programmatic framework to optimize network services delivery and management. It explains that SDN solutions can be either vendor-developed or custom-built. The document then discusses DevOps and how network engineers can integrate networks into DevOps workflows through practices like NetDevOps. It provides examples of controller-based and tool-based network abstraction using technologies like Ansible, Cisco ACI, and OpenDaylight. The rest of the document demonstrates network automation concepts and compares orchestration tools from vendors like Cisco, Ansible, Chef, and SaltStack.
The document outlines an agenda for a Mulesoft community meetup in Geneva, Switzerland. The agenda includes an introduction and networking session at 7:00pm, followed by a group discussion at 7:30pm to define future meeting topics and plans. Drinks and networking will follow at 8:30pm. The meetup leader, Maksym Dovgopolyi, will introduce himself and his experience with Mulesoft. The goals of the meetup are to help people be more successful with integrations and provide information on Mulesoft training, conferences, and resources. Future meetups will be planned every two months.
OSMC 2021 | Use OpenSource monitoring for an Enterprise Grade Platform - NETWAYS
There are many tools and frameworks for monitoring. Usually, when you think of an open source solution, you don't think of implementing it in a COTS product. Nevertheless, this session will show how you can bring tools such as Prometheus, Grafana and ELK into such an enterprise application platform. Monitoring performance, throughput and error rate is important for staying in control of your transactions. If you use a Service Bus or SOA/BPM suite product, there are many out-of-the-box diagnostics waiting for you; the puzzle is how to get them out in a useful way. Besides the many commercial solutions, open source tools can also help you out. You can export runtime diagnostics from the Diagnostics framework, monitor your SOA Composites, and trace down Service Bus statistics using Prometheus and Grafana. The session will elaborate on how to set up proper monitoring using these tools, also in a proactive way, since automated monitoring is a must for every application environment.
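Exporting such statistics to Prometheus means publishing them in its plain-text exposition format. A minimal sketch of what a custom exporter would emit (the metric name and labels are hypothetical; a real exporter would serve this text over HTTP on a `/metrics` endpoint):

```python
def exposition(name, help_text, mtype, samples):
    """Format samples in the Prometheus text exposition format.

    samples: list of (labels_dict, value) pairs.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {mtype}"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical Service Bus metric such an exporter might publish.
text = exposition(
    "servicebus_requests_total",
    "Requests handled per proxy service.",
    "counter",
    [({"proxy": "OrderService"}, 1042), ({"proxy": "BillingService"}, 87)],
)
print(text)
```

Prometheus scrapes this text periodically, and Grafana then queries Prometheus to chart throughput and error rates over time.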
PuppetConf 2016: How You Actually Get Hacked – Ben Hughes, Etsy - Puppet
Here are the slides from Ben Hughes's PuppetConf 2016 presentation called How You Actually Get Hacked. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
Cisco Automation with Puppet and onePK - PuppetConf 2013 - Puppet
"Cisco Automation with Puppet and onePK" by Jason Pfeifer, Technical Marketing Engineer, Cisco.
Presentation Overview: This session will provide an overview of the Cisco-developed Puppet functionality for management and configuration of Cisco devices.
Speaker Bio: Jason is a Cisco Technical Marketing Engineer focusing on programmability and automation of Cisco network devices. He is currently supporting, discussing, evangelizing, and writing applications against Cisco's onePK SDK. He also has a long-term love affair with Cisco's Embedded Event Manager.
PuppetConf 2016: A Year in Open Source: Automated Compliance With Puppet – Trevor Vaughan - Puppet
Here are the slides from Trevor Vaughan's PuppetConf 2016 presentation called A Year in Open Source: Automated Compliance With Puppet. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
Here are the slides from Farid Jiandani & Joe Onisick's PuppetConf 2016 presentation called Application Centric Automation with Puppet & Cisco. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
Puppet Enterprise is an automation platform that lets organizations define their infrastructure as code and automatically enforce that configuration. The presentation demonstrated how Puppet defines infrastructure in a common language and automates configuration across any environment. Benefits shown include significant increases in deployment speed, reductions in outages and security-fix time, and more frequent deployments. Suggested next steps are trying out Puppet Enterprise, or exploring learning resources to see how it can help deliver better software faster through infrastructure automation.
1. The document discusses building a small data center network for Dyn, focusing on lessons learned from two design iterations.
2. The first design used MPLS VPNs but had issues with routing and IPv6 support. The second design used virtual routing and forwarding with multiple routing tables to separate traffic and improve service mobility.
3. Key lessons included validating designs before deploying, automating network operations, and moving security policies to instances rather than the network to improve agility and isolate impacts.
This document compares existing CNI plugins for Kubernetes and provides descriptions of popular plugins like Flannel, Calico, Kube-router, and AWS VPC CNI. It explains that CNI plugins provide the interface between container runtimes and network implementations, and describes the CNI workflow and requirements for pod networking in Kubernetes.
USENIX LISA15: How TubeMogul Handles over One Trillion HTTP Requests a Month - Nicolas Brousse
TubeMogul grew from a few servers to over two thousand servers, handling over one trillion HTTP requests a month, each processed in less than 50 ms. To keep up with this fast growth, the SRE team had to implement an efficient continuous delivery infrastructure that allowed over 10,000 Puppet deployments and 8,500 application deployments in 2014. In this presentation, we will cover the nuts and bolts of the TubeMogul operations engineering team and how they overcame challenges.
A Kernel of Truth: Intrusion Detection and Attestation with eBPF - oholiab
"Attestation is hard" is something you might hear from security researchers tracking nation states and APTs, but it's actually pretty true for most network-connected systems!
Modern deployment methodologies mean that disparate teams create workloads for shared worker-hosts (ranging from Jenkins to Kubernetes and all the other orchestrators and CI tools in-between), meaning that at any given moment your hosts could be running any one of a number of services, connecting to who-knows-what on the internet.
So when your network-based intrusion detection system (IDS) opaquely declares that one of these machines has made an "anomalous" network connection, how do you even determine if it's business as usual? Sure you can log on to the host to try and figure it out, but (in case you hadn't noticed) computers are pretty fast these days, and once the connection is closed it might as well not have happened... Assuming it wasn't actually a reverse shell...
At Yelp we turned to the Linux kernel to tell us whodunit! Utilizing the Linux kernel's eBPF subsystem - an in-kernel VM with syscall hooking capabilities - we're able to aggregate metadata about the calling process tree for any internet-bound TCP connection by filtering IPs and ports in-kernel and enriching with process tree information in userland. The result is "pidtree-bcc": a supplementary IDS. Now whenever there's an alert for a suspicious connection, we just search for it in our SIEM (spoiler alert: it's nearly always an engineer doing something "innovative")! And the cherry on top? It's stupid fast with negligible overhead, creating a much higher signal-to-noise ratio than the kernel's firehose-like audit subsystem.
This talk will look at how you can tune the signal-to-noise ratio of your IDS by making it reflect your business logic and common usage patterns, get more work done by reducing MTTR for false positives, use eBPF and the kernel to do all the hard work for you, accidentally load test your new IDS by not filtering all RFC-1918 addresses, and abuse Docker to get to production ASAP!
As well as looking at some of the technologies that the kernel puts at your disposal, this talk will also tell pidtree-bcc's road from hackathon project to production system and how focus on demonstrating business value early on allowed the organization to give us buy-in to build and deploy a brand new project from scratch.
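The in-kernel half of the approach described above needs root and a BPF-capable kernel, but the userland half - enriching a connection event with the calling process tree - can be sketched from `/proc` alone. A minimal, Linux-only illustration of that enrichment step (this is not pidtree-bcc's actual code; the event shape is hypothetical):

```python
import os

def process_tree(pid):
    """Walk /proc parent links to build a (pid, comm) ancestry, child first."""
    chain = []
    while pid > 0:
        try:
            with open(f"/proc/{pid}/stat") as f:
                stat = f.read()
        except FileNotFoundError:
            break  # process exited while we were walking
        # comm is the parenthesised second field and may itself contain spaces,
        # so locate it by the last ')' rather than splitting on whitespace.
        comm = stat[stat.index("(") + 1 : stat.rindex(")")]
        after = stat[stat.rindex(")") + 2 :].split()
        chain.append((pid, comm))
        pid = int(after[1])  # ppid: the field right after the state character
    return chain

# Enrich a (hypothetical) suspicious-connection event with its ancestry.
event = {"daddr": "203.0.113.7", "dport": 443, "pid": os.getpid()}
event["tree"] = process_tree(event["pid"])
print(event["tree"])
```

Attached to an alert, that ancestry (e.g. shell → ssh vs. web-worker → curl) is often enough to tell "engineer doing something innovative" from an actual reverse shell.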
DevSecCon London 2019: A Kernel of Truth: Intrusion Detection and Attestation with eBPF - DevSecCon
Matt Carroll
Infrastructure Security Engineer at Yelp
"Attestation is hard" is something you might hear from security researchers tracking nation states and APTs, but it's actually pretty true for most network-connected systems!
Modern deployment methodologies mean that disparate teams create workloads for shared worker-hosts (ranging from Jenkins to Kubernetes and all the other orchestrators and CI tools in-between), meaning that at any given moment your hosts could be running any one of a number of services, connecting to who-knows-what on the internet.
So when your network-based intrusion detection system (IDS) opaquely declares that one of these machines has made an "anomalous" network connection, how do you even determine if it's business as usual? Sure you can log on to the host to try and figure it out, but (in case you hadn't noticed) computers are pretty fast these days, and once the connection is closed it might as well not have happened... Assuming it wasn't actually a reverse shell...
At Yelp we turned to the Linux kernel to tell us whodunit! Utilizing the Linux kernel's eBPF subsystem - an in-kernel VM with syscall hooking capabilities - we're able to aggregate metadata about the calling process tree for any internet-bound TCP connection by filtering IPs and ports in-kernel and enriching with process tree information in userland. The result is "pidtree-bcc": a supplementary IDS. Now whenever there's an alert for a suspicious connection, we just search for it in our SIEM (spoiler alert: it's nearly always an engineer doing something "innovative")! And the cherry on top? It's stupid fast with negligible overhead, creating a much higher signal-to-noise ratio than the kernel's firehose-like audit subsystems.
This talk will look at how you can tune the signal-to-noise ratio of your IDS by making it reflect your business logic and common usage patterns, get more work done by reducing MTTR for false positives, use eBPF and the kernel to do all the hard work for you, accidentally load test your new IDS by not filtering all RFC-1918 addresses, and abuse Docker to get to production ASAP!
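The RFC-1918 filtering the talk jokes about is easy to get wrong. A minimal sketch of the check (my own illustration using Python's standard `ipaddress` module, not pidtree-bcc's actual in-kernel filter, which operates on raw addresses inside the eBPF program):

```python
import ipaddress

# The three RFC-1918 private ranges. Connections into these are internal
# traffic; an internet-egress IDS would typically drop them at the filter
# so they never generate events.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(ip: str) -> bool:
    """Return True if `ip` falls inside any RFC-1918 private range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918_NETS)
```

Forgetting the 172.16.0.0/12 range (or writing it as /16) is exactly the kind of mistake that turns internal chatter into an accidental load test of the IDS.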
As well as looking at some of the technologies that the kernel puts at your disposal, this talk will also trace pidtree-bcc's road from hackathon project to production system, and how an early focus on demonstrating business value earned us the organizational buy-in to build and deploy a brand-new project from scratch.
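The userland "process tree enrichment" step the abstract describes can be sketched by walking the `PPid` chain in `/proc`. This is a simplified, Linux-only illustration; the function name and return shape are mine, not pidtree-bcc's API:

```python
import os

def proc_tree(pid: int) -> list:
    """Walk the parent chain of `pid` via /proc, returning
    [(pid, comm), ...] from the process up to PID 1 / init.
    This is the userland half of the design: the in-kernel eBPF
    filter fires, then userland enriches the event with ancestry."""
    tree = []
    while pid > 0:
        fields = {}
        try:
            with open(f"/proc/{pid}/status") as fh:
                for line in fh:
                    key, _, value = line.partition(":")
                    fields[key] = value.strip()
        except FileNotFoundError:
            break  # the process exited while we were walking
        tree.append((pid, fields.get("Name", "?")))
        ppid = int(fields.get("PPid", "0"))
        if ppid == pid:
            break
        pid = ppid
    return tree
```

Because the connection event is captured in-kernel at syscall time, this walk happens while the calling process (usually) still exists, sidestepping the "once the connection is closed it might as well not have happened" problem.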
This document provides an overview of Kubernetes 101. It begins with asking why Kubernetes is needed and provides a brief history of the project. It describes containers and container orchestration tools. It then covers the main components of Kubernetes architecture including pods, replica sets, deployments, services, and ingress. It provides examples of common Kubernetes manifest files and discusses basic Kubernetes primitives. It concludes with discussing DevOps practices after adopting Kubernetes and potential next steps to learn more advanced Kubernetes topics.
Container orchestration and microservices world – Karol Chrapek
This document discusses Novomatic Technologies Poland's adoption of container orchestration using Kubernetes. It provides background on Novomatic, explains why containers and Kubernetes were adopted, and summarizes the evolution of Kubernetes usage at Novomatic over time. Key points discussed include setting up development environments with Kubernetes, requirements for a PaaS platform, and lessons learned along the way in areas like infrastructure resources, application deployment, telemetry, and managing stateful applications.
Montreal Kubernetes Meetup: Developer-first workflows (for microservices) on ... – Ambassador Labs
1. The document discusses developer-first workflows for building and operating microservices on Kubernetes.
2. It recommends creating self-sufficient, autonomous teams and using Kubernetes, Docker, and Envoy to provide the basic infrastructure primitives needed for distributed workflows.
3. The strategies suggested depend on the service maturity level and include using similar development and production environments for prototyping, implementing software redundancy for production services, and defining service level objectives and network observability for internal dependencies.
This document discusses network automation using Ansible and OpenConfig/YANG. It provides an overview of moving from CLI scraping to using NETCONF and common data models like OpenConfig and YANG. It also demonstrates how Ansible can be used with Juniper network devices for automation through both standard and API modes. A demo is available on GitHub for automating OpenConfig configurations on Juniper devices using Ansible.
The document provides an overview of adding IEEE 802.15.4 and 6LoWPAN support to an embedded Linux device. It discusses the motivation, including the header size problem in IEEE 802.15.4 frames and how 6LoWPAN addresses this. It then describes the Linux-wpan project, supported hardware, configuration tools, and communication with RIOT and Contiki operating systems.
BPF & Cilium - Turning Linux into a Microservices-aware Operating System – Thomas Graf
Container runtimes return Linux to its original purpose: serving applications that interact directly with the kernel. At the same time, the Linux kernel is traditionally difficult to change and its development process is full of myths. A new, efficient in-kernel programming language called eBPF is changing this, allowing everyone to extend existing kernel components or glue them together in new forms without changing the kernel itself.
LCU14 310- Cisco ODP
---------------------------------------------------
Speaker: Robbie King
Date: September 17, 2014
---------------------------------------------------
★ Session Summary ★
Cisco to present their experience using ODP to provide portable accelerated access to crypto functions on various SoCs.
---------------------------------------------------
★ Resources ★
Zerista: http://lcu14.zerista.com/event/member/137757
Google Event: https://plus.google.com/u/0/events/ckmld1hll5jjijq11frbqmptet8
Video: https://www.youtube.com/watch?v=eFlTmslVK-Y&list=UUIVqQKxCyQLJS6xvSmfndLA
Etherpad: http://pad.linaro.org/p/lcu14-310
---------------------------------------------------
★ Event Details ★
Linaro Connect USA - #LCU14
September 15-19th, 2014
Hyatt Regency San Francisco Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
The document discusses running IEEE 802.15.4 low-power wireless networks under Linux. It describes the linux-wpan project, which provides native support for 802.15.4 radio devices and the 6LoWPAN standard in the Linux kernel. It also discusses the wpan-tools userspace utilities. The document outlines how to set up basic communication between Linux, RIOT and Contiki operating systems for IoT devices using the virtual loopback driver or USB dongles. It also covers link layer security, IPv6 routing protocols like RPL, and areas for future work such as mesh networking support.
Integrating Puppet and Gitolite for sysadmins cooperations – Luca Mazzaferro
These slides present a lightweight solution, based on integrating Puppet-Foreman with Gitolite, to the problem: how can many sysadmins work together in one environment without interfering with each other?
DevOps Days Boston 2017: Real-world Kubernetes for DevOps – Ambassador Labs
DevOps Days Boston 2017
Microservices is an increasingly popular approach to building cloud-native applications. Dozens of new technologies that streamline microservices development, such as Docker, Kubernetes, and Envoy, have been released over the past few years. But how do you actually use these technologies together to develop, deploy, and run microservices?
In this presentation, we’ll cover the nuances of deploying containerized applications on Kubernetes, including creating a Kubernetes manifest, debugging and logging, and how to build an automated continuous deployment pipeline. Then, we’ll do a brief tour of some of the advanced concepts related to microservices, including service mesh, canary deployments, resilience, and security.
Raul Leite discusses several key NFV concepts and bottlenecks including:
1) NFV architecture, which aims for hardware independence, automated network operation, and flexible application development.
2) Common NFV bottlenecks like packet loss, hypervisor overhead, and low throughput due to CPU and resource allocation issues.
3) Techniques to optimize NFV performance such as SR-IOV, PCI passthrough, hugepages, CPU pinning, and DPDK. SR-IOV and PCI passthrough provide direct access to network hardware while hugepages, pinning and DPDK improve CPU performance.
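CPU pinning, one of the optimizations listed above, can be demonstrated from Python's standard library. A Linux-only sketch of the concept (real NFV dataplanes pin via DPDK lcore settings, `isolcpus`, and hypervisor vCPU pinning rather than from Python):

```python
import os

def pin_to_cpus(cpus):
    """Pin the calling process to the given CPU set (Linux-only).
    Packet-processing threads are pinned like this so the scheduler
    never migrates them off their warmed caches, one of the main
    sources of NFV throughput loss described above."""
    os.sched_setaffinity(0, set(cpus))   # 0 = the calling process
    return os.sched_getaffinity(0)       # read back the effective mask
```

The same idea at the hypervisor level (vCPU-to-pCPU pinning) is what stops a neighbouring workload from evicting the VNF's cache lines mid-burst.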
Similar to PuppetConf 2016: Why Network Automation Matters, and What You Can Do About It – Rick Sherman, Puppet
Puppet camp2021 testing modules and controlrepo – Puppet
This document discusses testing Puppet code when using modules versus a control repository. It recommends starting with simple syntax and unit tests using PDK or rspec-puppet for modules, and using OnceOver for testing control repositories, as it is specially designed for this purpose. OnceOver allows defining classes, nodes, and a test matrix to run syntax, unit, and acceptance tests across different configurations. Moving from simple to more complex testing approaches like acceptance tests is suggested. PDK and OnceOver both have limitations for testing across operating systems that may require customizing spec tests. Infrastructure for running acceptance tests in VMs or containers is also discussed.
This document appears to be for a PuppetCamp 2021 presentation by Corey Osman of NWOPS, LLC. It includes information about Corey Osman and NWOPS, as well as sections on efficient development, presentation content, demo main points, Git strategies including single branch and environment branch strategies, and workflow improvements. Contact information is provided at the bottom.
The document discusses operational verification and how Puppet is working on a new module to provide more confidence in infrastructure health. It introduces the concept of adding check resources to catalogs to validate configurations and service health directly during Puppet runs. Examples are provided of how this could detect issues earlier than current methods. Next steps outlined include integrating checks into more resource types, fixing reporting, integrating into modules, and gathering feedback. This allows testing and monitoring to converge by embedding checks within configurations.
This document provides tips and tricks for using Puppet with VS Code, including links to settings examples and recommended extensions to install like Gitlens, Remote Development Pack, Puppet Extension, Ruby, YAML Extension, and PowerShell Extension. It also mentions there will be a demo.
- The document discusses various patterns and techniques the author has found useful when working with Puppet modules over 10+ years, including some that may be considered unorthodox or anti-patterns by some.
- Key topics covered include optimization of reusable modules, custom data types, Bolt tasks and plans, external facts, Hiera classification, ensuring resources for presence/absence, application abstraction with Tiny Puppet, and class-based noop management.
- The author argues that some established patterns like roles and profiles can evolve to be more flexible, and that running production nodes in noop mode with controls may be preferable to fully enforcing on all nodes.
Applying Roles and Profiles method to compliance code – Puppet
This document discusses adapting the roles and profiles design pattern to writing compliance code in Puppet modules. It begins by noting the challenges of writing compliance code, such as it touching many parts of nodes and leading to sprawling code. It then provides an overview of the roles and profiles pattern, which uses simple "front-end" roles/interfaces and more complex "back-end" profiles/implementations. The rest of the document discusses how to apply this pattern when authoring Puppet modules for compliance - including creating interface and implementation classes, using Hiera for configuration, and tools for reducing boilerplate code. It aims to provide a maintainable structure and simplify adapting to new compliance frameworks or requirements.
This document discusses Kinney Group's Puppet compliance framework for automating STIG compliance and reporting. It notes that customers often implement compliance Puppet code poorly or lack appropriate Puppet knowledge. The framework aims to standardize compliance modules that are data-driven and customizable. It addresses challenges like conflicting modules and keeping compliance current after implementation. The framework generates automated STIG checklists and plans future integration with Puppet Enterprise and Splunk for continued compliance reporting. Kinney Group cites practical experience implementing the framework for various military and government customers.
Enforce compliance policy with model-driven automation – Puppet
This document discusses model-driven automation for enforcing compliance. It begins with an overview of compliance benchmarks and the CIS benchmarks. It then discusses implementing benchmarks, common challenges around configuration drift and lack of visibility, and how to define compliance policy as code. The key points are that automation is essential for compliance at scale; a model-driven approach defines how a system should be configured and uses desired-state enforcement to keep systems compliant; and defining compliance policy as code, managing it with source control, and automating it with CI/CD helps achieve continuous compliance.
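The "desired-state enforcement" idea at the heart of this summary can be illustrated with a toy drift check (my own sketch, not Puppet's engine: Puppet expresses desired state as resources in a catalog, but the comparison step is conceptually the same):

```python
def drift_report(desired: dict, actual: dict) -> dict:
    """Compare a desired-state model against observed configuration.
    Returns {setting: (desired, actual)} for every non-compliant
    setting; an empty dict means the node is compliant."""
    return {
        key: (want, actual.get(key))
        for key, want in desired.items()
        if actual.get(key) != want
    }

# Hypothetical hardening baseline and an observed node:
baseline = {"PermitRootLogin": "no", "MaxAuthTries": 4}
observed = {"PermitRootLogin": "yes", "MaxAuthTries": 4}
```

Here `drift_report(baseline, observed)` flags `PermitRootLogin`; a desired-state engine would go one step further and remediate it on the next run, which is what keeps compliance continuous rather than point-in-time.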
This document discusses how organizations can move from a reactive approach to compliance to a proactive approach using automation. It notes that over 50% of CIOs cite security and compliance as a barrier to IT modernization. Puppet offers an end-to-end compliance solution that allows organizations to automatically eliminate configuration drift, enforce compliance at scale across operating systems and environments, and define policy as code. The solution helps organizations improve compliance from 50% to over 90% compliant. The document argues that taking a proactive automation approach to compliance can turn it into a competitive advantage by improving speed and innovation.
Automating IT management with Puppet + ServiceNow – Puppet
As the leading IT Service Management and IT Operations Management platform in the marketplace, ServiceNow is used by many organizations to address everything from self service IT requests to Change, Incident and Problem Management. The strength of the platform is in the workflows and processes that are built around the shared data model, represented in the CMDB. This provides the ‘single source of truth’ for the organization.
Puppet Enterprise is a leading automation platform focused on the IT Configuration Management and Compliance space. Puppet Enterprise has a unique perspective on the state of systems being managed, constantly being updated and kept accurate as part of the regular Puppet operation. Puppet Enterprise is the automation engine ensuring that the environment stays consistent and in compliance.
In this webinar, we will explore how to maximize the value of both solutions, with Puppet Enterprise automating the actions required to drive a change, and ServiceNow governing the process around that change, from definition to approval. We will introduce and demonstrate several published integration points between the two solutions, in the areas of Self-Service Infrastructure, Enriched Change Management and Automated Incident Registration.
This document promotes Puppet as a tool for hardening Windows environments. It states that Puppet can be used to harden Windows with one line of code, detect drift from desired configurations, report on missing or changing requirements, reverse engineer existing configurations, secure IIS, and export configurations to the cloud. Benefits of Puppet mentioned include hardening Windows environments, finding drift for investigation, easily passing audits, compliance reporting, easy exceptions, and exporting configurations. It also directs users to Puppet Forge modules for securing Windows and IIS.
Simplified Patch Management with Puppet - Oct. 2020 – Puppet
Does your company struggle with patching systems? If so, you’re not alone — most organizations have attempted to solve this issue by cobbling together multiple tools, processes, and different teams, which can make an already complicated issue worse.
Puppet helps keep hosts healthy, secure and compliant by replacing time-consuming and error prone patching processes with Puppet’s automated patching solution.
Join this webinar to learn how to do the following with Puppet:
• Eliminate manual patching processes with pre-built patching automation for Windows and Linux systems.
• Gain visibility into patching status across your estate, regardless of OS, with the new patching solution in the PE console.
• Ensure your systems are compliant and patched in a healthy state.
• See how Puppet Enterprise makes patch management easy across your Windows and Linux operating systems.
Presented by: Margaret Lee, Product Manager, Puppet, and Ajay Sridhar, Sr. Sales Engineer, Puppet.
The document discusses how Puppet can be used to accelerate adoption of Microsoft Azure. It describes lift and shift migration of on-premises workloads to Azure virtual machines. It also covers infrastructure as code using Puppet and Terraform for provisioning, configuration management using Puppet Bolt, and implementing immutable infrastructure patterns on Azure. Integrations with Azure services like Key Vault, Blob Storage and metadata service are presented. Patch management and inventory of Azure resources with Puppet are also summarized.
This document discusses using Puppet Catalog Diff to analyze the impact of changes between Puppet environments or catalogs. It provides the command line usage and options for Puppet Catalog Diff. It also discusses how to integrate Puppet Catalog Diff into CI/CD pipelines for automated impact analysis when merging code changes. Additional resources like GitHub projects and Dev.to posts are provided for learning more about diffing Puppet environments and catalogs.
ServiceNow and Puppet - better together, Kevin Reeuwijk – Puppet
ServiceNow and Puppet can be integrated in four key areas: 1) Self-service infrastructure allows non-Puppet experts to control infrastructure through a ServiceNow interface; 2) Enriched change management automatically generates ServiceNow change requests from Puppet changes and populates them with impact details; 3) Automated incident registration forwards details of configuration drift corrections in Puppet to ServiceNow to create incidents; and 4) Up-to-date asset management would periodically upload Puppet inventory data to ServiceNow to keep the CMDB accurate without disruptive discovery runs.
This document discusses how Puppet Relay uses Tekton pipelines to orchestrate containerized workflows. It provides an overview of how Tekton fits into the Relay architecture, with Tekton controllers managing taskrun pods to execute workflow steps defined in YAML. Triggers can initiate workflows based on events, with reusable and composable steps for tasks like provisioning infrastructure or clearing resources. Relay also includes features for parameters, secrets, outputs, and approvals to customize workflows. An ecosystem of open source integrations provides sample workflows and steps for common use cases.
100% Puppet Cloud Deployment of Legacy Software – Puppet
This document discusses deploying legacy software into the AWS cloud using Puppet. It proposes modeling AWS resources like security groups, autoscaling groups, and launch configurations as Puppet resources. This would allow Puppet to provision the underlying AWS infrastructure and configure servers launched in autoscaling groups. It acknowledges challenges around server reboots but suggests they can be addressed. In summary, it argues custom Puppet resources can easily model AWS resources and using Puppet to configure autoscaling servers is possible despite some challenges around rebooting servers during deployment.
This document discusses a partnership between Republic Polytechnic's School of Infocomm and Puppet to promote DevOps practices. It introduces several people involved with the partnership and outlines their mission to prepare more IT companies and individuals for jobs in the DevOps field through training courses. The document describes some short courses offered on DevOps topics and using the Puppet and Microsoft Azure platforms. It provides an example of how Republic Polytechnic has automated infrastructure configuration using Puppet to save time and reduce errors. There is a request at the end for readers to register their interest in DevOps by completing a survey.
This document discusses continuous compliance and DevSecOps best practices followed by financial services organizations.
Continuous compliance is defined as an ongoing process of proactive risk management that delivers predictable, transparent, and cost-effective compliance results. It involves continuously monitoring compliance controls, providing real-time alerts for failures and remediation recommendations, and maintaining up-to-date policies. Best practices for continuous compliance discussed include defining CIS controls and benchmarks, achieving transparent compliance dashboards and automated fixes for breaches.
DevSecOps is introduced as bringing security earlier in the application development lifecycle to minimize vulnerabilities. It aims to make everyone accountable for security. Challenges discussed include security teams struggling to keep up with DevOps pace and
The Dynamic Duo of Puppet and Vault tame SSL Certificates, Nick Maludy – Puppet
The document discusses using Puppet and Vault together to dynamically manage SSL certificates. Puppet can use the vault_cert resource to request signed certificates from Vault and configure services to use the certificates. On Windows, some additional logic is needed to retrieve certificates' thumbprints and bind services to certificates using those thumbprints. This approach provides automated certificate renewal and distribution across platforms.
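The "thumbprint" logic mentioned for Windows is straightforward once you know that a certificate thumbprint is the SHA-1 digest of the DER-encoded certificate. A minimal sketch (the helper name is mine; the Puppet module would compute this when binding a service to a certificate):

```python
import hashlib

def cert_thumbprint(der_bytes: bytes) -> str:
    """Windows-style certificate thumbprint: the SHA-1 digest of the
    DER-encoded certificate, rendered as uppercase hex. Services on
    Windows (e.g. HTTPS bindings) reference certificates by this value."""
    return hashlib.sha1(der_bytes).hexdigest().upper()
```

Note the thumbprint identifies a specific certificate, not a key pair, so every renewal from Vault produces a new thumbprint and the binding must be updated, which is exactly the extra logic the talk describes.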
AppSec PNW: Android and iOS Application Security with MobSF – Ajin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
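The mutation-testing idea can be illustrated with a deliberately tiny model, where a "chatbot" is just an utterance-to-response table. This is a toy of my own, far simpler than the paper's operators and architecture, but it shows the mutant/kill/score mechanics:

```python
def delete_intent(bot: dict, intent: str) -> dict:
    """Mutation operator (hypothetical): drop one intent from the design,
    emulating a designer forgetting a conversation path."""
    return {k: v for k, v in bot.items() if k != intent}

def run_scenario(bot: dict, scenario) -> bool:
    """A test scenario is a list of (utterance, expected_reply) turns;
    it passes only if every turn gets the expected reply."""
    return all(bot.get(utt) == expected for utt, expected in scenario)

def mutation_score(scenarios, mutants) -> float:
    """Fraction of mutants 'killed' (detected) by the scenario suite.
    A low score signals weak test scenarios, the gap the paper targets."""
    killed = sum(
        1 for m in mutants
        if any(not run_scenario(m, s) for s in scenarios)
    )
    return killed / len(mutants)
```

A suite that only exercises the greeting will kill a mutant that breaks greeting but let a mutant that breaks flight booking survive, quantifying exactly the "completeness and strength" the abstract says is currently unmeasured.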
High performance Serverless Java on AWS - GoTo Amsterdam 2024 – Vadym Kazulkin
Java has been one of the most popular programming languages for many years, but it has long had a hard time in the Serverless community. Java is known for high cold-start times and a high memory footprint compared to other programming languages like Node.js and Python. In this talk I'll look at the general best practices and techniques we can use to decrease memory consumption and cold-start times for Java Serverless development on AWS, including GraalVM (Native Image) and AWS's own offering SnapStart, based on Firecracker microVM snapshot-and-restore and CRaC (Coordinated Restore at Checkpoint) runtime hooks. I'll also provide plenty of benchmarking of Lambda functions, trying out various deployment package sizes, Lambda memory settings, Java compilation options, and HTTP (a)synchronous clients, and measuring their impact on cold and warm start times.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency – ScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Dandelion Hashtable: beyond billion requests per second on a commodity server – Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
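To make "closed addressing with bounded chains and instantly freed delete slots" concrete, here is a single-threaded Python toy of my own. It captures only the addressing idea, nothing like DLHT's lock-free, prefetching C implementation:

```python
class BoundedChainTable:
    """Toy closed-addressing table: each bucket is a chain of at most
    `bound` entries (loosely mirroring one cache line of slots), and a
    delete frees its slot immediately for reuse."""

    def __init__(self, nbuckets=8, bound=4):
        self.bound = bound
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value) -> bool:
        chain = self._bucket(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)   # update in place
                return True
        if len(chain) >= self.bound:
            return False                  # chain full: a real design resizes
        chain.append((key, value))
        return True

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def delete(self, key) -> bool:
        chain = self._bucket(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain.pop(i)              # slot reusable instantly
                return True
        return False
```

Contrast with open addressing, where a naive delete leaves a "tombstone" that cannot be reclaimed without blocking, which is the second bottleneck the abstract calls out.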
Monitoring and Managing Anomaly Detection on OpenShift – Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
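As a concrete baseline for item 1 above (anomaly-detection fundamentals), a simple z-score detector captures the core idea: flag values that sit far from the distribution the system normally produces. This is my own illustrative sketch; the tutorial itself trains a real model for edge deployment:

```python
import statistics

def zscore_anomalies(samples, threshold=3.0):
    """Return the values whose z-score exceeds `threshold`.
    z = |x - mean| / stdev; values more than `threshold` standard
    deviations from the mean are flagged as anomalous."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # constant signal: nothing stands out
    return [x for x in samples if abs(x - mean) / stdev > threshold]
```

Cheap detectors like this often run directly on the edge device, with the Kafka → data lake → Prometheus pipeline described above handling aggregation and alerting for the fleet.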
What is an RPA CoE? Session 1 – CoE Vision – DianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
How to Interpret Trends in the Kalyan Rajdhani Mix Chart – Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
inQuba Webinar: Mastering Customer Journey Management with Dr Graham Hill – LizaNolte
Here is the recording of the webinar 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find it both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc...DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
2. A Quick Introduction
● Professional Services
○ Identity and Policy Management
○ Workflow systems
● Security Business Unit
○ Cloud Architect
● Junos Manageability
○ PyEZ (Python micro-framework)
○ Ansible Modules
○ Onbox scripting
○ NetDev Evangelism
● Sr. Engineer - Ecosystem
○ Network Automation Czar
■ SME
○ Release Engineering
■ Puppet Agent
3. What makes networks difficult?
● Network devices have historically been closed systems with vendor specific CLIs
● Configurations are hundreds if not thousands of lines (per system)
● Configuration != Desired state
● Vendors are slow to introduce features, sometimes taking 18-24 months, and upgrade cycles are just as long
● Network Engineers typically do not have a Sys Admin or programming background
● Networks serve multiple applications
4. Series of Tubes!
Content Credit: Cumulus Networks and bgpmon.net
...or networks are a compound cluster something
9. The Puppet world today
● Platforms are supported via Puppet Agent
○ Cisco
■ NXOS
■ IOS-XR
○ Arista
■ EOS
○ Huawei
■ CloudEngine 12800
○ Cumulus
■ Cumulus Linux 2.x/3.x (x86)
● Variety of Puppet Modules
○ Vendor specific types
○ Puppet “NetDev” types
● Multiple methods of interacting with the device
○ Screen Scraping
○ API Bindings
○ NETCONF
What you can do right now
11. That’s great, but...
● Building Puppet Agents requires serious investment
● Implementations are fragmented
● Yes, there is some screen scraping in there
● Puppet netdev_stdlib not industry recognized
13. Enter the NETCONF
● XML based encoding
○ Vendor specific data models
● Configuration RPCs
○ get-config, edit-config, copy-config, delete-config, lock, unlock
● Operational state RPCs
○ Generally map to CLI “show” commands
● Transport: SSH, HTTPS, TLS, BEEP
IETF network management standard
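To make the RPCs above concrete, here is a minimal sketch (illustrative, not from the talk) of what a get-config request looks like on the wire, built with the Python standard library. A real client such as ncclient would also handle the SSH session and NETCONF message framing.

```python
# Sketch: building a NETCONF <get-config> RPC with the Python standard
# library. A real client (e.g. ncclient) also handles transport/framing.
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(source="running", message_id="101"):
    """Serialize a <get-config> RPC asking for the given datastore."""
    rpc = ET.Element("{%s}rpc" % NC_NS, {"message-id": message_id})
    get_config = ET.SubElement(rpc, "{%s}get-config" % NC_NS)
    src = ET.SubElement(get_config, "{%s}source" % NC_NS)
    ET.SubElement(src, "{%s}%s" % (NC_NS, source))
    return ET.tostring(rpc, encoding="unicode")

xml_text = build_get_config()
print(xml_text)
```

The same envelope shape carries edit-config, lock, and the other operations; only the inner element changes.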
14. A tale of two configs - NETCONF
IOS

<interface>
  <GigabitEthernet>
    <name>2</name>
    <description>core</description>
    <ip>
      <address>
        <primary>
          <address>192.168.2.3</address>
          <mask>255.255.255.0</mask>
        </primary>
      </address>
    </ip>
    <shutdown/>
  </GigabitEthernet>
</interface>

Junos

<interface>
  <name>ge-0/0/2</name>
  <description>core</description>
  <disable/>
  <unit>
    <name>0</name>
    <family>
      <inet>
        <address>
          <name>192.168.2.3/24</name>
        </address>
      </inet>
    </family>
  </unit>
</interface>
15. That’s great, but...
● Implementation is up to the vendor
○ Same problem - different format
● How in the hell do I know what data to send the device?
● Remember, NetEngs are often not programmers
○ How will I interpret this data?
○ How will I create and modify it?
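To make the "how will I interpret this data?" question concrete, here is a minimal sketch that turns the Junos snippet from the previous slide into a plain data structure, using only the Python standard library. Assume the XML arrived inside a NETCONF reply.

```python
# Sketch: interpreting vendor NETCONF XML with the Python standard
# library, using the Junos interface snippet shown earlier.
import xml.etree.ElementTree as ET

junos_xml = """
<interface>
  <name>ge-0/0/2</name>
  <description>core</description>
  <disable/>
  <unit>
    <name>0</name>
    <family>
      <inet>
        <address>
          <name>192.168.2.3/24</name>
        </address>
      </inet>
    </family>
  </unit>
</interface>
"""

root = ET.fromstring(junos_xml)
intf = {
    "name": root.findtext("name"),
    "description": root.findtext("description"),
    # presence of the empty <disable/> element means the interface is down
    "enabled": root.find("disable") is None,
    "address": root.findtext("unit/family/inet/address/name"),
}
print(intf)
```

The catch, of course, is that every path in this parser is Junos-specific; the IOS snippet on the same slide would need an entirely different one.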
17. YANG
● Human-readable representation of model
● Hierarchical data node representation
○ Can combine multiple models
● Built-in data types
○ String, Boolean, Custom
● Constraints
○ What is mandatory?
● Backwards compatibility rules
● Extensible
● Deviations
* Data is still vendor (or group) specific
IETF Data Modeling Language for NETCONF
container interfaces {
  list interface {
    key "name";
    description
      "The list of configured interfaces...";
    leaf name {
      type string;
      description
        "The name of the interface...";
    }
    leaf enabled {
      type boolean;
      default "true";
    }
  }
}
25. Project Goals
● Provide “Agentless” network device management
○ Also be able to use same code with an Agent
● Use standard protocols
○ NETCONF
○ gRPC*
● Provide established Puppet experience
○ Puppet DSL
○ Idempotency / noop
○ Puppet Graph
● Auto-generate as much as possible
○ Puppet Types
○ Puppet Providers
○ Tests
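The idempotency / noop goal above can be sketched as: compute the difference between current and desired state, push only that difference, and push nothing at all in noop mode. All names in this sketch are illustrative, not the module's actual code.

```python
# Sketch: idempotent apply with noop support. Only the diff between
# current and desired state would be written to the device.

def plan_changes(current, desired):
    """Return only the attributes that differ from the current state."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

def apply_changes(current, desired, noop=False):
    changes = plan_changes(current, desired)
    if noop or not changes:      # idempotent: no diff means no device write
        return dict(current), changes
    updated = dict(current)
    updated.update(changes)      # in reality: an edit-config RPC
    return updated, changes

state = {"description": "uplink", "enabled": True}
want = {"description": "core", "enabled": True}
new_state, diff = apply_changes(state, want)
print(new_state, diff)
```

Running the same apply a second time produces an empty diff, which is exactly the convergence behavior Puppet users expect.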
26. Leverage existing tools
pyang
● Python tool for validating and converting YANG data models
● Built a plugin for generating Puppet code from YANG models

net-netconf (kkirsche fork)
● Ruby library for NETCONF
● Added client-side support for NETCONF 1.1 (does not validate chunk sizes)
● Fixed various issues in the framework
● In discussions with the community maintainer on long-term maintenance direction

Do not re-invent the wheel - contribute to the community
27. Created Proof of Concept Module
vanilla_ice
Set of experimental Puppet Types and Providers (varying levels of completion)
● Artifacts created by code generation + human interaction
● Predominantly NETCONF based
○ Early gRPC investigation
● IOS-XE
○ ietf-interfaces
○ ietf-ospf
○ ietf-nvo
○ cisco-interfaces (ned)
● IOS-XR
○ cisco-ifmgr
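The "code generation + human interaction" workflow above boils down to: walk the YANG model, emit a Puppet type skeleton, then fill in the details by hand. Here is an illustrative sketch of that idea; the template and function are assumptions for illustration, not the actual pyang plugin.

```python
# Sketch: generating a Puppet type skeleton from leaf names that would
# come out of a parsed YANG model. Illustrative only, not the real plugin.

PUPPET_TYPE_TEMPLATE = """Puppet::Type.newtype(:{name}) do
  ensurable
  apply_to_device
{body}end
"""

def generate_puppet_type(type_name, key, properties):
    """Emit a Puppet type skeleton for one YANG list."""
    body = "  newparam(:%s) do\n    isnamevar\n  end\n" % key
    for prop in properties:
        body += "  newproperty(:%s)\n" % prop
    return PUPPET_TYPE_TEMPLATE.format(name=type_name, body=body)

src = generate_puppet_type("xe_ietf_interfaces", "name",
                           ["description", "ipv4_address_ip"])
print(src)
```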
29. Custom Type & Provider
Type
● Describes the "What"
● Lists all of the attributes for a resource

Provider
● Implements the "How"
● self.instances (Getter): what is currently set on the device
● flush (Setter): enforces the configuration on the device
Puppet::Type.newtype(:xe_ietf_interfaces) do
  ensurable
  apply_to_device

  newparam(:name) do
    desc 'The name of the interface'
    isnamevar
  end

  newproperty(:description) do
    desc 'A description of the interface'
  end

  newproperty(:ipv4_address_ip) do
    desc 'The IPv4 address on the interface.'
  end
end
31. Demo Goals
● Create / modify / delete loopback interfaces via ietf-interfaces model
● Modify OSPF via ietf-ospf model
● noop + idempotency
● Show code generation
○ Type
○ self.instances (resources)
○ flush (writing to device)
What we’re going to show
32. Demo Environment
Using `puppet resource` (Getter) and `puppet apply` (Setter)
Local Machine
Puppet 4.7.0
CSR1000v
IOS-XE 16.03.01
NETCONF