This document discusses using Google Cloud Identity's Secure LDAP service to enable centralized authentication for pfSense firewalls. It provides instructions for setting up the Secure LDAP service in Google Cloud, importing the client certificate into pfSense, and configuring pfSense to use the LDAP server for authentication. It also covers creating matching user groups on pfSense and Google Cloud to map privileges. The document contains troubleshooting tips and discusses additional uses for centralized LDAP authentication beyond pfSense administration logins.
Using Google Cloud Identity Secure LDAP with pfSense - Netgate Hangout October 2018
1. Using Google Cloud Identity Secure LDAP with pfSense
October 2018 Hangout
Jim Pingle
2. YouTube Live
If the video looks fuzzy, YouTube set the auto quality too low.
Click the gear and choose 720p!
3. About this Hangout
● Netgate News
● What is LDAP?
● Google Cloud Secure LDAP
● Example Use Cases
● Security Concerns
● Setup on Google Cloud
● Setup pfSense CE/pfSense 2.4.4
● Setup Factory 2.4.4-p1 or later
● Create Groups on pfSense
● Testing Authentication
● Using LDAP for pfSense Administrative Logins
● Other Uses
Google Partner Manager McCall McIntyre is in the audience today (Say hi!)
4. Netgate News
● TNSR now available on Netgate Appliances
– https://www.netgate.com/press-releases/tnsr-now-available-on-netgate-appliances.html
– Netgate SG-5100, XG-1537, and XG-1541 for now, more models in the future
● pfSense 2.4.4-RELEASE is out!
– If you have not upgraded yet, carefully read the release blog post, release notes, and upgrade guide
– https://www.netgate.com/blog/pfsense-2-4-4-release-now-available.html
– https://www.netgate.com/docs/pfsense/releases/2-4-4-new-features-and-changes.html
– https://www.netgate.com/docs/pfsense/install/upgrade-guide.html
– Do not attempt to upgrade existing packages or install new packages on older releases before upgrading to pfSense 2.4.4
● SG-5100 shipping now!
● SG-1000 is now End of Sale
– Still supported, but no new device sales
– New device coming soon to take its place, details coming!
● pfSense 2.3.x has reached its End of Life
– https://www.netgate.com/blog/pfsense-release-2-3-x-eol-reminder.html
5. Netgate News
● Netgate Dual-Ethernet MinnowBoard Turbot device offers
– MBT-4220 price lowered to $299
– MBT-2220 and MBT-4220 now have an optional “black flame” laser etching add-on
– MBT devices now ship with a credit card sized USB key pre-loaded with pfSense (use in bottom USB port)
– https://www.netgate.com/blog/netgate-dual-ethernet-minnowBoard-turbot-with-pfsense-special-offer.html
● Linux Foundation Networking survey of Communication Service Providers
– https://www.netgate.com/blog/csps-ready-to-steamroll-open-source-networking.html
– https://www.lightreading.com/nfv/nfv-specs-open-source/the-reality-of-open-networking-in-csp-transformation-/a/d-id/746620
● Jim Thompson spoke at the Embedded Linux Conference earlier this week; his talk was about the technologies behind TNSR and how it is changing the high-end router market
6. What is LDAP?
● Lightweight Directory Access Protocol
● Used for a variety of purposes, such as:
– Central Authentication & Authorization (VPN, computer/network/server logins, IMAP/POP3, web applications, appliances, etc.)
– Organization directory (e.g. e-mail contacts)
– Storing data about people/groups/units/entities
● Implemented in a variety of ways, and used or provided by several directory service offerings, such as:
– OpenLDAP
– Google Cloud Identity (now)
– Microsoft Active Directory
– Apple Open Directory
– Novell eDirectory
● Covered previously in other hangouts, the book, etc.
– https://www.netgate.com/resources/videos/radius-and-ldap-on-pfsense-24.html
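To make the directory idea concrete: an LDAP server stores entries as lists of attributes, and authenticating applications look entries up by filter before binding as the user. A minimal sketch in LDIF notation (all names here are hypothetical, not the layout Google Cloud Identity uses):

```
# A user entry in LDIF form (hypothetical names):
dn: uid=jsmith,ou=Users,dc=example,dc=com
objectClass: inetOrgPerson
uid: jsmith
cn: Jane Smith
mail: jsmith@example.com

# An authenticating application typically searches first...
#   base: dc=example,dc=com   filter: (uid=jsmith)
# ...then attempts to bind as the returned DN with the user's password.
```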
7. Google Cloud Secure LDAP
● Secure LDAP service that ties back to Google Cloud Identity
● Can be used for authenticating cloud-hosted or on-premises applications and services
● Companies that have already offloaded e-mail and drive storage to Google can now also use the service for LDAP-based central auth
– No need to maintain separate authentication infrastructures and accounts locally and on Google services
● Easy-to-use account management where users can maintain their own passwords
● Currently rolling out to Cloud Identity and G Suite Enterprise customers over the next few weeks
● https://cloud.google.com/blog/products/identity-security/simplifying-identity-and-access-management-for-more-businesses
● https://cloud.google.com/identity/
● The setup described in this Hangout is also covered in the online pfSense docs
– https://www.netgate.com/docs/pfsense/usermanager/google-gsuite-auth-source.html
8. Example Use Cases
● A company with multiple locations that uses G Suite Enterprise for e-mail and storage, does not want to run a local LDAP server, but still wants to take advantage of central authentication for firewalls at all locations
● A company that wants to use central authentication for VPNs, taking advantage of the accounts already set up in Cloud Identity
● Any other similar case where using the hosted service carries less overhead and management than maintaining a local service
9. Security Concerns
● Similar concerns to any hosted service, or to centrally located services shared across multiple locations in an organization
● The classic tradeoff here is ease of management vs. loss of control
● Since the service itself is not controlled locally, there is some level of trust/risk involved
– Do you trust Google to handle this task?
– If you are using Cloud Identity / G Suite, odds are your org has already decided that!
● Service is contingent on an active Internet connection and the service being up
– pfSense will fall back to local authentication in this case when used for web interface logins
– When used across multiple locations, the same connectivity concern applies there as well
– The primary factor there is reliability of the ISP or availability of redundant connectivity, which is not directly related to Google or this service specifically
– Service availability concerns are low, as Google has a good track record of reliability
● This does not open a channel through which Google can reach into your firewall or other devices
– Communication is initiated one way: the device queries the LDAP server, and the LDAP server responds with the results of the query
10. Setup on Google Cloud
● Currently requires an account using the "Cloud Premium" or "G Suite Enterprise" tier
● Follow Google’s setup document at https://support.google.com/cloudidentity/answer/9048516
– This must be followed exactly
– Not shown here because it varies by org and Google’s docs cover it thoroughly
● Download the certificate and its key for use by pfSense
● During the setup process, generate access credentials (username and password) to be used as bind credentials
– https://support.google.com/cloudidentity/answer/9048541#generate-access-codes
● Create any required groups and add members to these groups
– Note the exact names used, as you will need to make groups with the same names on pfSense later!
11. Setup on pfSense
●
First step is to import the certificate
– Open the certificate files from Google in a text editor (Notepad, Notepad++, UltraEdit, etc.)
– Navigate to System > Cert manager, Certificates tab
– Click Add/Sign to display the certificate import interface
– Change Method to Import an existing certificate
– Enter a Descriptive name, such as Google Cloud LDAP Client
– Copy and paste the contents of the downloaded certificate into the Certificate data box
– Copy and paste the contents of the downloaded key into the Private Key data box
– Click Save
●
Next steps depend on pfSense version (CE or Factory 2.4.4-p1)
12. Setup stunnel for CE or pfSense 2.4.4
●
On pfSense CE, and even on factory 2.4.4 and earlier, the LDAP client on the
firewall does not directly support an SSL client certificate, only a server certificate
●
The stunnel package works around this, setting up an encrypted tunnel to Google
Cloud Secure LDAP that can use the client certificate imported in the previous step
●
This requires stunnel package version 5.37; update the package if it is already
installed on pfSense 2.4.4 but out of date
●
If the firewall is not already running pfSense 2.4.4, upgrade it first
●
If the stunnel package is not installed, install it from System > Package Manager,
Available Packages tab
13. Setup stunnel for CE or pfSense 2.4.4
●
Next, configure stunnel to connect to Google Cloud Secure LDAP
●
Navigate to Services > STunnel
●
Click Add to create a new profile
●
Enter a Description for this connection, such as Google Cloud Secure LDAP
●
Check Client Mode
●
Set Listen on IP to 127.0.0.1
●
Set Listen on port to 1636
●
Set the Certificate to the entry imported previously, in this case Google Cloud LDAP Client
●
Set Redirects to IP to ldap.google.com
●
Set Redirects to port to 636
●
Click Save
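The GUI settings above amount to an stunnel client section along these lines (a sketch only; the certificate/key paths are placeholders, and the exact configuration pfSense generates from the package settings may differ):

```
[Google Cloud Secure LDAP]
client = yes
accept = 127.0.0.1:1636
connect = ldap.google.com:636
; client certificate and key imported from Google Cloud (placeholder paths)
cert = /path/to/google-ldap-client.crt
key = /path/to/google-ldap-client.key
```

The net effect: anything connecting to 127.0.0.1:1636 in plain TCP is wrapped in TLS with the client certificate and forwarded to ldap.google.com:636.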
14. Setup LDAP for CE or pfSense 2.4.4 (stunnel)
●
This scenario is for CE or Factory 2.4.4 using stunnel
●
Select System > User manager, Authentication servers tab
●
Click Add to create a new entry
●
Enter a Descriptive name for this LDAP server, such as Google Cloud Secure LDAP
●
Set Type to LDAP
●
Set the Hostname or IP address to 127.0.0.1 so pfSense will connect through stunnel
●
Set Port value to 1636
●
Set Transport to TCP-Standard
– Since stunnel handles the encryption, this connection uses plain TCP, but the unencrypted traffic only traverses localhost, so it is not exposed
●
Set Protocol version to 3
●
Set Server timeout to 25
●
Set Search scope to Entire tree
15. Setup LDAP for Factory 2.4.4-p1 or later
●
This scenario is for Factory 2.4.4-p1 or later using built-in LDAP Client certificate support
●
Select System > User manager, Authentication servers tab
●
Click Add to create a new entry
●
Enter a Descriptive name for this LDAP server, such as Google Cloud Secure LDAP
●
Set Type to LDAP
●
Set the Hostname or IP address to ldap.google.com
●
Set Port value to 636
●
Set Transport to SSL - Encrypted
●
Set Peer Certificate Authority to Global Root CA List
●
Set Client Certificate to the entry imported previously, in this case Google Cloud LDAP Client
●
Set Protocol version to 3
●
Set Server timeout to 25
●
Set Search scope to Entire tree
16. Common LDAP Server Entries
●
These settings are unique to your domain/account. The examples used in the hangout (pfsense.org) and
the docs (example.com) are demonstrations only and must be replaced with the actual domain
name and equivalent components!
– Set Base DN to the domain name in DN format
●
Ex: dc=example,dc=com
– Set Authentication containers to the Base DN prefixed with the Users organizational unit
●
Ex: ou=Users,dc=example,dc=com
– Uncheck Bind anonymous to show Bind Credentials
– Set Bind credentials to the Secure LDAP username and password that were created on Google Cloud earlier
●
Set User naming attribute to uid
●
Set Group naming attribute to cn
●
Set Group member attribute to memberOf
●
Click Save
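As a quick illustration of the DN mapping above, these hypothetical helper functions (illustrative only, not part of pfSense) show how a domain name becomes the Base DN and Authentication container values:

```python
def domain_to_base_dn(domain: str) -> str:
    """Convert a DNS domain name to LDAP Base DN form."""
    return ",".join(f"dc={label}" for label in domain.split("."))

def users_container(domain: str) -> str:
    """Prefix the Base DN with the Users organizational unit."""
    return "ou=Users," + domain_to_base_dn(domain)

print(domain_to_base_dn("example.com"))  # dc=example,dc=com
print(users_container("example.com"))    # ou=Users,dc=example,dc=com
```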
17. Create Groups on pfSense
●
When using LDAP auth for the pfSense WebGUI, permissions are
mapped to users and groups based on the values returned from LDAP
and entries that exist locally
●
If an LDAP user is a member of a group and that group exists on
pfSense with an identical name, then the user will have the privileges
assigned to that group
– Similarly, if an LDAP username matches a local user, the privileges of that user
also apply
●
Earlier, you made groups on Google Cloud and added members; now matching
entries need to be created on pfSense
18. Create Groups on pfSense
●
Create the group on pfSense
– Navigate to System > User Manager, Groups tab
– Click Add to make a new group entry
– Enter the Group name (Ex: fwadmins)
– Set the Scope to Remote
– Enter a Description, such as Remote Firewall Administrators
– Click Save
●
Edit the group again to add privileges
– Click the pencil icon on the row for the newly created group
– Click Add in the Assigned Privileges section
– Select the desired permissions for the group, for example: WebCfg - All pages
●
Do not select every item in this list! That would also select User - Config: Deny Config Write, which prevents users from making
changes to the configuration
– Click Save to store the privileges
19. Testing LDAP Authentication
●
Test from Diagnostics > Authentication
●
Select the Google Cloud Secure LDAP server from the list and enter valid credentials, then click Test
●
If auth was successful, it should also list any groups the user is a member of that were also found
locally on pfSense
– If auth worked but no groups were found, ensure that the name of the group matches on Google Cloud and on
pfSense, and ensure the user is a member of the group in the settings for the account on Google Cloud
●
If the authentication failed, check the main system log for errors and review every step in this
hangout and the online docs again
●
May need to run console menu options 16 (Restart PHP-FPM) and 11 (Restart webConfigurator) from the console/SSH after SSL changes to clear the LDAP environment settings
●
Only the username is checked; anything after the @ is ignored when entered
– For example, joe@example.com will auth the same as joe@movie.edu
– The domain is ignored, only the username is taken and authenticated inside of the configured LDAP containers
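This stripping behavior is simple to picture (an illustrative sketch, not pfSense's actual code):

```python
def ldap_login_name(entered: str) -> str:
    """Only the part before any '@' is used when authenticating
    against the configured LDAP containers; the domain is discarded."""
    return entered.split("@", 1)[0]

print(ldap_login_name("joe@example.com"))  # joe
print(ldap_login_name("joe@movie.edu"))    # joe
print(ldap_login_name("joe"))              # joe
```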
20. Use LDAP For pfSense Administration Logins
●
Assuming authentication was successful and showed the correct groups, the server can now be
used for authenticating users on pfSense!
– Note that currently this only works for the GUI, and not SSH
●
To change pfSense so it uses Google Cloud Secure LDAP for firewall authentication…
– Navigate to System > User manager, Settings tab
– Set the Authentication server to Google Cloud Secure LDAP
– Click Save
●
After completing those steps, log out and then back in using a Google account for your organization
●
If the account fails, see the previous troubleshooting steps
●
When LDAP authentication fails, local authentication is tried
– A local account such as the default admin user can be used to get back in and adjust settings as needed if the
LDAP server is failing authentication or unreachable
21. Alternate Uses
●
Use directly for VPN auth if all users have access
– Users still need certs for SSL/TLS auth in OpenVPN
– Can use auth without certs if needed (easier, but less secure)
●
Add another LDAP server entry using extended filter so that it
can only auth a single group, e.g. VPNusers, then use that
server for OpenVPN/IPsec
●
Central Captive Portal auth source for the entire company
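One way to write the extended filter mentioned above is sketched below; the group name, ou=Groups component, and base DN are examples only and must match how groups are actually published in your directory:

```python
def group_member_filter(group_cn: str, base_dn: str) -> str:
    """Build an LDAP extended query that matches only members of one group,
    suitable for restricting an authentication server entry to that group."""
    return f"&(objectClass=person)(memberOf=cn={group_cn},ou=Groups,{base_dn})"

print(group_member_filter("VPNusers", "dc=example,dc=com"))
```

The resulting string would go in the Extended query field of a second LDAP server entry, which is then selected as the authentication source for OpenVPN or IPsec.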
22. Conclusion
●
Questions?
●
Additional Resources for LDAP and Privileges:
– https://www.netgate.com/resources/videos/radius-and-ldap-on-pfsense-24.html
– https://www.netgate.com/resources/videos/user-management-and-privileges-on-pfsense-24.html
– https://www.netgate.com/docs/pfsense/book/usermanager/index.html
●
Ideas for hangout topics? Post on the forum, Reddit, etc.