“Identity management (IdM) describes the management of individual principals, their authentication, authorization, and privileges within or across system and enterprise boundaries with the goal of increasing security and productivity while decreasing cost, downtime and repetitive tasks.”
In this short guide I demonstrate how to upgrade Red Hat IdM (FreeIPA) from RHEL 6 to RHEL 7.x.
2. Promote IdM (FreeIPA) to RHEL 7
Before we start upgrading IdM to RHEL 7, we need to ask: what is IdM?
“Identity management (IdM) describes the management of individual principals, their authentication, authorization, and privileges within or across system and enterprise boundaries with the goal of increasing security and productivity while decreasing cost, downtime and repetitive tasks.”
Why IdM? What kinds of problems does it solve?
• Identities
– Where are my users stored? What properties do they have? How is this data made available to systems and applications?
• Authentication
– What credentials do my users use to authenticate? Passwords? Smart cards? Special devices? Is there SSO? How can the same user access file stores and web applications without re-authenticating?
• Access control
– Which users have access to which systems, services, and applications? What commands can they run on those systems? What SELinux context is a user mapped to?
• Policies
– What is the required password strength? What are the automount rules? What are the Kerberos ticket policies?
When migrating an IdM server from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 or CentOS 7, the process is very similar to promoting a replica to a master:
1. A new server is created on Red Hat Enterprise Linux 7.
2. All data is migrated over to the new server.
3. All services, such as CRL and certificate creation, DNS management, and Kerberos KDC administration, are transitioned over to the new system.
Upgrading IdM into Red Hat 7.x 2
3. Overview of our lab:
Red Hat 6:
OS: RHEL 6.7
IPA version: 3.x
IP: 192.168.100.20
hostname: ipa01.rhlab.dev
DNS domain: rhlab.dev
Red Hat 7:
OS: RHEL 7.1
IPA version: 4.x
IP: 192.168.100.21
hostname: ipa02.rhlab.dev
DNS domain: rhlab.dev
Client:
OS: RHEL 6.7
IPA client version: 3.x
IP: 192.168.100.22
hostname: client.rhlab.dev
DNS domain: rhlab.dev
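FreeIPA installation and replica setup are sensitive to hostname resolution, so before installing anything it is worth confirming that forward (and ideally reverse) lookups work for the lab hosts, for example:

[root@ipa01 ~]# getent hosts ipa02.rhlab.dev
[root@ipa01 ~]# getent hosts 192.168.100.21

The first should return 192.168.100.21 and the second should map back to ipa02.rhlab.dev (via /etc/hosts or DNS).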
4. Upgrading process:
Assuming IPA is already installed on RHEL 6.7, migrating from RHEL 6 to 7 involves the following steps:
1. Update RHEL 6 to the latest version, including the IPA packages.
[root@ipa01 ~]# yum update ipa-*
2. Configure the firewall, if required, on RHEL 7.
[root@ipa02 ~]# firewall-cmd --permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,464/tcp,88/udp,464/udp,22/tcp}
[root@ipa02 ~]# firewall-cmd --reload
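If you prefer to script this step, the same port set can be generated with a small loop. The sketch below only prints the firewall-cmd calls (pipe the output to sh as root to apply them). Note that 53/tcp and 53/udp are my addition here, since this replica will also serve DNS (--setup-dns later in the guide); drop them if you do not run DNS.

```shell
# Print the firewall-cmd calls for the IdM port set used in this guide.
# 53/tcp and 53/udp are added for the DNS service (--setup-dns).
idm_firewall_cmds() {
  local p
  for p in 80/tcp 443/tcp 389/tcp 636/tcp 88/tcp 464/tcp 22/tcp 53/tcp \
           88/udp 464/udp 53/udp; do
    echo "firewall-cmd --permanent --add-port=${p}"
  done
  echo "firewall-cmd --reload"
}
idm_firewall_cmds
```

Run as `idm_firewall_cmds | sh` (as root) once you are happy with the list.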
3. Install the IdM packages on RHEL 7.
[root@ipa02 ~]# yum install ipa-server ipa-server-dns -y
4. Copy the Python schema update script from RHEL 7 to RHEL 6.
[root@ipa02 ~]# scp /usr/share/ipa/copy-schema-to-ca.py ipa01:/root/
5. Run the schema update script on RHEL 6.
[root@ipa01 ~]# python copy-schema-to-ca.py
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/60kerberos.ldif
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/60samba.ldif
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/60ipaconfig.ldif
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/60basev2.ldif
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/60basev3.ldif
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/60ipadns.ldif
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/61kerberos-ipav3.ldif
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/65ipasudo.ldif
ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/05rfc2247.ldif
ipa : INFO Restarting CA DS
ipa : INFO Schema updated successfully
6. On RHEL 6, create the replica file for RHEL 7.
[root@ipa01 ~]# ipa-replica-prepare ipa02.rhlab.dev --ip-address 192.168.100.21
Directory Manager (existing master) password:
Preparing replica for ipa02.rhlab.dev from ipa01.rhlab.dev
Creating SSL certificate for the Directory Server
Creating SSL certificate for the dogtag Directory Server
Saving dogtag Directory Server port
Creating SSL certificate for the Web Server
Exporting RA certificate
Copying additional files
Finalizing configuration
Packaging replica information into /var/lib/ipa/replica-info-ipa02.rhlab.dev.gpg
Adding DNS records for ipa02.rhlab.dev
Using reverse zone 100.168.192.in-addr.arpa.
The ipa-replica-prepare command was successful
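ipa-replica-install reads the replica file from the local disk of the new server, so the GPG file produced above must first be copied from ipa01 to ipa02. The target path below is an assumption; any location readable by root on ipa02 works:

```shell
# Copy the GPG-encrypted replica file to the new RHEL 7 server
[root@ipa01 ~]# scp /var/lib/ipa/replica-info-ipa02.rhlab.dev.gpg ipa02:/var/lib/ipa/
```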
7. Install the replica on RHEL 7: use the --setup-ca option to set up a Dogtag Certificate
System instance and the --setup-dns option to configure the DNS server. The replica
server's IP address in this example is 192.168.100.21.
[root@ipa02 ~]# ipa-replica-install --setup-ca --ip-address=192.168.100.21 -p password -w password -N --setup-dns --no-forwarders -U /var/lib/ipa/replica-info-ipa02.rhlab.dev.gpg
Run connection check to master
Check connection from replica to remote master 'ipa01.rhlab.dev':
Directory Service: Unsecure port (389): OK
Directory Service: Secure port (636): OK
Kerberos KDC: TCP (88): OK
Kerberos Kpasswd: TCP (464): OK
HTTP Server: Unsecure port (80): OK
HTTP Server: Secure port (443): OK
PKI-CA: Directory Service port (7389): OK
...
8. Verify the configuration on both systems.
◦ Verify that the IdM services are running:
[root@ipa02 ~]# ipactl status
Directory Service: RUNNING
krb5kdc Service: RUNNING
kadmin Service: RUNNING
named Service: RUNNING
ipa_memcached Service: RUNNING
httpd Service: RUNNING
pki-tomcatd Service: RUNNING
ipa-otpd Service: RUNNING
ipa: INFO: The ipactl command was successful
◦ Verify that both IdM CAs are configured as master servers.
[root@ipa02 ~]# kinit admin
[root@ipa02 ~]# ipa-replica-manage list
ipa01.rhlab.dev: master
ipa02.rhlab.dev: master
[root@ipa02 ~]# ipa-replica-manage list -v ipa02.rhlab.dev
ipa02.rhlab.dev: replica
last init status: None
last init ended: None
last update status: 0 Replica acquired successfully: Incremental
update started
last update ended: None
9. On RHEL 6, disable renewal of the CA subsystem certificates and CRL generation.
◦ Identify which server instance is the master CA server. Both CRL generation and
renewal operations are handled by the same CA server, so the master CA can be
identified by the renew_ca_cert post-save command being tracked by certmonger.
[root@ipa01 ~]# getcert list -d /var/lib/pki-ca/alias -n "subsystemCert cert-pki-ca" | grep post-save
post-save command: /usr/lib64/ipa/certmonger/renew_ca_cert "subsystemCert cert-pki-ca"
◦ On the original master CA, disable tracking for all of the original CA certificates.
[root@ipa01 ~]# getcert stop-tracking -d /var/lib/pki-ca/alias -n "auditSigningCert cert-pki-ca"
Request "20151127184547" removed.
[root@ipa01 ~]# getcert stop-tracking -d /var/lib/pki-ca/alias -n "ocspSigningCert cert-pki-ca"
Request "20151127184548" removed.
[root@ipa01 ~]# getcert stop-tracking -d /var/lib/pki-ca/alias -n "subsystemCert cert-pki-ca"
Request "20151127184549" removed.
[root@ipa01 ~]# getcert stop-tracking -d /etc/httpd/alias -n ipaCert
Request "20151127184550" removed.
◦ Reconfigure the original master CA to retrieve renewed certificates from the new
master CA.
1. Copy the renewal helper into the certmonger service directory, and set the
appropriate permissions.
[root@ipa01 ~]# cp /usr/share/ipa/ca_renewal /var/lib/certmonger/cas/ca_renewal
[root@ipa01 ~]# chmod 0600 /var/lib/certmonger/cas/ca_renewal
2. Update the SELinux configuration.
[root@ipa01 ~]# /sbin/restorecon /var/lib/certmonger/cas/ca_renewal
3. Restart certmonger.
[root@ipa01 ~]# service certmonger restart
4. Check that the retrieval CA is listed in the certmonger CA
configuration.
[root@ipa01 ~]# getcert list-cas
...
CA 'dogtag-ipa-retrieve-agent-submit':
is-default: no
ca-type: EXTERNAL
helper-location: /usr/libexec/certmonger/dogtag-ipa-retrieve-agent-submit
5. Get the CA certificate database PIN.
[root@ipa01 ~]# grep internal= /var/lib/pki-ca/conf/password.conf
6. Configure certmonger to track the certificates for external renewal. This
requires the database PIN.
[root@ipa01 ~]# getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n "auditSigningCert cert-pki-ca" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad "auditSigningCert cert-pki-ca"' -T "auditSigningCert cert-pki-ca" -P database_pin
New tracking request "20151127184743" added.
[root@ipa01 ~]# getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n "ocspSigningCert cert-pki-ca" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad "ocspSigningCert cert-pki-ca"' -T "ocspSigningCert cert-pki-ca" -P database_pin
New tracking request "20151127184744" added.
[root@ipa01 ~]# getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n "subsystemCert cert-pki-ca" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad "subsystemCert cert-pki-ca"' -T "subsystemCert cert-pki-ca" -P database_pin
New tracking request "20151127184745" added.
[root@ipa01 ~]# getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /etc/httpd/alias -n ipaCert -C /usr/lib64/ipa/certmonger/restart_httpd -T ipaCert -p /etc/httpd/alias/pwdfile.txt
New tracking request "20151127184746" added.
◦ Stop CRL generation on the original master CA.
1. Stop CA service.
[root@ipa01 ~]# service pki-cad stop
2. Open the CA configuration file.
[root@ipa01 ~]# vim /var/lib/pki-ca/conf/CS.cfg
3. Change the values of the ca.crl.MasterCRL.enableCRLCache and
ca.crl.MasterCRL.enableCRLUpdates parameters to false to disable CRL
generation.
ca.crl.MasterCRL.enableCRLCache=false
ca.crl.MasterCRL.enableCRLUpdates=false
4. Start the CA service.
[root@ipa01 ~]# service pki-cad start
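The CS.cfg edit can also be scripted with sed instead of vim. The sketch below demonstrates the substitution on a scratch copy so it can be run safely; apply the same expressions to /var/lib/pki-ca/conf/CS.cfg after stopping pki-cad and taking a backup:

```shell
# Demonstrate the CS.cfg change on a scratch file with the two relevant keys
cfg=$(mktemp)
printf '%s\n' 'ca.crl.MasterCRL.enableCRLCache=true' \
              'ca.crl.MasterCRL.enableCRLUpdates=true' > "$cfg"

# Flip both CRL parameters to false, matching the manual edit above
sed -i -e 's/^ca\.crl\.MasterCRL\.enableCRLCache=.*/ca.crl.MasterCRL.enableCRLCache=false/' \
       -e 's/^ca\.crl\.MasterCRL\.enableCRLUpdates=.*/ca.crl.MasterCRL.enableCRLUpdates=false/' \
       "$cfg"
grep enableCRL "$cfg"
```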
◦ Configure Apache to redirect CRL requests to the new master.
1. Open the CA proxy configuration.
[root@ipa01 ~]# vim /etc/httpd/conf.d/ipa-pki-proxy.conf
2. Uncomment the RewriteRule on the last line and replace the ipa01 server
URL with the new Red Hat Enterprise Linux 7 server URL.
RewriteRule ^/ipa/crl/MasterCRL.bin https://ipa02.rhlab.dev/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL [L,R=301,NC]
3. Restart Apache.
[root@ipa01 ~]# systemctl restart httpd.service
10. Configure the RHEL 7 IdM instance as the master.
◦ Configure CA renewal using the ipa-csreplica-manage utility.
[root@ipa02 ~]# ipa-csreplica-manage set-renewal-master
◦ Configure the new master CA to generate CRLs.
1. Stop CA service.
[root@ipa02 ~]# systemctl stop pki-tomcatd@pki-tomcat.service
2. Open the CA configuration file.
[root@ipa02 ~]# vim /etc/pki/pki-tomcat/ca/CS.cfg
3. Change the values of the ca.crl.MasterCRL.enableCRLCache and
ca.crl.MasterCRL.enableCRLUpdates parameters to true to enable CRL
generation.
ca.crl.MasterCRL.enableCRLCache=true
ca.crl.MasterCRL.enableCRLUpdates=true
4. Start CA service.
[root@ipa02 ~]# systemctl start pki-tomcatd@pki-tomcat.service
◦ Configure Apache to stop redirecting CRL requests. As a clone, all CRL requests were
routed to the original master. As the new master, this instance will respond to CRL
requests itself.
1. Open the CA proxy configuration.
[root@ipa02 ~]# vim /etc/httpd/conf.d/ipa-pki-proxy.conf
2. Comment out the RewriteRule on the last line.
#RewriteRule ^/ipa/crl/MasterCRL.bin https://ipa02.rhlab.dev/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL [L,R=301,NC]
3. Restart Apache.
[root@ipa02 ~]# systemctl restart httpd.service
4. Check whether the server is the certificate renewal master.
# ldapsearch -H ldap://127.0.0.1 -D 'cn=Directory Manager' -W -b cn=masters,cn=ipa,cn=etc,dc=rhlab,dc=dev '(ipaConfigString=caRenewalMaster)' -LLL
Enter LDAP Password:
dn: cn=CA,cn=ipa02.rhlab.dev,cn=masters,cn=ipa,cn=etc,dc=rhlab,dc=dev
objectClass: nsContainer
objectClass: ipaConfigObject
objectClass: top
ipaConfigString: enabledService
ipaConfigString: startOrder 50
ipaConfigString: caRenewalMaster
cn: CA
Note: "caRenewalMaster" should be present in the above output.
5. Check whether the server is the CRL generation master.
# grep -i ca.crl.MasterCRL.enableCRL /etc/pki/pki-tomcat/ca/CS.cfg
ca.crl.MasterCRL.enableCRLCache=true
ca.crl.MasterCRL.enableCRLUpdates=true
11. Remove the RHEL 6 replica from the RHEL 7 server.
◦ Stop all services on the RHEL 6 system; this forces domain discovery to the RHEL 7 server.
[root@ipa01 ~]# ipactl stop
Stopping CA Service
Stopping pki-ca: [ OK ]
Stopping HTTP Service
Stopping httpd: [ OK ]
Stopping MEMCACHE Service
Stopping ipa_memcached: [ OK ]
Stopping DNS Service
Stopping named: . [ OK ]
Stopping KPASSWD Service
Stopping Kerberos 5 Admin Server: [ OK ]
Stopping KDC Service
Stopping Kerberos 5 KDC: [ OK ]
Stopping Directory Service
Shutting down dirsrv:
RHLAB-DEV... [ OK ]
PKI-IPA... [ OK ]
◦ Decommission the RHEL 6 host (ipa01.rhlab.dev).
[root@ipa02 ~]# ipa-replica-manage del ipa01.rhlab.dev
Connection to 'ipa01.rhlab.dev' failed:
Forcing removal of ipa01.rhlab.dev
Skipping calculation to determine if one or more masters would be
orphaned.
Deleting replication agreements between ipa01.rhlab.dev and ipa02.rhlab.dev
Failed to get list of agreements from 'ipa01.rhlab.dev':
Forcing removal on 'ipa02.rhlab.dev'
Any DNA range on 'ipa01.rhlab.dev' will be lost
Deleted replication agreement from 'ipa02.rhlab.dev' to
'ipa01.rhlab.dev'
Background task created to clean replication data. This may take a
while.
This may be safely interrupted with Ctrl+C
◦ Remove the local IdM configuration on ipa01.rhlab.dev.
[root@ipa01 ~]# ipa-server-install --uninstall -U
12. Configure the client to pick up the new configuration.
◦ Open the sssd.conf file:
[root@client ~]# vim /etc/sssd/sssd.conf
◦ Replace ipa_server = _srv_, ipa01.rhlab.dev with:
ipa_server = _srv_, ipa02.rhlab.dev
dns_discovery_domain = rhlab.dev
◦ Make sure that the RHEL 7.1 IPA server's IP address is at the top of /etc/resolv.conf:
search rhlab.dev
nameserver 192.168.100.21
◦ Restart the sssd service and clear its cache:
service sssd stop ;rm -Rf /var/lib/sss/db/*; service sssd start
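Once sssd is back up with a clean cache, it is worth verifying that the client now authenticates against the new server. A quick check (the ipa CLI line assumes the ipa-admintools package is installed on the client):

```shell
# Kerberos authentication should now go through ipa02
[root@client ~]# kinit admin
# User lookup through sssd should succeed
[root@client ~]# id admin
# Show which IPA server the client is configured to talk to
[root@client ~]# ipa env server
```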
13. Create an additional RHEL 7 replica if required.
[root@ipa02 ~]# ipa-replica-prepare ipa03.rhlab.dev --ip-address 192.168.100.23
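To complete the additional replica (ipa03.rhlab.dev at 192.168.100.23, as prepared above), copy the replica file over and run the install, mirroring steps 6 and 7. This is a sketch; adjust the options (CA, DNS, forwarders) to match what the new replica should serve:

```shell
# Transfer the GPG-encrypted replica file to the new host
[root@ipa02 ~]# scp /var/lib/ipa/replica-info-ipa03.rhlab.dev.gpg ipa03:/var/lib/ipa/
# Install the replica on ipa03
[root@ipa03 ~]# ipa-replica-install --setup-ca --setup-dns --no-forwarders \
    /var/lib/ipa/replica-info-ipa03.rhlab.dev.gpg
```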