Reference Exam EX294 || EX407
https://tekneed.com/rhce-8-ex294-exam-practice-question-answer-collections/
https://gist.github.com/waseem-h/6793ba3328f27df1a815402710acb3ff
https://www.lisenet.com/2019/ansible-sample-exam-for-ex294/
https://ziyonotes.uz/rJt6DcqXr
Requirements
There are 18 questions in total.
You will need five RHEL 7 (or CentOS 7) virtual machines to be able to successfully complete
all questions.
One VM will be configured as an Ansible control node. The other four VMs will be used to apply
playbooks to solve the sample exam questions. The following FQDNs will be used throughout
the sample exam.
1. ansible-control.hl.local – Ansible control node
2. ansible2.hl.local – managed host
3. ansible3.hl.local – managed host
4. ansible4.hl.local – managed host
5. ansible5.hl.local – managed host
There are a couple of requirements that should be met before proceeding further:
1. ansible-control.hl.local server has passwordless SSH access to all managed servers
(using the root user).
2. ansible5.hl.local server has a 1GB secondary /dev/sdb disk attached.
3. There are no regular users created on any of the servers.
Tips and Suggestions
I tried to cover as many exam objectives as possible; note, however, that there will be no
questions related to dynamic inventory.
Some questions may depend on the outcome of others. Please read all questions before
proceeding.
Sample Exam Questions
Note: you have root access to all five servers.
Task 1: Ansible Installation and Configuration
Install the ansible package on the control node (including any dependencies) and configure the
following:
1. Create a regular user automation with the password of devops. Use this user for all
sample exam tasks.
2. All playbooks and other Ansible configuration that you create for this sample exam
should be stored in /home/automation/plays.
Create a configuration file /home/automation/plays/ansible.cfg to meet the following
requirements:
1. The roles path should include /home/automation/plays/roles, as well as any other
path that may be required for the course of the sample exam.
2. The inventory file path is /home/automation/plays/inventory.
3. Privilege escalation is disabled by default.
4. Ansible should be able to manage 10 hosts at a single time.
5. Ansible should connect to all managed nodes using the cloud_user user.
Create an inventory file /home/automation/plays/inventory with the following:
1. ansible2.hl.local is a member of the proxy host group.
2. ansible3.hl.local is a member of the webservers host group.
3. ansible4.hl.local is a member of the webservers host group.
4. ansible5.hl.local is a member of the database host group.
# Solution - Task 1
cat inventory
[proxy]
ansible2.hl.local
[webservers]
ansible3.hl.local
ansible4.hl.local
[database]
ansible5.hl.local
cat ansible.cfg
[defaults]
roles_path = ./roles:/usr/share/ansible/roles
inventory = ./inventory
remote_user = cloud_user
forks = 10
[privilege_escalation]
become = False
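The package installation and user creation steps are not shown above. A minimal sketch of those steps, run as root on the control node (which repository provides the ansible package depends on your environment, e.g. EPEL on CentOS 7):
yum install -y ansible
useradd automation
echo devops | passwd --stdin automation
mkdir -p /home/automation/plays/roles
chown -R automation:automation /home/automation/plays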
Task 2: Ad-Hoc Commands
Generate an SSH keypair on the control node. You can perform this step manually.
Write a script /home/automation/plays/adhoc that uses Ansible ad-hoc commands to achieve
the following:
o User automation is created on all inventory hosts (not the control node).
o SSH key (that you generated) is copied to all inventory hosts for the automation user and
stored in /home/automation/.ssh/authorized_keys.
o The automation user is allowed to elevate privileges on all inventory hosts without having to
provide a password.
After running the adhoc script on the control node as the automation user, you should be able to
SSH into all inventory hosts as the automation user without a password, as well as run all
privileged commands.
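No solution was provided for this task in the original collection. A minimal sketch of the adhoc script, assuming the key pair was generated as /home/automation/.ssh/id_rsa and that the ad-hoc commands connect as root (which already has passwordless SSH to the managed hosts):
cat adhoc
#!/bin/bash
# Create the automation user on all inventory hosts
ansible all -m user -a "name=automation" -u root
# Install the generated public key for the automation user
ansible all -m authorized_key -a "user=automation state=present key='{{ lookup('file', '/home/automation/.ssh/id_rsa.pub') }}'" -u root
# Allow passwordless privilege escalation via a sudoers drop-in file
ansible all -m copy -a "content='automation ALL=(ALL) NOPASSWD: ALL' dest=/etc/sudoers.d/automation mode=0440" -u root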
Task 3: File Content
Create a playbook /home/automation/plays/motd.yml that runs on all inventory hosts and
does the following:
1. The playbook should replace any existing content of /etc/motd with text. Text depends
on the host group.
2. On hosts in the proxy host group the line should be “Welcome to HAProxy server”.
3. On hosts in the webservers host group the line should be “Welcome to Apache server”.
4. On hosts in the database host group the line should be “Welcome to MySQL server”.
# Solution - Task 3
cat motd.yml
---
- name: Changing MOTD
  hosts: all
  become: yes
  tasks:
    - name: Copy the content to HAProxy
      copy:
        content: "Welcome to HAProxy server\n"
        dest: /etc/motd
      when: "'proxy' in group_names"
    - name: Copy the content to Apache
      copy:
        content: "Welcome to Apache server\n"
        dest: /etc/motd
      when: "'webservers' in group_names"
    - name: Copy the content to MySQL
      copy:
        content: "Welcome to MySQL server\n"
        dest: /etc/motd
      when: "'database' in group_names"
Task 4: Configure SSH Server
Create a playbook /home/automation/plays/sshd.yml that runs on all inventory hosts and
configures SSHD daemon as follows:
1. banner is set to /etc/motd
2. X11Forwarding is disabled
3. MaxAuthTries is set to 3
# Solution - Task 4
cat sshd.yml
---
- name: Change SSH configuration
  hosts: all
  become: yes
  tasks:
    - name: Change default banner path
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^Banner'
        line: 'Banner /etc/motd'
    - name: X11Forwarding is disabled
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^X11Forwarding'
        line: 'X11Forwarding no'
    - name: MaxAuthTries is set to 3
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?MaxAuthTries'
        line: 'MaxAuthTries 3'
    - name: Restart the sshd service
      service:
        name: sshd
        state: restarted
        enabled: yes
    - name: Check the configuration
      shell: "grep MaxAuthTries /etc/ssh/sshd_config; grep X11Forwarding /etc/ssh/sshd_config; grep Banner /etc/ssh/sshd_config"
      register: check_result
    - name: Results
      debug:
        msg: "{{ check_result.stdout }}"
Task 5: Ansible Vault
Create Ansible vault file /home/automation/plays/secret.yml. Encryption/decryption
password is devops.
Add the following variables to the vault:
1. user_password with value of devops
2. database_password with value of devops
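# Solution - Task 5
No playbook is needed here. A minimal sketch, run as the automation user from /home/automation/plays (the vault password devops is entered twice at the prompt, then the variables are added in the editor that opens):
ansible-vault create secret.yml
---
user_password: devops
database_password: devops
The vault can later be inspected with ansible-vault view secret.yml or changed with ansible-vault edit secret.yml.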
Task 6: Users and Groups
You have been provided with the list of users below.
Use /home/automation/plays/vars/user_list.yml file to save this content.
---
users:
  - username: alice
    uid: 1201
  - username: vincent
    uid: 1202
  - username: sandy
    uid: 2201
  - username: patrick
    uid: 2202
Create a playbook /home/automation/plays/users.yml that uses the vault file
/home/automation/plays/secret.yml to achieve the following:
1. Users whose user ID starts with 1 should be created on servers in the webservers host
group. User password should be used from the user_password variable.
2. Users whose user ID starts with 2 should be created on servers in the database host
group. User password should be used from the user_password variable.
3. All users should be members of a supplementary group wheel.
4. Shell should be set to /bin/bash for all users.
5. Account passwords should use the SHA512 hash format.
After running the playbook, users should be able to SSH into their respective servers without
passwords.
# Solution - Task 6
---
- name: Create users
  hosts: all
  become: yes
  vars_files:
    - ./vars/user_list.yml
    - ./secret.yml
  tasks:
    - name: Ensure the wheel group exists
      group:
        name: wheel
        state: present
    - name: Create users in webservers
      user:
        name: "{{ item.username }}"
        uid: "{{ item.uid }}"
        groups: wheel
        append: yes
        password: "{{ user_password | password_hash('sha512') }}"
        shell: /bin/bash
        update_password: on_create
      with_items: "{{ users }}"
      when:
        - inventory_hostname in groups['webservers']
        - "item.uid|string|first == '1'"
    - name: Create users in database
      user:
        name: "{{ item.username }}"
        uid: "{{ item.uid }}"
        groups: wheel
        append: yes
        password: "{{ user_password | password_hash('sha512') }}"
        shell: /bin/bash
        update_password: on_create
      with_items: "{{ users }}"
      when:
        - inventory_hostname in groups['database']
        - "item.uid|string|first == '2'"
Task 7: Scheduled Tasks
Create a playbook /home/automation/plays/regular_tasks.yml that runs on servers in the
proxy host group and does the following:
1. A root crontab record is created that runs every hour.
2. The cron job appends the file /var/log/time.log with the output from the date
command.
# Solution - Task 7
---
- name: Scheduled tasks
  hosts: proxy
  become: yes
  tasks:
    - name: Ensure the log file exists
      file:
        path: /var/log/time.log
        state: touch
        mode: 0644
    - name: Create cron job for the root user
      cron:
        name: "check time"
        minute: "0"
        user: root
        job: "date >> /var/log/time.log"
Task 8: Software Repositories
Create a playbook /home/automation/plays/repository.yml that runs on servers in the
database host group and does the following:
1. A YUM repository file is created.
2. The name of the repository is mysql56-community.
3. The description of the repository is “MySQL 5.6 YUM Repo”.
4. Repository baseurl is http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/.
5. Repository GPG key is at http://repo.mysql.com/RPM-GPG-KEY-mysql.
6. Repository GPG check is enabled.
7. Repository is enabled.
# Solution - Task 8
---
- name: Software repositories
  hosts: database
  become: yes
  tasks:
    - name: Create mysql repository
      yum_repository:
        name: mysql56-community
        description: "MySQL 5.6 YUM Repo"
        baseurl: "http://repo.mysql.com/yum/mysql-5.6-community/el/7/x86_64/"
        enabled: yes
        gpgcheck: yes
        gpgkey: "http://repo.mysql.com/RPM-GPG-KEY-mysql"
Task 9: Create and Work with Roles
Create a role called sample-mysql and store it in /home/automation/plays/roles. The role
should satisfy the following requirements:
1. A primary partition number 1 of size 800MB on device /dev/sdb is created.
2. An LVM volume group called vg_database is created that uses the primary partition
created above.
3. An LVM logical volume called lv_mysql is created of size 512MB in the volume group
vg_database.
4. An XFS filesystem on the logical volume lv_mysql is created.
5. Logical volume lv_mysql is permanently mounted on /mnt/mysql_backups.
6. mysql-community-server package is installed.
7. Firewall is configured to allow all incoming traffic on MySQL port TCP 3306.
8. MySQL root user password should be set from the variable database_password (see task
#5).
9. MySQL server should be started and enabled on boot.
10. MySQL server configuration file is generated from the my.cnf.j2 Jinja2 template with
the following content:
[mysqld]
bind_address = {{ ansible_default_ipv4.address }}
skip_name_resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Create a playbook /home/automation/plays/mysql.yml that uses the role and runs on hosts in
the database host group.
# Solution - Task 9
The complete role is located at https://github.com/khamidziyo/ex407/tree/master/roles
cat mysql.yml
---
- name: Install mysql role
  hosts: database
  become: yes
  vars_files:
    - secret.yml
  roles:
    - sample-mysql
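For reference, a minimal sketch of what the sample-mysql role's tasks/main.yml could look like. The module parameters and the root-password handling are assumptions (mysql_user needs the MySQL Python bindings on the managed node and assumes the default root account can initially log in without a password); the linked repository remains the authoritative version:
cat roles/sample-mysql/tasks/main.yml
---
- name: Create primary partition 1 of 800MB on /dev/sdb
  parted:
    device: /dev/sdb
    number: 1
    part_end: 800MiB
    state: present
- name: Create volume group vg_database
  lvg:
    vg: vg_database
    pvs: /dev/sdb1
- name: Create logical volume lv_mysql
  lvol:
    vg: vg_database
    lv: lv_mysql
    size: 512m
- name: Create XFS filesystem on lv_mysql
  filesystem:
    fstype: xfs
    dev: /dev/vg_database/lv_mysql
- name: Mount lv_mysql permanently on /mnt/mysql_backups
  mount:
    path: /mnt/mysql_backups
    src: /dev/vg_database/lv_mysql
    fstype: xfs
    state: mounted
- name: Install MySQL server
  yum:
    name: mysql-community-server
    state: present
- name: Deploy MySQL configuration from template
  template:
    src: my.cnf.j2
    dest: /etc/my.cnf
- name: Allow incoming traffic on TCP 3306
  firewalld:
    port: 3306/tcp
    permanent: yes
    immediate: yes
    state: enabled
- name: Start and enable MySQL
  service:
    name: mysqld
    state: started
    enabled: yes
- name: Set MySQL root password from the vault variable
  mysql_user:
    name: root
    password: "{{ database_password }}"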
Task 10: Create and Work with Roles (Some More)
Create a role called sample-apache and store it in /home/automation/plays/roles. The role
should satisfy the following requirements:
1. The httpd, mod_ssl and php packages are installed. Apache service is running and
enabled on boot.
2. Firewall is configured to allow all incoming traffic on HTTP port TCP 80 and HTTPS
port TCP 443.
3. Apache service should be restarted every time the file /var/www/html/index.html is
modified.
4. A Jinja2 template file index.html.j2 is used to create the file
/var/www/html/index.html with the following content:
The address of the server is: IPV4ADDRESS
IPV4ADDRESS is the IP address of the managed node.
Create a playbook /home/automation/plays/apache.yml that uses the role and runs on hosts
in the webservers host group.
# Solution - Task 10
The complete role is located at https://github.com/khamidziyo/ex407/tree/master/roles
cat apache.yml
---
- name: Configure apache
  hosts: webservers
  become: yes
  roles:
    - sample-apache
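Again for reference, a minimal sketch of the sample-apache role with assumed parameter choices (the linked repository remains the reference):
cat roles/sample-apache/tasks/main.yml
---
- name: Install httpd, mod_ssl and php
  yum:
    name:
      - httpd
      - mod_ssl
      - php
    state: present
- name: Allow incoming HTTP and HTTPS traffic
  firewalld:
    service: "{{ item }}"
    permanent: yes
    immediate: yes
    state: enabled
  with_items:
    - http
    - https
- name: Deploy index.html from the Jinja2 template
  template:
    src: index.html.j2
    dest: /var/www/html/index.html
  notify: restart apache
- name: Start and enable Apache
  service:
    name: httpd
    state: started
    enabled: yes
cat roles/sample-apache/handlers/main.yml
---
- name: restart apache
  service:
    name: httpd
    state: restarted
cat roles/sample-apache/templates/index.html.j2
The address of the server is: {{ ansible_default_ipv4.address }}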
Task 11: Download Roles from Ansible Galaxy and Use Them
Use Ansible Galaxy to download and install geerlingguy.haproxy role in
/home/automation/plays/roles.
Create a playbook /home/automation/plays/haproxy.yml that runs on servers in the proxy
host group and does the following:
1. Use the geerlingguy.haproxy role to load balance requests between hosts in the webservers
host group.
2. Use roundrobin load balancing method.
3. HAProxy backend servers should be configured for HTTP only (port 80).
4. Firewall is configured to allow all incoming traffic on port TCP 80.
If your playbook works, then doing “curl http://ansible2.hl.local/” should return output from
the web server (see task #10). Running the command again should return output from the other
web server.
# Solution - Task 11
---
- name: Configure HAPROXY
  hosts: proxy
  become: yes
  vars:
    haproxy_frontend_port: 80
    haproxy_frontend_mode: 'http'
    haproxy_backend_balance_method: 'roundrobin'
    haproxy_backend_servers:
      - name: app1
        address: ansible3.hl.local:80
      - name: app2
        address: ansible4.hl.local:80
  roles:
    - geerlingguy.haproxy
  tasks:
    - name: Ensure firewalld and its dependencies are installed
      yum:
        name: firewalld
        state: latest
    - name: Ensure firewalld is running
      service:
        name: firewalld
        state: started
        enabled: yes
    - name: Ensure firewalld allows incoming traffic on TCP 80
      firewalld:
        port: 80/tcp
        permanent: yes
        immediate: yes
        state: enabled
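Installing the role into the required path can be done with ansible-galaxy, run as the automation user:
ansible-galaxy install geerlingguy.haproxy -p /home/automation/plays/roles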
Task 12: Security
Create a playbook /home/automation/plays/selinux.yml that runs on hosts in the
webservers host group and does the following:
1. Uses the selinux RHEL system role.
2. Enables httpd_can_network_connect SELinux boolean.
3. The change must survive system reboot.
# Solution - Task 12
---
- name: Security playbook
  hosts: webservers
  become: yes
  vars:
    selinux_booleans:
      - name: httpd_can_network_connect
        state: 'on'
        persistent: 'yes'
  roles:
    - linux-system-roles.selinux
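Depending on how the role was obtained, it is referenced as rhel-system-roles.selinux (installed from the rhel-system-roles RPM into /usr/share/ansible/roles) or linux-system-roles.selinux (installed from Ansible Galaxy); adjust the roles entry accordingly. Assuming one of these sources:
yum install -y rhel-system-roles
# or
ansible-galaxy install linux-system-roles.selinux -p /home/automation/plays/roles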
Task 13: Use Conditionals to Control Play Execution
Create a playbook /home/automation/plays/sysctl.yml that runs on all inventory hosts and
does the following:
1. If a server has more than 2048MB of RAM, then parameter vm.swappiness is set to 10.
2. If a server has less than 2048MB of RAM, then the following error message is displayed:
Server memory less than 2048MB
# Solution - Task 13
---
- name: Use Conditionals to Control Play Execution
  hosts: all
  become: yes
  tasks:
    - name: Change vm.swappiness
      sysctl:
        name: vm.swappiness
        value: '10'
        state: present
      when: ansible_memtotal_mb > 2048
    - name: Report not enough memory
      debug:
        msg: "Server memory less than 2048MB. RAM size: {{ ansible_memtotal_mb }}"
      when: ansible_memtotal_mb < 2048
Task 14: Use Archiving
Create a playbook /home/automation/plays/archive.yml that runs on hosts in the database
host group and does the following:
1. A file /mnt/mysql_backups/database_list.txt is created that contains the following
line: dev,test,qa,prod.
2. A gzip archive of the file /mnt/mysql_backups/database_list.txt is created and
stored in /mnt/mysql_backups/archive.gz.
# Solution - Task 14
---
- name: Use Archiving
  hosts: database
  become: yes
  tasks:
    - name: Check if the backup directory exists
      stat:
        path: /mnt/mysql_backups/
      register: backup_directory_status
    - name: Create the directory if it does not exist
      file:
        path: /mnt/mysql_backups/
        state: directory
        mode: 0775
        owner: root
        group: root
      when: not backup_directory_status.stat.exists
    - name: Create the database list file
      copy:
        content: "dev,test,qa,prod"
        dest: /mnt/mysql_backups/database_list.txt
    - name: Create archive
      archive:
        path: /mnt/mysql_backups/database_list.txt
        dest: /mnt/mysql_backups/archive.gz
        format: gz
Task 15: Work with Ansible Facts
Create a playbook /home/automation/plays/facts.yml that runs on hosts in the database
host group and does the following:
1. A custom Ansible fact server_role=mysql is created that can be retrieved from
ansible_local.custom.sample_exam when using Ansible setup module.
# Solution - Task 15
---
- name: Work with Ansible Facts
  hosts: database
  become: yes
  tasks:
    - name: Ensure the facts directory exists
      file:
        path: /etc/ansible/facts.d
        state: directory
        recurse: yes
    - name: Create the custom fact file
      copy:
        content: "[sample_exam]\nserver_role=mysql\n"
        dest: /etc/ansible/facts.d/custom.fact
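To verify the custom fact after running the playbook (from the control node as the automation user):
ansible database -m setup -a "filter=ansible_local"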
Task 16: Software Packages
Create a playbook /home/automation/plays/packages.yml that runs on all inventory hosts
and does the following:
1. Installs the tcpdump and mailx packages on hosts in the proxy host group.
2. Installs the lsof and mailx packages on hosts in the database host group.
# Solution - Task 16
---
- name: Install packages
  hosts: all
  become: yes
  tasks:
    - name: Install tcpdump and mailx on hosts in the proxy host group
      yum:
        name:
          - tcpdump
          - mailx
        state: latest
      when: inventory_hostname in groups['proxy']
    - name: Install lsof and mailx on hosts in the database host group
      yum:
        name:
          - lsof
          - mailx
        state: latest
      when: inventory_hostname in groups['database']
Task 17: Services
Create a playbook /home/automation/plays/target.yml that runs on hosts in the webservers
host group and does the following:
1. Sets the default boot target to multi-user.
# Solution - Task 17
---
- name: Default boot target
  hosts: webservers
  become: yes
  tasks:
    - name: Set default boot target to multi-user
      file:
        src: /usr/lib/systemd/system/multi-user.target
        dest: /etc/systemd/system/default.target
        state: link
Task 18: Create and Use Templates to Create Customised Configuration Files
Create a playbook /home/automation/plays/server_list.yml that does the following:
1. Playbook uses a Jinja2 template server_list.j2 to create a file
/etc/server_list.txt on hosts in the database host group.
2. The file /etc/server_list.txt is owned by the automation user.
3. File permissions are set to 0600.
4. SELinux file label should be set to net_conf_t.
5. The content of the file is a list of FQDNs of all inventory hosts.
After running the playbook, the content of the file /etc/server_list.txt should be the
following:
ansible2.hl.local
ansible3.hl.local
ansible4.hl.local
ansible5.hl.local
Note: if the FQDN of any inventory host changes, re-running the playbook should update the file
with the new values.
# Solution - Task 18
cat server_list.j2
################
{% for host in groups.all %}
{{ hostvars[host].inventory_hostname }}
{% endfor %}
################
cat server_list.yml
---
- name: Create and Use Templates to Create Customised Configuration Files
  hosts: database
  become: yes
  tasks:
    - name: Create server list
      template:
        src: ./server_list.j2
        dest: /etc/server_list.txt
        owner: automation
        mode: '0600'
        setype: net_conf_t