Munin Monitoring Program
ISYS-565
Professor Verma
Section 3 – Group A
Nadim Ebadi
Vincent Viet
Dennis Vivar
Kurtis Briggs
Table of Contents
Executive Summary
Core Concepts
Munin Monitoring – Step-By-Step Tutorial
Munin Monitoring – Wireshark Evidence
References
Executive Summary
Our Project: Munin Monitoring – What is Munin?
We have utilized the Munin Monitoring tool for our project. Munin monitors and collects information about a managed device, such as a user's computer system. It is an open-source network resource monitoring tool that helps analyze resource trends: it surveys connected devices, documents their traffic, and monitors them continuously. Munin presents the monitored information as graphs through a web interface hosted on the Apache web server under the server's IP address, and this interface can be accessed through the user's web browser. The information Munin can monitor includes overall device/system performance, disk latency, disk usage, disk throughput, CPU usage, memory usage, and more. This information is useful for anticipating performance problems and for providing visibility into the capacity and utilization of resources on the monitored devices.
How Munin Functions
The Munin Monitoring program uses a Master-Nodes architecture. The Munin Master is responsible for gathering data from Munin nodes. It stores this data in RRD files (RRDtool is an open-source data logging and graphing system) and graphs the data on request in the user's web browser through a web server (Apache) under the server's IP address. Munin also checks whether the fetched values fall below or rise above specific thresholds (warning, critical) and, if configured to do so, sends alerts when this happens. The Munin Node is installed on every monitored server or device. It accepts connections from the Munin Master and runs plugins on demand to gather monitored data and report it back to the master. Munin places a strong emphasis on plug-and-play capability, with over 500 monitoring plugins currently available. A plug-and-play device or program is one whose specification allows a component to be discovered by the system without physical configuration or user intervention to resolve resource conflicts. A Munin plugin is a simple, specialized executable that a Munin node calls to gather and report current data and to describe how that data should be presented.
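To make that plugin contract concrete, below is a minimal sketch of what a Munin plugin can look like (a hypothetical shell script for illustration, not one of the plugins used in our project): when the node calls it with the argument config it describes the graph, and when called with no argument it prints the current value.

#!/bin/sh
# Hypothetical minimal Munin plugin: reports the number of logged-in users.
case "$1" in
    config)
        # Called by the node to learn how the data should be graphed.
        echo "graph_title Logged-in users"
        echo "graph_vlabel users"
        echo "users.label users"
        exit 0
        ;;
esac
# Default invocation: report the current value.
echo "users.value $(who | wc -l)"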
Using Munin and Wireshark In Our Project (Including Findings)
In our project, we used Munin by running two virtual machines on the same computer system, with the Munin Master installed on VM1 and the Munin Node installed on the cloned VM2. While we mainly used Munin to monitor and collect information from VM2 (the Munin Node) via VM1 (the Munin Master) and the Munin plugins, we also used Wireshark to analyze the incoming and outgoing transmissions between the two VMs. Wireshark is a free, open-source packet analyzer that can be used to intercept, log, and analyze traffic passing over a network. By following the TCP stream in the Wireshark capture log, we were able to read the unencrypted monitored data of VM2 (Munin Node) (e.g. disk latency, disk usage, disk throughput, CPU usage, memory usage) as it was transmitted between the Munin Master (VM1) and the Munin Node (VM2). Analyzing the capture log also showed that Munin uses TCP as its transport protocol, and we observed how the TCP connection between the VMs was established via the TCP Three-Way Handshake. Wireshark additionally makes it easier to troubleshoot the network and the monitored device (VM2) when an issue arises, identify those issues, gather information from the network and the monitored device, and optimize performance and security.
Core Concepts
In our project, we used the Munin Monitoring program to monitor, collect, and display crucial information about a managed device, such as overall device/system performance, disk usage, disk latency, disk throughput, CPU usage, and memory usage. Munin is a free, open-source software application that offers computer system monitoring, network monitoring, and infrastructure monitoring.
Munin monitors, collects, and displays information from the various devices that have the Munin Node installed on them. The Munin Master, installed on the main device, collects the gathered information from those nodes and publishes it through an Apache web server.
Note: The following items are required in order to run this Munin Monitoring project: a functioning computer system, VirtualBox (run under a NAT network), the Kali Linux OS (.OVA file) or another Debian-based OS (e.g. Debian 8), the latest Kali Linux packages (via sudo apt-get update && sudo apt-get upgrade), wireless or wired Internet access, the Munin Monitoring program (Munin Master, Munin Node, and Munin plugin packages), the Apache web server, and Wireshark.
How Munin Functions
The Munin monitoring tool is composed of three main components that must be installed: the "Munin Master," the "Munin Node," and the "Munin Plugins" (the plugins come preinstalled when the Munin Master and Munin Node packages are installed). In our project, the Munin Master and Munin Node components were installed on Kali Linux VMs, with the Munin Master on VM1 and the Munin Node on VM2, both running in VirtualBox on the same computer system. VirtualBox acts as our virtualization software, letting us set up the VMs and test the Munin monitoring tool on a single machine. Installing the Munin Master on a system allows that system to collect monitored data from other systems that have the Munin Node installed. Installing the Munin Node on a system allows that system to be monitored by Munin; the node gathers information via the Munin plugins and reports it to the system running the Munin Master. After collecting the reported data from its nodes, the Munin Master graphs the collected data and displays it through a web server such as Apache, where it can be viewed from a web browser under the server's designated IP address.
How Munin Was Used In Our Project
For our project, we ran the Munin monitoring tool on two Kali Linux (Debian-based) VMs in VirtualBox: a main VM (VM1) and a clone of it (VM2), both running on a single computer system. To use Munin across the two VMs, we ran a "Munin Master" instance on VM1 and a "Munin Node" instance on VM2. VM1 (Munin Master) collects and graphs/displays the monitored data reported by VM2, while VM2 (Munin Node) is the machine being monitored; the node gathers information via the Munin plugins and reports it to the master on VM1. We also configured Munin so that the two VMs could communicate with each other by referencing each VM's IP address, which was done in the munin.conf (VM1) and munin-node.conf (VM2) configuration files: munin.conf on VM1 points at VM2's address, and munin-node.conf on VM2 allows connections from VM1's address. This is further clarified in the "Step-by-Step" section. We also set up a "NAT Network" in VirtualBox for both VMs so that each VM could be assigned a different IP address and communicate with the other.
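As a rough sketch of these cross-references (using the addresses assigned below, 172.25.1.4 for the Munin Master and 172.25.1.5 for the Munin Node), the relevant entries end up looking roughly like this; the exact steps are given in the tutorial section:

# /etc/munin/munin.conf on VM1 (Munin Master): host tree pointing at the node
[MuninNode]
    address 172.25.1.5
    use_node_name yes

# /etc/munin/munin-node.conf on VM2 (Munin Node): allow the master to connect
allow ^172\.25\.1\.4$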
The IP address of VM1 (Munin Master) was assigned as 172.25.1.4 and the IP address of
VM2 (Munin Node) was assigned as 172.25.1.5. A ping to VM2 from VM1 was also executed to
ensure that communication between the two VMs was established.
(Please Zoom In On Screenshots Below)
VM1 (Munin Master: 172.25.1.4), VM2 (Munin Node: 172.25.1.5)
Ping to VM2 (Munin Node) from VM1 (Munin Master)
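A minimal sketch of the verification commands run in each VM's terminal (interface names may differ on your system):

# On each VM: confirm the assigned IP address
ifconfig eth0

# On VM1 (Munin Master): confirm VM2 (Munin Node) is reachable
ping -c 4 172.25.1.5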
Once the Munin configuration files were set up properly, VM1 (Munin Master) could collect and graph/display the data from the monitored VM2 (Munin Node). Apache served as the web server through which the Munin monitoring tool's output was published. Overall, the Munin Master (running on VM1) was able to collect and graph/display the system information of VM2 that was monitored by the Munin Node (running on VM2) on the Apache web server. Note that the Apache web server runs under the server's IP address, and this is where the collected Munin information can be accessed and viewed through a web browser. Wireshark was then used to view the network traffic, the TCP data packets, and the TCP stream as Munin monitored VM2 (Munin Node), which gathered and reported the information to VM1 (Munin Master) via the Munin plugins.
In addition, four core network management components are utilized with Munin: a manager, an agent, a managed device, and an NMS (Network Management System). VM1, which has the Munin Master installed on it, acts as the manager: the administrative computer system responsible for monitoring and managing a device/system on the network. The Munin Node, installed on VM2, acts as the agent: the software component running on the managed device (VM2) that is responsible for gathering information and reporting it to the manager (VM1 – Munin Master) via the Munin plugins. VM2 itself also acts as the managed device, since it is managed by the manager (VM1 – Munin Master) and runs the Munin Node agent. The Munin Monitoring program itself acts as the NMS (Network Management System): the software that runs on the manager and is responsible for monitoring and controlling managed devices throughout the network.
The figure below further illustrates how the Munin Monitoring program's Master-Nodes architecture works. In it, the systems with the Munin Node installed on them (VM2 in our project) act as agents and listen on port 4949. Information is gathered from these systems by the Munin Node and its plugins and reported to the Munin Master system (VM1 in our project). The Munin Master system acts as the manager and is responsible for collecting the reported information from the Munin Node systems and graphing the collected data on an Apache web server.
(Source: http://guide.munin-monitoring.org/en/latest/architecture/)
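Because the node speaks a plain-text protocol on TCP port 4949, its output can also be inspected by hand. A minimal sketch, assuming netcat (nc) is installed and that the connection is made from the master (VM1), whose IP address the node allows:

# From VM1 (Munin Master), talk to the Munin Node on VM2 directly
nc 172.25.1.5 4949
# The node greets with a banner; commands such as the following can then be typed:
#   list          (show the plugins enabled on the node)
#   fetch cpu     (print the current values reported by the "cpu" plugin)
#   config cpu    (print how the "cpu" graph should be drawn)
#   quit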
Overall, in our project the Munin Monitoring program is used to monitor, collect, and display information from a computer system (VM2 – Munin Node), such as overall system performance, disk usage, disk latency, disk throughput, CPU usage, and memory usage (via the Munin plugins). By using Munin, we can then view this monitored information through a web interface served by Apache under the server's IP address, which displays the monitored information as graphs.
In our project, after VM1 (Munin Master) collects the monitored data reported by VM2 (Munin Node), this data is graphed and can be displayed through the Apache web server in the user's web browser. More specifically, we could observe the Munin Monitoring program in action by opening VM1's web browser and entering the following address: server-ip-address/munin (e.g. 127.0.0.1/munin). This can be seen in the screenshots below.
(Please Zoom In On Screenshots Below)
Disk Usage of VM2 (Munin Node)
Disk Latency of VM2 (Munin Node)
Disk Throughput of VM2 (Munin Node)
Memory Usage of VM2 (Munin Node)
CPU Usage of VM2 (Munin Node)
In the screenshots above, when opening Munin on VM1 (Munin Master) via 127.0.0.1/munin, we can see that VM2 (Munin Node) is being monitored continuously by Munin. As a result, we can observe various kinds of information being reported from VM2 (Munin Node), such as overall system performance, disk usage, disk latency, disk throughput, CPU usage, and memory usage (via the Munin plugins). Note that Munin's monitoring updates every 5 minutes by default.
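This 5-minute cycle is driven by cron on the Munin Master; on Debian-based installs the packaged cron entry typically looks similar to the sketch below (the exact path and wrapper may differ by package version):

# /etc/cron.d/munin (roughly, as installed by the Debian munin package)
*/5 * * * *   munin   if [ -x /usr/bin/munin-cron ]; then /usr/bin/munin-cron; fi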
In our Munin simulation, we ran the Munin Monitoring program for a total of 40 minutes.
As observed in the above screenshots, a “4k Resolution Test Video” on YouTube was also
running on VM2 (Munin Node) to act as a stress test while Munin was monitoring VM2 (Munin
Node). Furthermore, a Wireshark capture was also running on VM2 (Munin Node) during the
Munin monitoring.
Analyzing the Munin graphs in the screenshots above, the following notable values were recorded during the monitoring test: disk usage for /run peaked at 5.87%, disk latency (device IO time) peaked at 37.28 ms, disk throughput peaked at 1019.23 kB read and 730.60 kB write, memory usage for apps peaked at 1.35 GB, and CPU usage for the system peaked at 30.49%.
Overall, in our project, we ultimately utilized Munin to monitor, collect, and display a
variety of information/data from the monitored VM2 (Munin Node).
Munin Monitoring – Step-By-Step Tutorial
Below is a step-by-step tutorial on how a user can install and configure the Munin Monitoring program on a Debian-based system, with the Munin Master installed on VM1 and the Munin Node installed on VM2 (via VirtualBox), and both VMs running on the same computer system.
Note: The Kali Linux OS is used for the VMs. Because Kali Linux is Debian-based, these instructions can also be followed on other Debian-based systems (e.g. Debian 8).
Setting up and configuring VirtualBox and Kali Linux
1. VirtualBox will be used as our virtualization software for running the Kali Linux VMs and Munin. First, download and install VirtualBox from here: https://www.virtualbox.org/wiki/Downloads
2. Then, download the Kali Linux OS .OVA file for your system; it can be found here: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-hyperv-image-download/
3. Next, import the Kali Linux .OVA file into VirtualBox.
(Note: To log in to the Kali Linux OS, the username is root and the password is toor.)
4. Since we need a cloned Kali Linux VM in addition to the original one, right-click the installed Kali Linux VM in VirtualBox, select the "Clone" option, and perform a "Full Clone." Make sure to check "Reinitialize the MAC address of all network cards" during the cloning.
4a. We must also set up a "NAT Network" in VirtualBox so that each VM can be assigned a different IP address, communicate with the other VM, and use Munin. To do so, open VirtualBox and click File → Preferences → Network → Add a NAT Network.
4b. In the "NAT Network Details" dialog box, set the network name to "Munin", set the Network CIDR to, for example, 172.25.1.0/24, and click OK.
4c. Now open the settings of both VMs (original and clone), go to the Network tab, set the "Attached to:" box to "NAT Network", and select the "Munin" NAT network created in step 4b. Click OK. Both VMs should now have different IP addresses and be able to communicate with one another, which can be verified by pinging the VMs. To check the IP addresses of the VMs, open a terminal within each VM and run the ifconfig command.
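The same NAT network setup can also be scripted from the host's command line. A minimal sketch using VBoxManage, assuming the VMs are named "Kali-VM1" and "Kali-VM2" (substitute your own VM names):

# Create the "Munin" NAT network with the 172.25.1.0/24 range
VBoxManage natnetwork add --netname Munin --network "172.25.1.0/24" --enable --dhcp on

# Attach the first network adapter of each VM to that NAT network
VBoxManage modifyvm "Kali-VM1" --nic1 natnetwork --nat-network1 Munin
VBoxManage modifyvm "Kali-VM2" --nic1 natnetwork --nat-network1 Munin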
5. Now we need to update the VMs. On both VMs, open the Kali Linux terminal and run the command: sudo apt-get update && sudo apt-get upgrade
6. With the original Kali Linux VM and the cloned Kali Linux VM installed and updated, we can install the Munin monitoring tool on each VM, with the Munin Master on the main VM (VM1) and the Munin Node on the cloned VM (VM2).
Installing Munin
7. The installation instructions for the Munin Master and Munin Node on a Debian-based OS (Kali Linux) can be found here: http://guide.munin-monitoring.org/en/latest/installation/install.html
(Note: We will be installing the "Munin Master" on VM1 and the "Munin Node" on VM2.)
7a. On the main VM (VM1) that will have the Munin Master engine installed on it, open the Kali
Linux Terminal and run the command: sudo apt-get install -y munin
7b. Next, on the cloned VM (VM2) that will have the Munin Node engine installed on it, open
the Kali Linux Terminal and run the command: sudo apt-get install -y munin-node
8. After installing the Munin Master and Munin Node engines, we must now install the required
packages/dependencies and configure the Munin Master and Munin Node so that they can
connect to and communicate with each other. Once configured and connected, the Munin Master
(which is running on VM1) will be able to collect and graph the reported system information (of
VM2) that was monitored by the Munin Node (which is running on VM2) on the Apache web
server. We must also make sure to properly install and configure the Apache web server on the
Munin Master.
Note that the Apache web server is run on the server’s IP address and this is where one can
access and view the collected information from Munin through a web browser.
This package installation and configuration for the Munin Master, Munin Node, and Apache for
a Debian-based OS (running Debian 8) can be set up by following the instructions below or by
navigating to this link: https://www.digitalocean.com/community/tutorials/how-to-install-the-munin-monitoring-tool-on-debian-8
Configuring the Munin Master (VM1)
8a. Open the Kali Linux terminal on the Munin Master (VM1) and run the following commands to install Apache and the required libraries. Apache is the web server on which the Munin web interface will run:
sudo apt-get install -y apache2
sudo apt-get install -y libcgi-fast-perl libapache2-mod-fcgid
8b. Check if the fcgid module is enabled by typing the following command in the Kali Linux
Terminal on the Munin Master (VM1):
/usr/sbin/apachectl -M | grep -i cgi
If the fcgid module is enabled, the output should be: fcgid_module (shared)
If the output is blank, the fcgid module is not enabled; enable it with: sudo a2enmod fcgid
8c. To start modifying the Munin Master on VM1, open the Kali Linux Terminal and run the
following commands to open the main configuration file (munin.conf):
cd /etc/munin
sudo nano munin.conf
In the munin.conf file, use /var/www/munin for the htmldir. Make sure to uncomment the below
lines by removing the # sign that precedes them. The munin.conf file should now look like this:
dbdir /var/lib/munin
htmldir /var/www/munin
logdir /var/log/munin
rundir /var/run/munin
...
tmpldir /etc/munin/templates
Create and chown the htmldir so that it is owned by the munin system user by running the
following commands in the Kali Linux Terminal of the Munin Master (VM1) (while still in the
/etc/munin directory):
sudo mkdir /var/www/munin
sudo chown munin:munin /var/www/munin
Change the name of the first host tree in the munin.conf to MuninMaster. The first host tree in
the munin.conf file should now look like this:
[MuninMaster]
address 127.0.0.1
use_node_name yes
Save and close the munin.conf file.
8d. Within the same /etc/munin directory, we will be modifying apache24.conf. To start
modifying, open the Kali Linux Terminal on the Munin Master (VM1) and run the following
command in the /etc/munin directory:
sudo nano apache24.conf
Modify the first line in this file to:
Alias /munin /var/www/munin
Delete the directory section in this file and replace it with:
<Directory /var/www/munin>
Require all granted
Options FollowSymLinks SymLinksIfOwnerMatch
</Directory>
In the last location section, remove the Require local line and replace it with:
<Location /munin-cgi/munin-cgi-graph>
Require all granted
Options FollowSymLinks SymLinksIfOwnerMatch
...
</Location>
Save and close the apache24.conf file
8e. Restart Munin and Apache by opening the Kali Linux Terminal on the Munin Master (VM1)
and running the commands:
sudo systemctl restart munin-node
sudo systemctl restart apache2
Configuring the Munin Node (VM2)
9. Open the munin-node.conf file by opening the Kali Linux Terminal on the Munin Node
(VM2) and running the command: sudo nano /etc/munin/munin-node.conf
9a. Towards the middle of the file, look for the allow ^127\.0\.0\.1$ line and add or modify an allow line so that it reflects the IP address of the Munin Master (VM1). Note that the IP address is written as a regular expression, conventionally with the dots escaped (e.g. 172.25.1.4 becomes ^172\.25\.1\.4$).
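With the addresses used in this project, the relevant lines in munin-node.conf on VM2 would look roughly like this (the localhost line can be kept):

allow ^127\.0\.0\.1$
allow ^172\.25\.1\.4$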
Save and close the munin-node.conf file
9b. Restart Munin by opening the Kali Linux Terminal on the Munin Node (VM2) and running
the command: sudo systemctl restart munin-node
10. Back on the Munin Master (VM1), open the munin.conf file by opening the Kali Linux
Terminal and running the command:
sudo nano /etc/munin/munin.conf
10a. Now, we need to insert a host tree in the munin.conf file for the (remote) node. The easiest
approach to that is to copy and modify the host tree of the master. Be sure to replace node-ip-
address with the IP address of the node you are adding:
[MuninNode]
address node-ip-address
use_node_name yes
Save and close the munin.conf file.
11. Then, restart Apache by opening the Kali Linux Terminal on the Munin Master (VM1) and
running the command: sudo systemctl restart apache2
Installing Munin Plugins
(Optional) 12. On the Munin Master (VM1) and the Munin Node (VM2), we can also enable extra plugins for Munin to use in order to monitor more data. A Munin plugins package should already have been installed along with Munin. If it was not, run the following command in the terminal of the Munin Master (VM1) and the Munin Node (VM2): sudo apt-get install munin-plugins-extra
12a. To view suggested plugins to install and enable, run the following command in the terminal of the Munin Master (VM1) and the Munin Node (VM2): sudo munin-node-configure --suggest
12b. Then, to install a specific plugin that is not currently in use, run the following command in the terminal of the Munin Master (VM1) and the Munin Node (VM2): sudo apt-get install nameofplugin
12c. Next, create the symbolic link that enables the Munin plugin by running the following command in the terminal of the Munin Master (VM1) and the Munin Node (VM2):
sudo ln -s /usr/share/munin/plugins/nameofplugin /etc/munin/plugins
12d. Lastly, restart the Munin Node by running the following command in the terminal of the Munin Master (VM1) and the Munin Node (VM2): sudo systemctl restart munin-node
Once the above steps are completed, the Munin Master (which is running on VM1) will be able
to collect and graph the reported system information (of VM2) that was monitored by the Munin
Node (which is running on VM2) on the Apache web server. Note that the Apache web server is
run under the server’s IP address and this is where one can access and view the collected
information from Munin through a web browser.
Running Munin
13. Now, to observe the Munin Monitoring program functioning, open the Munin Monitoring
program on the Munin Master (VM1) by opening VM1’s web browser and typing the following
address in the URL: server-ip-address/munin (e.g. 127.0.0.1/munin). Note that Munin’s
monitoring updates every 5 minutes by default.
14. After a system shutdown or restart, note that you must run the following commands in order
to get the Munin Monitoring program functioning again:
Run the following commands in the Terminal of VM1 (Munin Master):
sudo systemctl restart munin-node
sudo systemctl restart apache2
Run the following command in the Terminal of VM2 (Munin Node):
sudo systemctl restart munin-node
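Alternatively, the services can be set to start automatically at boot so that these manual restarts are not needed after every reboot (a sketch; unit names may vary slightly by distribution):

# On VM1 (Munin Master)
sudo systemctl enable munin-node apache2

# On VM2 (Munin Node)
sudo systemctl enable munin-node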
Incorporating Wireshark
15. Wireshark can then be incorporated to monitor network traffic as Munin runs and collects data on the VirtualBox VMs (VM1 and VM2). You can download and install Wireshark from here: https://www.wireshark.org/download.html
15a. Once downloaded and installed, open Wireshark and start a capture (preferably on VM2
(Munin Node)) as Munin runs on the VirtualBox VMs (VM1: Munin Master and VM2: Munin
Node). As a result, you will be able to view and observe the network traffic as the Munin
Monitoring program is running on the VMs.
Munin Monitoring – Wireshark Evidence
We utilized Wireshark to analyze the network traffic and view the data packets
containing monitored data which were being sent across the network as Munin monitored VM2
through the Munin Node. We also observed that the data found within the network traffic was
being gathered and reported from VM2 (Munin Node – Agent) to VM1 (Munin Master –
Manager) via Munin plugins and ultimately being collected by VM1 (Munin Master – Manager).
Note that the Wireshark capture was run on VM2 (Munin Node) during the Munin monitoring.
This can be seen in the Wireshark capture log file screenshots below:
(Please Zoom In On Screenshots Below)
Observing the Wireshark capture log file, we can see that Munin uses TCP as its transport protocol to transmit data packets between VM2 (Munin Node: 172.25.1.5) and VM1 (Munin Master: 172.25.1.4).
More specifically, analyzing the capture log screenshots above shows that each time Munin polls the Munin Node (installed on VM2), which happens every 5 minutes by default, TCP data packets containing the monitored information are sent from the VM2 (Munin Node) source (IP address: 172.25.1.5) to the VM1 (Munin Master) destination (IP address: 172.25.1.4) through port 4949.
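To isolate this traffic in a capture of your own, a filter on the Munin Node port can be used. A minimal sketch using Wireshark's command-line companion tshark, assuming the capture interface is eth0 (our project used the Wireshark GUI instead):

# Live capture of Munin traffic only (node port 4949)
tshark -i eth0 -f "tcp port 4949"

# Or, inside the Wireshark GUI, apply the display filter:
#   tcp.port == 4949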
For example, in capture #1, in regards to using the TCP Three-Way Handshake, we can
see that VM1 (Munin Master) (172.25.1.4) sends a SYN (synchronize) request to VM2 (Munin
Node) (172.25.1.5) in order to attempt to establish a connection with VM2.
In capture #2, we can see that VM2 (Munin Node) (172.25.1.5) sends a SYN ACK
(synchronize acknowledge) response back to VM1 (Munin Master) (172.25.1.4), ultimately
acknowledging VM1's SYN request to connect with VM2.
Then, in capture #3, we can see that VM1 (Munin Master) (172.25.1.4) sends an ACK (acknowledge) response back to VM2 (Munin Node) (172.25.1.5), acknowledging that VM1 has received the SYN ACK response from VM2. This completes the establishment of the connection between VM1 and VM2.
Lastly, in later captures such as #49-51, we can see that the data monitored by Munin is being sent in TCP data packets from VM2 (Munin Node) (172.25.1.5) to VM1 (Munin Master) (172.25.1.4) through port 4949.
Overall, throughout the Wireshark capture, Munin is using TCP as its transport protocol
to transmit data packets between VM2 (Munin Node: 172.25.1.5) and VM1 (Munin Master:
172.25.1.4) during the Munin monitoring.
Furthermore, if we right-click on a TCP packet in the Wireshark capture log file and follow the TCP stream, we can see all of the data monitored by Munin that was collected from VM2 (Munin Node) and transmitted between VM2 (Munin Node: 172.25.1.5) and VM1 (Munin Master: 172.25.1.4) during the monitoring. Note that we are able to read this data because the monitored data transmitted between VM2 (Munin Node) and VM1 (Munin Master) is not encrypted, which is a potential information security weakness. This unencrypted data can be seen in the screenshots below, and a command-line sketch of following the stream appears after the screenshot captions:
(Please Zoom In On Screenshots Below)
TCP Stream of VM2 (Munin Node)
TCP Stream: Fetching Disk Usage of VM2 (Munin Node)
TCP Stream: Fetching Disk Latency and Disk Throughput of VM2 (Munin Node)
TCP Stream: Fetching Memory Usage of VM2 (Munin Node)
TCP Stream: Fetching CPU Usage of VM2 (Munin Node)
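For reference, the same unencrypted exchange can also be pulled out of a saved capture on the command line. A minimal sketch with tshark, assuming the capture was saved as munin-capture.pcapng and that stream 0 is the master-node session of interest:

# Print the first TCP stream of the saved capture as ASCII
tshark -r munin-capture.pcapng -q -z follow,tcp,ascii,0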
References
Arora, Himanshu. “Wireshark – The Best Open Source Network Packet Analyzer.” ibm.com. IBM DeveloperWorks, 23 Sept. 2012. Web. 29 Apr. 2018. <https://www.ibm.com/developerworks/community/blogs/6e6f6d1b-95c3-46df-8a26-b7efd8ee4b57/entry/wireshark_the_best_open_source_network_packet_analyzer_part_i60?lang=en>.
Cisco Certified Expert. “Management Agent – Network Management.” www.ccexpert.us. Cisco Certified Expert, 6 Mar. 2017. Web. 12 May 2018. <https://www.ccexpert.us/network-management/management-agent.html>.
Decima, Fini. “How to Install the Munin Monitoring Tool on Debian 8.” digitalocean.com. DigitalOcean, Inc., 20 June 2015. Web. 28 Apr. 2018. <https://www.digitalocean.com/community/tutorials/how-to-install-the-munin-monitoring-tool-on-debian-8>.
Inet Daemon. “TCP 3-Way Handshake (SYN, SYN-ACK, ACK).” www.inetdaemon.com. Inet Daemon, 8 Jan. 2016. Web. 11 May 2018. <http://www.inetdaemon.com/tutorials/internet/tcp/3-way_handshake.shtml>.
Mishra, Atul. “Role of Agents in Distributed Network Management: A Review.” www.omicsonline.org. International Journal of Advancements in Technology, 1 Oct. 2010. Web. 11 May 2018. <https://www.omicsonline.org/open-access/role-of-agents-in-distributed-network-management-a-review-0976-4860-1-284-295.pdf.php?aid=35462>.
Munin Monitoring. “Installing Munin.” guide.munin-monitoring.org. Munin, 13 Aug. 2008. Web. 28 Apr. 2018. <http://guide.munin-monitoring.org/en/latest/installation/install.html>.
Munin Monitoring. “Munin’s Architecture.” guide.munin-monitoring.org. Munin, 13 Aug. 2008. Web. 5 May 2018. <http://guide.munin-monitoring.org/en/latest/architecture/index.html#components>.
Munin Monitoring. “Welcome to the Munin Guide.” guide.munin-monitoring.org. Munin, 13 Aug. 2008. Web. 28 Apr. 2018. <http://guide.munin-monitoring.org/en/latest/index.html>.
O’Connor, Tom. “Monitoring with Munin.” dzone.com. DevOps Zone, 6 Jan. 2012. Web. 10 May 2018. <https://dzone.com/articles/monitoring-munin>.
Wallen, Jack. “Review: Munin Network Resource Monitoring Tool.” techrepublic.com. TechRepublic, 9 Dec. 2009. Web. 5 May 2018. <https://www.techrepublic.com/blog/product-spotlight/review-munin-network-resource-monitoring-tool/>.

More Related Content

What's hot

OpenShift Meetup - Tokyo - Service Mesh and Serverless Overview
OpenShift Meetup - Tokyo - Service Mesh and Serverless OverviewOpenShift Meetup - Tokyo - Service Mesh and Serverless Overview
OpenShift Meetup - Tokyo - Service Mesh and Serverless OverviewMaría Angélica Bracho
 
HA Deployment Architecture with HAProxy and Keepalived
HA Deployment Architecture with HAProxy and KeepalivedHA Deployment Architecture with HAProxy and Keepalived
HA Deployment Architecture with HAProxy and KeepalivedGanapathi Kandaswamy
 
Présentation de git
Présentation de gitPrésentation de git
Présentation de gitJulien Blin
 
Anypoint platform architecture and components
Anypoint platform architecture and componentsAnypoint platform architecture and components
Anypoint platform architecture and componentsD.Rajesh Kumar
 
Docker and kubernetes
Docker and kubernetesDocker and kubernetes
Docker and kubernetesDongwon Kim
 
Organiser son CI/CD - présentation
Organiser son CI/CD - présentation Organiser son CI/CD - présentation
Organiser son CI/CD - présentation Julien Garderon
 
Getting Started with Kubernetes
Getting Started with Kubernetes Getting Started with Kubernetes
Getting Started with Kubernetes VMware Tanzu
 
Introduction to Helm
Introduction to HelmIntroduction to Helm
Introduction to HelmHarshal Shah
 
Docker Registry V2
Docker Registry V2Docker Registry V2
Docker Registry V2Docker, Inc.
 
Containers Anywhere with OpenShift by Red Hat
Containers Anywhere with OpenShift by Red HatContainers Anywhere with OpenShift by Red Hat
Containers Anywhere with OpenShift by Red HatAmazon Web Services
 
Kubernetes fundamentals
Kubernetes fundamentalsKubernetes fundamentals
Kubernetes fundamentalsVictor Morales
 
Red Bend Software: Optimizing the User Experience with Over-the-Air Updates
Red Bend Software: Optimizing the User Experience with Over-the-Air UpdatesRed Bend Software: Optimizing the User Experience with Over-the-Air Updates
Red Bend Software: Optimizing the User Experience with Over-the-Air UpdatesRed Bend Software
 
Patna MuleSoft Meetup Anypoint Cloudhub 2.0
Patna MuleSoft Meetup Anypoint Cloudhub 2.0Patna MuleSoft Meetup Anypoint Cloudhub 2.0
Patna MuleSoft Meetup Anypoint Cloudhub 2.0shyamraj55
 
Introduction of Kubernetes - Trang Nguyen
Introduction of Kubernetes - Trang NguyenIntroduction of Kubernetes - Trang Nguyen
Introduction of Kubernetes - Trang NguyenTrang Nguyen
 

What's hot (20)

Cloudhub 2.0
Cloudhub 2.0Cloudhub 2.0
Cloudhub 2.0
 
OpenShift Meetup - Tokyo - Service Mesh and Serverless Overview
OpenShift Meetup - Tokyo - Service Mesh and Serverless OverviewOpenShift Meetup - Tokyo - Service Mesh and Serverless Overview
OpenShift Meetup - Tokyo - Service Mesh and Serverless Overview
 
Git l'essentiel
Git l'essentielGit l'essentiel
Git l'essentiel
 
HA Deployment Architecture with HAProxy and Keepalived
HA Deployment Architecture with HAProxy and KeepalivedHA Deployment Architecture with HAProxy and Keepalived
HA Deployment Architecture with HAProxy and Keepalived
 
What is Docker
What is DockerWhat is Docker
What is Docker
 
Présentation de git
Présentation de gitPrésentation de git
Présentation de git
 
Anypoint platform architecture and components
Anypoint platform architecture and componentsAnypoint platform architecture and components
Anypoint platform architecture and components
 
Docker and kubernetes
Docker and kubernetesDocker and kubernetes
Docker and kubernetes
 
Organiser son CI/CD - présentation
Organiser son CI/CD - présentation Organiser son CI/CD - présentation
Organiser son CI/CD - présentation
 
Getting Started with Kubernetes
Getting Started with Kubernetes Getting Started with Kubernetes
Getting Started with Kubernetes
 
Introduction to Helm
Introduction to HelmIntroduction to Helm
Introduction to Helm
 
Docker Registry V2
Docker Registry V2Docker Registry V2
Docker Registry V2
 
Containers Anywhere with OpenShift by Red Hat
Containers Anywhere with OpenShift by Red HatContainers Anywhere with OpenShift by Red Hat
Containers Anywhere with OpenShift by Red Hat
 
Kubernetes fundamentals
Kubernetes fundamentalsKubernetes fundamentals
Kubernetes fundamentals
 
Red Bend Software: Optimizing the User Experience with Over-the-Air Updates
Red Bend Software: Optimizing the User Experience with Over-the-Air UpdatesRed Bend Software: Optimizing the User Experience with Over-the-Air Updates
Red Bend Software: Optimizing the User Experience with Over-the-Air Updates
 
Patna MuleSoft Meetup Anypoint Cloudhub 2.0
Patna MuleSoft Meetup Anypoint Cloudhub 2.0Patna MuleSoft Meetup Anypoint Cloudhub 2.0
Patna MuleSoft Meetup Anypoint Cloudhub 2.0
 
Vagrant
VagrantVagrant
Vagrant
 
Introduction of Kubernetes - Trang Nguyen
Introduction of Kubernetes - Trang NguyenIntroduction of Kubernetes - Trang Nguyen
Introduction of Kubernetes - Trang Nguyen
 
Helm 3
Helm 3Helm 3
Helm 3
 
Virtualbox
VirtualboxVirtualbox
Virtualbox
 

Similar to Munin Monitoring Project

Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Big Brother: Kubernetes Edition
Big Brother: Kubernetes EditionBig Brother: Kubernetes Edition
Big Brother: Kubernetes EditionKnox Anderson
 
Power point presentation on cyber security
Power point presentation on cyber securityPower point presentation on cyber security
Power point presentation on cyber securityVaishnaviGunji
 
Review on Honeypot Security
Review on Honeypot SecurityReview on Honeypot Security
Review on Honeypot SecurityIRJET Journal
 
Eucalyptus on Xen - Build Enterprise Private Cloud | Torry Harris Whitepaper
Eucalyptus on Xen - Build Enterprise Private Cloud | Torry Harris WhitepaperEucalyptus on Xen - Build Enterprise Private Cloud | Torry Harris Whitepaper
Eucalyptus on Xen - Build Enterprise Private Cloud | Torry Harris WhitepaperTorry Harris Business Solutions
 
ICALEPCS 2011: Testing Environments using Virtualization
ICALEPCS 2011: Testing Environments using VirtualizationICALEPCS 2011: Testing Environments using Virtualization
ICALEPCS 2011: Testing Environments using VirtualizationOmer Khalid
 
Android Implementation using MQTT Protocol
Android Implementation using MQTT ProtocolAndroid Implementation using MQTT Protocol
Android Implementation using MQTT ProtocolFatih Özlü
 
Business Case - SSD.pptx
Business Case - SSD.pptxBusiness Case - SSD.pptx
Business Case - SSD.pptxPritam Yadav
 
Network Security Using IDS, IPS & Honeypot
Network Security Using IDS, IPS & HoneypotNetwork Security Using IDS, IPS & Honeypot
Network Security Using IDS, IPS & Honeypotpaperpublications3
 
U2020 virtualization solution overview (eular os, taishan)
U2020 virtualization solution overview (eular os, taishan)U2020 virtualization solution overview (eular os, taishan)
U2020 virtualization solution overview (eular os, taishan)kpphelu
 
Classification of Virtualization Environment for Cloud Computing
Classification of Virtualization Environment for Cloud ComputingClassification of Virtualization Environment for Cloud Computing
Classification of Virtualization Environment for Cloud ComputingSouvik Pal
 
OpenStack Murano introduction
OpenStack Murano introductionOpenStack Murano introduction
OpenStack Murano introductionVictor Zhang
 
The EternalBlue Exploit: how it works and affects systems
The EternalBlue Exploit: how it works and affects systemsThe EternalBlue Exploit: how it works and affects systems
The EternalBlue Exploit: how it works and affects systemsAndrea Bissoli
 
Princeton-NJ-Meetup-Troubleshooting-with-AnyPoint-Monitoring
Princeton-NJ-Meetup-Troubleshooting-with-AnyPoint-MonitoringPrinceton-NJ-Meetup-Troubleshooting-with-AnyPoint-Monitoring
Princeton-NJ-Meetup-Troubleshooting-with-AnyPoint-MonitoringSravan Lingam
 
Part 1: DRS and DPM Implementation in Virtualized Environment, Part 2: Large ...
Part 1: DRS and DPM Implementation in Virtualized Environment, Part 2: Large ...Part 1: DRS and DPM Implementation in Virtualized Environment, Part 2: Large ...
Part 1: DRS and DPM Implementation in Virtualized Environment, Part 2: Large ...Akshay Wattal
 

Similar to Munin Monitoring Project (20)

Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)
 
Big Brother: Kubernetes Edition
Big Brother: Kubernetes EditionBig Brother: Kubernetes Edition
Big Brother: Kubernetes Edition
 
Power point presentation on cyber security
Power point presentation on cyber securityPower point presentation on cyber security
Power point presentation on cyber security
 
Review on Honeypot Security
Review on Honeypot SecurityReview on Honeypot Security
Review on Honeypot Security
 
001 implementation nms_software
001 implementation nms_software001 implementation nms_software
001 implementation nms_software
 
Eucalyptus on Xen - Build Enterprise Private Cloud | Torry Harris Whitepaper
Eucalyptus on Xen - Build Enterprise Private Cloud | Torry Harris WhitepaperEucalyptus on Xen - Build Enterprise Private Cloud | Torry Harris Whitepaper
Eucalyptus on Xen - Build Enterprise Private Cloud | Torry Harris Whitepaper
 
Manual Sophos
Manual SophosManual Sophos
Manual Sophos
 
ICALEPCS 2011: Testing Environments using Virtualization
ICALEPCS 2011: Testing Environments using VirtualizationICALEPCS 2011: Testing Environments using Virtualization
ICALEPCS 2011: Testing Environments using Virtualization
 
Android Implementation using MQTT Protocol
Android Implementation using MQTT ProtocolAndroid Implementation using MQTT Protocol
Android Implementation using MQTT Protocol
 
Business Case - SSD.pptx
Business Case - SSD.pptxBusiness Case - SSD.pptx
Business Case - SSD.pptx
 
Network Security Using IDS, IPS & Honeypot
Network Security Using IDS, IPS & HoneypotNetwork Security Using IDS, IPS & Honeypot
Network Security Using IDS, IPS & Honeypot
 
U2020 virtualization solution overview (eular os, taishan)
U2020 virtualization solution overview (eular os, taishan)U2020 virtualization solution overview (eular os, taishan)
U2020 virtualization solution overview (eular os, taishan)
 
Nsm overview
Nsm overviewNsm overview
Nsm overview
 
Classification of Virtualization Environment for Cloud Computing
Classification of Virtualization Environment for Cloud ComputingClassification of Virtualization Environment for Cloud Computing
Classification of Virtualization Environment for Cloud Computing
 
OpenStack Murano introduction
OpenStack Murano introductionOpenStack Murano introduction
OpenStack Murano introduction
 
lect 1TO 5.pptx
lect 1TO 5.pptxlect 1TO 5.pptx
lect 1TO 5.pptx
 
The EternalBlue Exploit: how it works and affects systems
The EternalBlue Exploit: how it works and affects systemsThe EternalBlue Exploit: how it works and affects systems
The EternalBlue Exploit: how it works and affects systems
 
Wissbi osdc pdf
Wissbi osdc pdfWissbi osdc pdf
Wissbi osdc pdf
 
Princeton-NJ-Meetup-Troubleshooting-with-AnyPoint-Monitoring
Princeton-NJ-Meetup-Troubleshooting-with-AnyPoint-MonitoringPrinceton-NJ-Meetup-Troubleshooting-with-AnyPoint-Monitoring
Princeton-NJ-Meetup-Troubleshooting-with-AnyPoint-Monitoring
 
Part 1: DRS and DPM Implementation in Virtualized Environment, Part 2: Large ...
Part 1: DRS and DPM Implementation in Virtualized Environment, Part 2: Large ...Part 1: DRS and DPM Implementation in Virtualized Environment, Part 2: Large ...
Part 1: DRS and DPM Implementation in Virtualized Environment, Part 2: Large ...
 

Recently uploaded

Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesrafiqahmad00786416
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024The Digital Insurer
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MIND CTI
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Zilliz
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsNanddeep Nachan
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAndrey Devyatkin
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businesspanagenda
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodJuan lago vázquez
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProduct Anonymous
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...DianaGray10
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...Zilliz
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfOverkill Security
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 

Recently uploaded (20)

Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdf
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 

Munin Monitoring Project

  • 1. Munin Monitoring Program ISYS-565 Professor Verma Section 3 – Group A Nadim Ebadi Vincent Viet Dennis Vivar Kurtis Briggs
  • 2. ISYS-565 Group Project Report: Munin Monitoring Program Section 3 – Group A Page 2 Table of Contents Executive Summary........................................................................................................................ 3 Core Concepts................................................................................................................................. 4 Munin Monitoring – Step-By-Step Tutorial ................................................................................. 11 Munin Monitoring – Wireshark Evidence .................................................................................... 17 References..................................................................................................................................... 23
  • 3. ISYS-565 Group Project Report: Munin Monitoring Program Section 3 – Group A Page 3 Executive Summary Our Project: Munin Monitoring – What is Munin? We have utilized the Munin Monitoring tool for our project. The Munin Monitoring tool monitors and collects information across a managed device, such as a user’s computer system. Munin is a open source network resource monitoring tool that can help analyze resource trends and surveys all connected devices, documenting all traffic and monitoring connected devices. Munin presents all the monitored information in graphs through a web interface under the server’s IP address on the Apache web server, in which the web interface can be accessed through the user’s web browser. Some of the monitored information that Munin can monitor include: overall device/system performance, disk latency, disk usage, disk throughput, CPU usage, memory usage, etc. Furthermore, having this monitored information can be useful to determine when a performance problem may occur and to provide visibility, capacity, and utilization of resources within monitored devices. How Munin Functions The Munin Monitoring program functions by using a Master-Nodes architecture. The Munin Master is responsible for gathering data from Munin nodes. It stores this data in RRD (an open source data logging and graphing system) files, and graphs the data on request on the user’s web browser through a web server (Apache) under the server’s IP address. Munin also checks whether the fetched values fall below or go over specific thresholds (warning, critical) and will send alerts if this happens if configured to do so. The Munin Node is installed on all monitored servers/devices. It accepts connections from the Munin Master and runs plugins on demand to gather and report monitored data to the Munin Master. Munin has a big emphasis on plug and play capabilities with over 500 monitoring plugins currently available. A plug and play device or program is one with a specification that facilitates the discovery of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts. A Munin plugin is a simple executable specialized program that is called by Munin nodes to gather and report current data, and describe how the data should be presented. Using Munin and Wireshark In Our Project (Including Findings) In our project, we have utilized Munin through running two virtual machines on the same computer system, with the Munin Master being installed on VM1 and the Munin Node being installed on the cloned VM2. While we mainly used Munin to monitor and collect information from VM2 (the Munin Node) via VM1 (the Munin Master) and the Munin plugins, we have also used Wireshark to analyze the incoming and outgoing transmissions between the two VMs on the computer system. Wireshark is a free, open source packet analyzer program that can be used to intercept, log, and analyze traffic that passes over a network. By utilizing Wireshark, we were able to observe the unencrypted monitored data of VM2 (Munin Node) (e.g. disk latency, disk usage, disk throughput, CPU usage, memory usage, etc.) that was transmitted between the Munin Master (VM1) and the Munin Node (VM2) by following the TCP stream in the Wireshark capture log. After analyzing the Wireshark capture log, we also found that Munin uses TCP as its transport protocol and observed how TCP data packets were transmitted between the VMs via the TCP Three-Way Handshake. 
Wireshark also makes it easier to troubleshoot the network and the monitored device (VM2) when problems arise, identify those issues, gather information from the network and the monitored device, and optimize performance and security.
  • 4. Core Concepts
In our project, we used the Munin Monitoring program to monitor, collect, and display crucial information about a managed device, such as overall device/system performance, disk usage, disk latency, disk throughput, CPU usage, and memory usage. Munin is a free and open-source software application that offers computer system monitoring, network monitoring, and infrastructure monitoring. The Munin Master is installed on the main device, which collects the information gathered by every device that has the Munin Node installed, and the collected information is displayed on an Apache web server.
Note: The following items are required in order to run this Munin Monitoring project successfully: a functioning computer system; VirtualBox (run under a NAT network); the Kali Linux OS (.OVA file) or another Debian-based OS (e.g. Debian 8); the latest Kali Linux packages (via sudo apt-get update && sudo apt-get upgrade); wireless or wired Internet access; the Munin Monitoring program (Munin Master, Munin Node, and Munin plugin packages); the Apache web server; and Wireshark.
How Munin Functions
The Munin monitoring tool is composed of three main components that must be installed: the Munin Master, the Munin Node, and the Munin plugins (the plugins come preinstalled with the Munin Master and Munin Node packages). In our project, the Munin Master was installed on VM1 and the Munin Node on VM2, both Kali Linux VMs running in VirtualBox on the same computer system. VirtualBox acts as our virtualization software, where we set up the VMs and test the Munin monitoring tool. When the Munin Master is installed on a system, that system can collect monitored data from other systems that have the Munin Node installed. When the Munin Node is installed on a system, that system can be monitored by Munin; the node gathers information via Munin plugins and reports it to the system running the Munin Master. After collecting the reported data from the node systems, the master system graphs the collected data and publishes it through a web server such as Apache, where it can be viewed from a web browser under the server's designated IP address.
  • 5. How Munin Was Used in Our Project
For our project, we ran the Munin monitoring tool on two VMs running the Kali Linux OS (Debian-based) in VirtualBox: a main VM (VM1) and a clone of the main VM (VM2), both on a single computer system. To use Munin across the two VMs, we ran a Munin Master instance on VM1 and a Munin Node instance on VM2. VM1 (Munin Master) collects and graphs/displays the monitored data reported by VM2, while VM2 (Munin Node) is the machine being monitored; its node gathers information and reports it to the master (VM1) via Munin plugins. We also configured Munin so that the two VMs could communicate with each other by IP address. This was done in the munin.conf (VM1) and munin-node.conf (VM2) configuration files, where we entered the IP address of each VM in the other VM's configuration so that both machines could reach each other and use Munin. This is clarified further in the Step-by-Step section. We also set up a NAT Network in VirtualBox for both VMs so that each VM is assigned a different IP address and can communicate with the other. VM1 (Munin Master) was assigned the IP address 172.25.1.4 and VM2 (Munin Node) was assigned 172.25.1.5. A ping from VM1 to VM2 was also executed to confirm that communication between the two VMs was established.
(Please Zoom In On Screenshots Below)
VM1 (Munin Master: 172.25.1.4), VM2 (Munin Node: 172.25.1.5)
  • 6. Ping to VM2 (Munin Node) from VM1 (Munin Master)
Once the Munin configuration files were set up properly, VM1 (Munin Master) could collect and graph/display the data from the monitored VM2 (Munin Node). Apache was the web server through which the Munin web interface was served. Overall, the Munin Master (running on VM1) was able to collect and graph/display the system information of VM2 that was monitored by the Munin Node (running on VM2) on the Apache web server. Note that the Apache web server runs under the server's IP address, and this is where the information collected by Munin can be accessed and viewed through a web browser. Wireshark was then used to view the network traffic (TCP data packets and the TCP stream) while Munin monitored VM2, which gathered and reported its information to VM1 (Munin Master) via Munin plugins.
In addition, four core network management components are involved when using Munin: a manager, an agent, a managed device, and an NMS (Network Management System). VM1, which has the Munin Master installed, acts as the manager: the administrative computer system responsible for monitoring and managing a device/system on the network. The Munin Node, installed on VM2, acts as the agent: the software component running on the managed device that gathers information and reports it to the manager (VM1 – Munin Master) via Munin plugins. VM2 itself acts as the managed device, since it is managed by the manager and runs the Munin Node agent. The Munin Monitoring program itself acts as the NMS, the software that runs on the manager and is responsible for monitoring and controlling managed devices throughout the network.
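To make the manager/agent roles concrete, the Munin node's plain-text protocol can be exercised by hand over TCP port 4949. The sketch below assumes VM2's address 172.25.1.5 from our setup, that the node's allow list permits the querying host, and that a telnet or netcat client is available; the plugin names and values returned will vary per system.
    # From VM1 (the manager), connect to the node's agent port:
    telnet 172.25.1.5 4949        # or: nc 172.25.1.5 4949
    # The node answers with a banner line; these commands can then be typed:
    list          # prints the names of the enabled plugins (e.g. cpu, df, memory)
    fetch cpu     # prints the current field values reported by the cpu plugin
    quit          # closes the connection
This is essentially what the Munin Master does automatically on every polling cycle.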
  • 7. The figure below further illustrates how the Munin Monitoring program uses its Master-Nodes architecture. In the figure, the systems with the Munin Node installed (VM2 in our project) act as agents and listen on port 4949. These systems are monitored by the Munin Node, which gathers information via Munin plugins and reports it to the Munin Master system (VM1 in our project). The Munin Master acts as the manager: it collects the reported information from the node systems and graphs the collected data on an Apache web server. (Source: http://guide.munin-monitoring.org/en/latest/architecture/)
Overall, in our project, the Munin Monitoring program is used to monitor, collect, and display information from a computer system (VM2 – Munin Node), such as overall system performance, disk usage, disk latency, disk throughput, CPU usage, and memory usage (via the Munin plugins). By using Munin, we can then view this information as graphs through a web interface (served by Apache) under the server's IP address.
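To illustrate what a Munin plugin looks like, here is a minimal sketch of a plugin script in the style munin-node expects: it prints its graph configuration when called with the argument config, and otherwise prints current values. The file name example_load and the field name load are made up for illustration; real plugins ship under /usr/share/munin/plugins/ and are enabled by symlinking them into /etc/munin/plugins/.
    #!/bin/sh
    # Minimal illustrative Munin plugin (hypothetical name: example_load).
    # "config" mode describes the graph; normal mode prints the current values.
    if [ "$1" = "config" ]; then
        echo "graph_title Example load average"
        echo "graph_vlabel load"
        echo "load.label load (1 min)"
        exit 0
    fi
    # Report the 1-minute load average taken from /proc/loadavg.
    echo "load.value $(cut -d' ' -f1 /proc/loadavg)"
A script like this would be made executable, symlinked into /etc/munin/plugins/, and picked up by munin-node after a restart (see the plugin steps in the tutorial below).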
  • 8. In our project, after VM1 (Munin Master) collects the monitored data reported by VM2 (Munin Node), the data is graphed and can be displayed through the Apache web server in the user's web browser. More specifically, we could observe the Munin Monitoring program in action by opening VM1's web browser and typing the following address into the address bar: server-ip-address/munin (e.g. 127.0.0.1/munin). This can be seen in the screenshots below.
(Please Zoom In On Screenshots Below)
Disk Usage of VM2 (Munin Node)
Disk Latency of VM2 (Munin Node)
  • 9. Disk Throughput of VM2 (Munin Node)
Memory Usage of VM2 (Munin Node)
  • 10. CPU Usage of VM2 (Munin Node)
In the screenshots above, when opening Munin on VM1 (Munin Master) via 127.0.0.1/munin, we can see that VM2 (Munin Node) is being monitored continuously. We can observe the various information reported from VM2 (Munin Node), such as overall system performance, disk usage, disk latency, disk throughput, CPU usage, and memory usage (via the Munin plugins). Note that Munin's monitoring updates every 5 minutes by default. In our Munin simulation, we ran the Munin Monitoring program for a total of 40 minutes. As shown in the screenshots, a "4k Resolution Test Video" on YouTube was playing on VM2 (Munin Node) to act as a stress test while Munin monitored it, and a Wireshark capture was also running on VM2 (Munin Node) during the monitoring. Analyzing the Munin graphs in the screenshots yields the following notable readings recorded during the test: disk usage for /run reached a maximum of 5.87%; disk latency (device IO time) reached a maximum of 37.28 ms; disk throughput reached a maximum of 1019.23 kB read / 730.60 kB write; memory usage for apps reached a maximum of 1.35 GB; and CPU usage for the system reached a maximum of 30.49%. Overall, we used Munin to monitor, collect, and display a variety of information from the monitored VM2 (Munin Node).
  • 11. Munin Monitoring – Step-By-Step Tutorial
Below is a step-by-step tutorial on how a user can install and configure the Munin Monitoring program on a Debian-based system, with the Munin Master installed on VM1 and the Munin Node installed on VM2 (via VirtualBox), and both VMs running on the same computer system. Note: The Kali Linux OS is used for the VMs. Kali Linux is a Debian-based OS, and these instructions can be followed exactly on a Debian-based OS (running Debian 8).
Setting up and configuring VirtualBox and Kali Linux
1. VirtualBox is the virtualization software in which we run the Kali Linux VMs and Munin. First, download and install VirtualBox from: https://www.virtualbox.org/wiki/Downloads
2. Then, download the Kali Linux OS .OVA file for your system from: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-hyperv-image-download/
3. Next, import the Kali Linux .OVA file into VirtualBox. (Note: To log in to the Kali Linux OS, the username is root and the password is toor.)
4. Since we need a cloned Kali Linux VM in addition to the original, right-click the installed Kali Linux VM in VirtualBox, select the "Clone" option, and perform a "Full Clone." Make sure to check "Reinitialize the MAC address of all network cards" during the cloning.
4a. We must also set up a "NAT Network" in VirtualBox so that each VM is assigned a different IP address, can communicate with the other VM, and can use Munin. To do so, open VirtualBox and click File → Preferences → Network → Add a NAT Network.
4b. In the "NAT Network Details" dialog box, name the network "Munin", set the Network CIDR to, for example, 172.25.1.0/24, and click OK.
4c. Now open the settings of both VMs (original and cloned), go to the Network tab, set the "Attached to:" box to "NAT Network", and select the "Munin" NAT network created in step 4b. Click OK. Both VMs should now have different IP addresses and be able to communicate with one another; this can be verified by pinging between the VMs. To check a VM's IP address, open a terminal inside it and run the ifconfig command. (A command-line alternative for steps 4a-4b is sketched below.)
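For reference, the same NAT network can also be created from the host's command line with VBoxManage, which ships with VirtualBox. This is a sketch assuming the network name Munin and the CIDR 172.25.1.0/24 used in our project; the VM name "Kali-VM1" is a placeholder for whatever name appears in your VirtualBox library. The GUI steps above achieve the same result.
    # Create the NAT network used by both VMs (run on the host machine):
    VBoxManage natnetwork add --netname Munin --network "172.25.1.0/24" --enable --dhcp on
    # Attach a VM's first network adapter to that NAT network:
    VBoxManage modifyvm "Kali-VM1" --nic1 natnetwork --nat-network1 Munin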
  • 12.
5. Now we need to update the VMs. On both VMs, open the Kali Linux terminal and run:
sudo apt-get update && sudo apt-get upgrade
6. After the original Kali Linux VM and the cloned Kali Linux VM are installed and updated, we can install the Munin monitoring tool on each VM, with the Munin Master on the main VM (VM1) and the Munin Node on the cloned VM (VM2).
Installing Munin
7. The installation instructions for the Munin Master and Munin Node on a Debian-based OS (Kali Linux) can be found here: http://guide.munin-monitoring.org/en/latest/installation/install.html (Note: We will be installing the "Munin Master" on VM1 and the "Munin Node" on VM2.)
7a. On the main VM (VM1), which will run the Munin Master, open the Kali Linux terminal and run:
sudo apt-get install -y munin
7b. Next, on the cloned VM (VM2), which will run the Munin Node, open the Kali Linux terminal and run:
sudo apt-get install -y munin-node
8. After installing the Munin Master and Munin Node, we must install the required packages/dependencies and configure the master and node so that they can connect to and communicate with each other. Once configured and connected, the Munin Master (running on VM1) will be able to collect and graph the system information of VM2 that is monitored by the Munin Node (running on VM2) and publish it on the Apache web server. We must also make sure the Apache web server is properly installed and configured on the Munin Master. Note that the Apache web server runs under the server's IP address, and this is where the information collected by Munin can be accessed and viewed through a web browser. The package installation and configuration for the Munin Master, Munin Node, and Apache on a Debian-based OS (running Debian 8) can be set up by following the instructions below or by navigating to this link: https://www.digitalocean.com/community/tutorials/how-to-install-the-munin-monitoring-tool-on-debian-8 (A quick verification of the services installed above is sketched below.)
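Before moving on to configuration, it can help to confirm that what was installed in steps 7a and 7b is actually in place and that the node is listening on its default port. These checks are only a suggestion and assume the default Debian/Kali service and package names; the exact output will vary.
    # On VM2 (Munin Node): check the service and the default listening port (4949).
    sudo systemctl status munin-node
    ss -tln | grep 4949
    # On VM1 (Munin Master): the master is driven by cron rather than a daemon,
    # so simply confirm the package is installed.
    dpkg -l munin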
  • 13. Configuring the Munin Master (VM1)
8a. Open the Kali Linux terminal on the Munin Master (VM1) and run the following commands to install the Apache packages and libraries. Apache is the web server on which the Munin monitoring tool will run:
sudo apt-get install -y apache2
sudo apt-get install -y libcgi-fast-perl libapache2-mod-fcgid
8b. Check whether the fcgid module is enabled by typing the following command in the Kali Linux terminal on the Munin Master (VM1):
/usr/sbin/apachectl -M | grep -i cgi
If the fcgid module is enabled, the output should be:
fcgid_module (shared)
If the output is blank, the fcgid module is not enabled. You can enable it with:
sudo a2enmod fcgid
8c. To start modifying the Munin Master on VM1, open the Kali Linux terminal and run the following commands to open the main configuration file (munin.conf):
cd /etc/munin
sudo nano munin.conf
In the munin.conf file, use /var/www/munin for the htmldir. Make sure to uncomment the lines below by removing the # sign that precedes them. The munin.conf file should now look like this:
dbdir /var/lib/munin
htmldir /var/www/munin
logdir /var/log/munin
rundir /var/run/munin
...
tmpldir /etc/munin/templates
Create and chown the htmldir so that it is owned by the munin system user by running the following commands in the Kali Linux terminal of the Munin Master (VM1) (while still in the /etc/munin directory):
sudo mkdir /var/www/munin
sudo chown munin:munin /var/www/munin
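As a quick sanity check on the two commands above, the directory can be listed to confirm it was created with the intended owner; the ownership column should read munin munin (the date and size shown will differ).
    ls -ld /var/www/munin
    # Expected form of the output:
    # drwxr-xr-x 2 munin munin 4096 ... /var/www/munin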
  • 14.
Change the name of the first host tree in munin.conf to MuninMaster. The first host tree in the munin.conf file should now look like this:
[MuninMaster]
    address 127.0.0.1
    use_node_name yes
Save and close the munin.conf file.
8d. Within the same /etc/munin directory, we will modify apache24.conf. To start, open the Kali Linux terminal on the Munin Master (VM1) and run the following command in the /etc/munin directory:
sudo nano apache24.conf
Modify the first line of this file to:
Alias /munin /var/www/munin
Delete the Directory section in this file and replace it with:
<Directory /var/www/munin>
    Require all granted
    Options FollowSymLinks SymLinksIfOwnerMatch
</Directory>
In the last Location section, remove the Require local line and replace it with:
<Location /munin-cgi/munin-cgi-graph>
    Require all granted
    Options FollowSymLinks SymLinksIfOwnerMatch
    ...
</Location>
Save and close the apache24.conf file.
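After editing the Apache configuration, it is worth checking the syntax before restarting, since a failed restart gives little feedback. This assumes the standard tooling installed with the apache2 package on Debian-based systems.
    sudo apache2ctl configtest
    # A healthy configuration prints: Syntax OK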
  • 15.
8e. Restart Munin and Apache by opening the Kali Linux terminal on the Munin Master (VM1) and running:
sudo systemctl restart munin-node
sudo systemctl restart apache2
Configuring the Munin Node (VM2)
9. Open the munin-node.conf file by opening the Kali Linux terminal on the Munin Node (VM2) and running:
sudo nano /etc/munin/munin-node.conf
9a. Towards the middle of the file, look for the allow ^127\.0\.0\.1$ line and modify it so that it reflects the IP address of the Munin Master (VM1). Note that the IP address should be entered in regex form, with the dots escaped (e.g. 172.25.1.4 becomes ^172\.25\.1\.4$). Save and close the munin-node.conf file.
9b. Restart Munin by opening the Kali Linux terminal on the Munin Node (VM2) and running:
sudo systemctl restart munin-node
10. Back on the Munin Master (VM1), open the munin.conf file by opening the Kali Linux terminal and running:
sudo nano /etc/munin/munin.conf
10a. Now we need to insert a host tree in the munin.conf file for the (remote) node. The easiest approach is to copy and modify the host tree of the master. Be sure to replace node-ip-address with the IP address of the node you are adding:
[MuninNode]
    address node-ip-address
    use_node_name yes
Save and close the munin.conf file.
11. Then restart Apache by opening the Kali Linux terminal on the Munin Master (VM1) and running:
sudo systemctl restart apache2
  • 16. Installing Munin Plugins (Optional)
12. On the Munin Master (VM1) and Munin Node (VM2), we can also enable extra plugins for Munin to monitor more data. A Munin plugins package should have been installed when Munin was installed. If it is not installed, run the following command in the terminal of the Munin Master (VM1) and Munin Node (VM2):
sudo apt-get install munin-plugins-extra
12a. To view suggested plugins to install and enable, run the following command in the terminal of the Munin Master (VM1) and Munin Node (VM2):
sudo munin-node-configure --suggest
12b. Then, to install a specific plugin that is not currently in use, run the following command in the terminal of the Munin Master (VM1) and Munin Node (VM2):
sudo apt-get install nameofplugin
12c. Next, create the symbolic link that enables the Munin plugin by running the following command in the terminal of the Munin Master (VM1) and Munin Node (VM2):
sudo ln -s /usr/share/munin/plugins/nameofplugin /etc/munin/plugins
12d. Lastly, restart the Munin Node by running the following command in the terminal of the Munin Master (VM1) and Munin Node (VM2):
sudo systemctl restart munin-node
Once the above steps are completed, the Munin Master (running on VM1) will be able to collect and graph the system information of VM2 that is monitored by the Munin Node (running on VM2) and publish it on the Apache web server. Note that the Apache web server runs under the server's IP address, and this is where the information collected by Munin can be accessed and viewed through a web browser.
Running Munin
13. Now, to observe the Munin Monitoring program functioning, open it on the Munin Master (VM1) by opening VM1's web browser and typing the following address into the address bar: server-ip-address/munin (e.g. 127.0.0.1/munin). Note that Munin's monitoring updates every 5 minutes by default; a way to test a plugin and trigger an immediate update is sketched below.
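For quicker feedback than the 5-minute cycle, two helper commands that ship with the Debian Munin packages can be used. The plugin name cpu below is only an example of a plugin that is typically enabled; this is a sketch rather than a required step.
    # On the node (VM2): run an enabled plugin by hand and inspect its output.
    sudo munin-run cpu
    # On the master (VM1): force an immediate poll/graph/HTML update as the munin
    # user instead of waiting for the 5-minute cron job.
    sudo -u munin munin-cron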
  • 17.
14. After a system shutdown or restart, note that you must run the following commands to get the Munin Monitoring program functioning again.
Run the following commands in the terminal of VM1 (Munin Master):
sudo systemctl restart munin-node
sudo systemctl restart apache2
Run the following command in the terminal of VM2 (Munin Node):
sudo systemctl restart munin-node
Incorporating Wireshark
15. Wireshark can then be incorporated to monitor network traffic as Munin monitors, collects data, and runs on the VirtualBox VMs (VM1 and VM2). You can download and install Wireshark from: https://www.wireshark.org/download.html
15a. Once Wireshark is downloaded and installed, open it and start a capture (preferably on VM2, the Munin Node) while Munin runs on the VirtualBox VMs (VM1: Munin Master and VM2: Munin Node). You will then be able to view and observe the network traffic generated while the Munin Monitoring program is running on the VMs. (A filter that narrows the capture down to Munin's traffic is sketched in the Wireshark Evidence section below.)
Munin Monitoring – Wireshark Evidence
We utilized Wireshark to analyze the network traffic and view the data packets containing monitored data that were sent across the network as Munin monitored VM2 through the Munin Node. We also observed that the data found in the network traffic was being gathered and reported from VM2 (Munin Node – agent) to VM1 (Munin Master – manager) via Munin plugins and ultimately collected by VM1 (Munin Master – manager). Note that the Wireshark capture was run on VM2 (Munin Node) during the Munin monitoring. This can be seen in the Wireshark capture log screenshots below:
(Please Zoom In On Screenshots Below)
  • 18. When observing the Wireshark capture log, we can see that Munin uses TCP as its transport protocol to transmit data packets between VM2 (Munin Node: 172.25.1.5) and VM1 (Munin Master: 172.25.1.4). More specifically, the capture log screenshots above show that when the Munin Master polls the Munin Node (installed on VM2), every 5 minutes by default, TCP data packets containing the information monitored by Munin are sent from the VM2 (Munin Node) source (IP address 172.25.1.5) to the VM1 (Munin Master) destination (IP address 172.25.1.4) over port 4949.
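For anyone reproducing these observations, the relevant packets can be isolated with a simple filter on Munin's default node port, 4949, which matches the port used throughout this report. The interface name eth0 and the output file name munin-capture.pcapng in the tshark line are placeholders for whatever applies on VM2.
    # Capture filter (set before starting the capture) that keeps only Munin traffic:
    port 4949
    # Display filter (typed into Wireshark's filter bar on an existing capture):
    tcp.port == 4949
    # Equivalent command-line capture with tshark, saved to a file for later analysis:
    tshark -i eth0 -f "tcp port 4949" -w munin-capture.pcapng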
  • 19. For example, in capture #1, as part of the TCP three-way handshake, we can see that VM1 (Munin Master, 172.25.1.4) sends a SYN (synchronize) request to VM2 (Munin Node, 172.25.1.5) to attempt to establish a connection with VM2. In capture #2, VM2 (Munin Node, 172.25.1.5) sends a SYN ACK (synchronize acknowledge) response back to VM1 (Munin Master, 172.25.1.4), acknowledging VM1's SYN request to connect. Then, in capture #3, VM1 (Munin Master, 172.25.1.4) sends an ACK (acknowledge) response back to VM2 (Munin Node, 172.25.1.5), confirming that VM1 has received VM2's SYN ACK. This completes the establishment of the connection between VM1 and VM2. In later captures, such as #49-51, we can see that the data monitored by Munin is carried in TCP data packets sent from VM2 (Munin Node, 172.25.1.5) to VM1 (Munin Master, 172.25.1.4) over port 4949. Overall, throughout the Wireshark capture, Munin uses TCP as its transport protocol to transmit data packets between VM2 (Munin Node: 172.25.1.5) and VM1 (Munin Master: 172.25.1.4) during the monitoring.
Furthermore, if we right-click a TCP packet in the Wireshark capture log and follow the TCP stream, we can see all of the data monitored by Munin that was collected from VM2 (Munin Node) and transmitted between VM2 (172.25.1.5) and VM1 (172.25.1.4) during the monitoring. Note that we are able to read this data because it is not encrypted in transit, which is a weakness in terms of information security. This unencrypted data can be seen in the screenshots below:
(Please Zoom In On Screenshots Below)
  • 20. TCP Stream of VM2 (Munin Node)
TCP Stream: Fetching Disk Usage of VM2 (Munin Node)
  • 21. TCP Stream: Fetching Disk Latency and Disk Throughput of VM2 (Munin Node)
TCP Stream: Fetching Memory Usage of VM2 (Munin Node)
  • 22. TCP Stream: Fetching CPU Usage of VM2 (Munin Node)
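The same stream contents shown in these screenshots can also be extracted from a saved capture file on the command line. The sketch below assumes a capture file named munin-capture.pcapng (a placeholder) and dumps the first TCP stream as ASCII, which is where the plain-text plugin output (disk, memory, and CPU values) appears.
    # Follow TCP stream number 0 from the capture and print it as ASCII.
    tshark -r munin-capture.pcapng -q -z follow,tcp,ascii,0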
  • 23. References
Arora, Himanshu. “Wireshark – The Best Open Source Network Packet Analyzer.” ibm.com. IBM DeveloperWorks, 23 Sept. 2012. Web. 29 Apr. 2018. <https://www.ibm.com/developerworks/community/blogs/6e6f6d1b-95c3-46df-8a26-b7efd8ee4b57/entry/wireshark_the_best_open_source_network_packet_analyzer_part_i60?lang=en>.
Cisco Certified Expert. “Management Agent – Network Management.” www.ccexpert.us. Cisco Certified Expert, 6 Mar. 2017. Web. 12 May 2018. <https://www.ccexpert.us/network-management/management-agent.html>.
Decima, Fini. “How to Install the Munin Monitoring Tool on Debian 8.” digitalocean.com. DigitalOcean, Inc., 20 June 2015. Web. 28 Apr. 2018. <https://www.digitalocean.com/community/tutorials/how-to-install-the-munin-monitoring-tool-on-debian-8>.
Inet Daemon. “TCP 3-Way Handshake (SYN, SYN-ACK, ACK).” www.inetdaemon.com. Inet Daemon, 8 Jan. 2016. Web. 11 May 2018. <http://www.inetdaemon.com/tutorials/internet/tcp/3-way_handshake.shtml>.
Mishra, Atul. “Role of Agents in Distributed Network Management: A Review.” www.omicsonline.org. International Journal of Advancements in Technology, 1 Oct. 2010. Web. 11 May 2018. <https://www.omicsonline.org/open-access/role-of-agents-in-distributed-network-management-a-review-0976-4860-1-284-295.pdf.php?aid=35462>.
Munin Monitoring. “Installing Munin.” guide.munin-monitoring.org. Munin, 13 Aug. 2008. Web. 28 Apr. 2018. <http://guide.munin-monitoring.org/en/latest/installation/install.html>.
Munin Monitoring. “Munin’s Architecture.” guide.munin-monitoring.org. Munin, 13 Aug. 2008. Web. 5 May 2018. <http://guide.munin-monitoring.org/en/latest/architecture/index.html#components>.
Munin Monitoring. “Welcome to the Munin Guide.” guide.munin-monitoring.org. Munin, 13 Aug. 2008. Web. 28 Apr. 2018. <http://guide.munin-monitoring.org/en/latest/index.html>.
  • 24.
O’Connor, Tom. “Monitoring with Munin.” dzone.com. DevOps Zone, 6 Jan. 2012. Web. 10 May 2018. <https://dzone.com/articles/monitoring-munin>.
Wallen, Jack. “Review: Munin Network Resource Monitoring Tool.” techrepublic.com. TechRepublic, 9 Dec. 2009. Web. 5 May 2018. <https://www.techrepublic.com/blog/product-spotlight/review-munin-network-resource-monitoring-tool/>.