CHAPTER II
Related Literature
In this chapter, the researchers discuss the different information from articles,
books, the internet, and theses which were used as references for the completion of this thesis.
Conceptual Literature
A network monitoring and diagnosis system periodically records values of
network performance metrics in order to measure network performance, to identify
performance anomalies, and to determine the root causes for the problems, preferably
before customers’ performance is affected. These monitoring and diagnostic capabilities
are critical to today’s computer networks, since their effectiveness determines the quality
of the network service delivered to customers. The most important performance metrics
that are monitored include connectivity, delay, packet loss rate, and available bandwidth.
According to Bradley Mitchell, network monitoring refers to the practice of
overseeing the operation of a computer network using specialized management software
tools. Network monitoring systems are used to ensure availability and overall
performance of computers (hosts) and network services. These systems are typically
employed on larger-scale corporate and university IT networks. [Bradley Mitchell, 1999]
Mitchell adds that a network monitoring system is capable of
detecting and of reporting failures of devices or connections. It normally measures the
processor (CPU) utilization of hosts, the network bandwidth utilization of links, and other
aspects of operation. It will often send messages (sometimes called watchdog messages)
over the network to each host to verify it is responsive to requests. When failures,
unacceptably slow response, or other unexpected behaviour is detected, these systems
send additional messages called alerts to designated locations (such as a management
server, an email address, or a phone number) to notify system administrators. [Bradley
Mitchell, 1999]
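The watchdog-and-alert cycle described above can be sketched in a few lines. This is a minimal illustration rather than the systems Mitchell describes; the host names and the responsiveness check are hypothetical stand-ins for real ping or status probes, and the sketch is written in Python for brevity.

```python
def check_hosts(hosts, is_responsive):
    """Return the subset of hosts that failed their watchdog check."""
    return [h for h in hosts if not is_responsive(h)]

def build_alerts(failed_hosts):
    """Format one alert message per failed host for the administrator."""
    return [f"ALERT: host {h} is not responding" for h in failed_hosts]

# Hypothetical example: pretend only 'server-2' fails to answer its
# watchdog message; a real system would send probes over the network.
hosts = ["server-1", "server-2", "printer-1"]
failed = check_hosts(hosts, is_responsive=lambda h: h != "server-2")
alerts = build_alerts(failed)
```

In a real deployment, the alert messages would be delivered to a management server, an email address, or a phone number, as the text notes.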
In computer networking and computer science, according to Wikipedia, the
free encyclopaedia, network bandwidth and data bandwidth are terms used to refer to
various bit-rate measures, representing the available or consumed data communication
resources expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.)
[Wikipedia, 2000]
Fig. 2.1 Bandwidth
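As a quick illustration of these multiples, each step between the units listed above is a factor of 1000 (networking conventionally uses decimal, not binary, multiples). The helper below is a sketch; the function name is our own.

```python
# Decimal (SI) multiples of bit/s, as listed above.
UNITS = {"bit/s": 1, "kbit/s": 1_000, "Mbit/s": 1_000_000, "Gbit/s": 1_000_000_000}

def to_bits_per_second(value, unit):
    """Convert a bandwidth figure expressed in any listed unit to plain bit/s."""
    return value * UNITS[unit]

# Example: a 150 Mbit/s wireless link expressed in bit/s.
rate = to_bits_per_second(150, "Mbit/s")
```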
According to Paessler, the term "uptime" is used to describe the time a computer system
has been functional. In network terms, it is defined by the availability of a server, of a
device or of a site. In individual computer terms, it is defined by the reliability and
stability of the individual system. Uptime is most often measured in percentages, so an
uptime of 90% for a day would mean that the system worked properly for 1296 minutes
(21.6 hours).[Paessler, 1997]
According to Wikipedia, the term downtime is used to refer to periods when
a system is unavailable. Downtime or outage duration refers to a period of time that
a system fails to provide or perform its primary function. Reliability, availability,
recovery, and unavailability are related concepts. The unavailability is the proportion of a
timespan that a system is unavailable or is offline. This is usually a result of the
system failing to function because of an unplanned event, or because of
routine maintenance. [Wikipedia, 1998]
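The uptime and downtime figures above reduce to simple proportions of a timespan. The sketch below reproduces Paessler's example (90% uptime over a 1440-minute day is 1296 minutes); the function names are our own.

```python
def availability_percent(uptime_minutes, total_minutes):
    """Uptime expressed as a percentage of the observed timespan."""
    return 100.0 * uptime_minutes / total_minutes

def unavailability_percent(uptime_minutes, total_minutes):
    """The proportion of the timespan the system was down or offline."""
    return 100.0 - availability_percent(uptime_minutes, total_minutes)

# Paessler's example: 90% uptime over one day (1440 minutes) is 1296 minutes.
minutes_up = 90 * 1440 // 100
```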
Fig. 2.2 Uptime and Downtime
In computer networks, to download means to receive data to a local system from a
remote system, or to initiate such a data transfer. Examples of a remote system from
which a download might be performed include a web server, FTP server, email server, or
other similar systems.
A download can mean either any file which is offered for downloading or which
has been downloaded, or the process of receiving such a file.
It has become increasingly common for downloading to be mistaken for, confused
with, or incorrectly combined with installing.
The inverse operation, uploading, can refer to the sending of data from a local
system to a remote system such as a server or another client with the intent that the
remote system should store a copy of the data being transferred, or the initiation of such a
process. The words first came into popular usage among computer users with the
increased popularity of bulletin board systems (BBS), facilitated by the widespread
distribution and implementation of dial-up internet access in the 1970s.[Wikipedia, 1975]
Fig. 2.3 Upload and Download
In tabulating statistics for Web site usage, according to Margaret Rouse, a user
session (sometimes referred to as a visit) is the presence of a user with a specific IP
address who has not visited the site recently (typically, anytime within the past 30 minutes). The
number of user sessions per day is one measure of how much traffic a Web site has. A
user who visits a site at noon and then again at 3:30 pm would count as two user sessions.
Other measures of Web site traffic in a given time period are the number of hits
(the number of individual files served), the number of pages served, the number of ad
views, and the number of unique visitors. [Margaret Rouse, 2006]
Fig. 2.4 User Sessions and Details
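Rouse's 30-minute rule can be expressed as a small counting procedure. The sketch below is illustrative only: it assumes sorted visit timestamps (in minutes since midnight) for a single IP address.

```python
SESSION_GAP_MINUTES = 30

def count_sessions(visit_times_minutes):
    """Count user sessions for one IP address, given sorted visit times.
    A visit starts a new session only when more than 30 minutes have
    passed since the previous visit from the same address."""
    sessions = 0
    last_visit = None
    for t in visit_times_minutes:
        if last_visit is None or t - last_visit > SESSION_GAP_MINUTES:
            sessions += 1
        last_visit = t
    return sessions

# Rouse's example: visits at noon (720 min) and 3:30 pm (930 min) count
# as two sessions, because more than 30 minutes separate them.
two_sessions = count_sessions([720, 930])
```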
According to the PC Guide, whenever a hard disk is transferring data over the
interface to the rest of the system, it uses some of the system's resources. One of the more
critical of these resources is how much CPU time is required for the transfer. This is
called the CPU utilization of the transfer. CPU utilization is important because the higher
the percentage of the CPU used by the data transfer, the less power the CPU can devote
to other tasks. When multitasking, too high a CPU utilization can cause slowdowns in
other tasks when doing large data transfers. Of course, if you are only doing a large file
copy or similar disk access, then CPU utilization is less important.[PC Guide, 1996]
Fig. 2.5 CPU Utilization
Value of CPU Usage      Criticality of the Problem
80 and above            Very High
70 to 80                High
Above 60 to 70          Low
60 and below            Nil

Table 2.1 CPU Utilization
Whenever the CPU utilization of a device goes beyond 60 percent, it indicates a fault in
the device.
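Table 2.1 and the 60-percent fault rule can be captured directly in code. The following sketch is only an illustration of the thresholds above, not part of any cited system.

```python
def criticality(cpu_percent):
    """Map a CPU-utilization reading to the criticality levels of Table 2.1."""
    if cpu_percent >= 80:
        return "Very High"
    if cpu_percent >= 70:
        return "High"
    if cpu_percent > 60:
        return "Low"
    return "Nil"

def is_fault(cpu_percent):
    """Per the text above, utilization beyond 60 percent indicates a fault."""
    return cpu_percent > 60
```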
According to Farlex, Fault Detection is discovering a failure in hardware or
software. Fault detection methods, such as built-in tests, typically log the time that the
error occurred and either trigger alarms for manual intervention or initiate automatic
recovery. [Farlex, 1981]
Fig. 2.6 Fault Detection
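Farlex's description of fault detection (log the time the error occurred, then raise an alarm or begin recovery) can be sketched as follows. The component name and the shape of the log are hypothetical examples, not a real built-in test.

```python
from datetime import datetime

def detect_fault(component, healthy, log):
    """If a built-in test reports a failure, record the time it occurred
    in the log and return an alarm message for manual intervention."""
    if healthy:
        return None
    log.append((datetime.now().isoformat(), component))
    return f"ALARM: fault detected in {component}"

# Hypothetical failing component; a real test would probe actual hardware.
fault_log = []
alarm = detect_fault("NIC-eth0", healthy=False, log=fault_log)
```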
Regarding peer-to-peer networking, Bradley Mitchell said that peer-to-peer
networking is an approach to computer networking where all computers share
equal responsibility for processing data. P2P networking (also known simply as
peer networking) differs from client-server networking, where certain devices have
responsibility for “serving” data and other devices consume or otherwise act as “clients”
of those servers. [Bradley Mitchell, 2004]
Fig. 2.7 Peer-to-Peer Connection
According to Jayson Alvich, Hendrik Brink and Kevin Williams, a network
interface card is a computer circuit board or card that is installed in a computer so that it
can be connected to a network. Personal computers and workstations on a local area
network typically contain a NIC specially designed for the LAN transmission technology,
such as Ethernet or token ring. NICs provide a dedicated, full-time connection to a
network. [Jayson Alvich, Hendrik Brink, and Kevin Williams, 2001]
A router is a device that forwards data packets between computer networks,
creating an overlay internetwork. A router is connected to two or more data lines from
different networks. When a data packet comes in one of the lines, the router reads the
address information in the packet to determine its ultimate destination. Then, using
information in its routing table or routing policy, it directs the packet to the next network
on its journey. Routers perform the "traffic directing" functions on the Internet. A data
packet is typically forwarded from one router to another through the networks which
constitute the internetwork until it reaches its destination node.[Wikipedia, 1999]
According to Jay Botelho, director of product management for WildPackets, a
network performance company, a flow is a sequence of packets that has seven identical
characteristics: source IP address, destination IP address, source port, destination port,
layer 3 protocol type, TOS (type of service) byte, and input logical interface. By
providing this specific network usage data and expanding on measurements such as
overall throughput, flow-based data can fill in the gaps left by SNMP.
In packet-based monitoring, the packet traffic is decoded and is analysed as it
passes through a network, yielding more information about the traffic. Botelho explains
that enterprises should establish objectives and should use a network monitoring
technology that meets them. For example, if you just need simple device status, SNMP
may be the right fit. But if your enterprise needs all of the details about what is happening
on the network, a packet-based solution is what you need. [Jay Botelho, 1986]
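Botelho's definition of a flow can be illustrated by grouping packets on the seven characteristics he lists. The dictionary field names and packet values below are hypothetical stand-ins for decoded packet headers.

```python
from collections import Counter

def flow_key(pkt):
    """The seven packet characteristics that define a flow, per Botelho:
    source/destination IP, source/destination port, layer 3 protocol,
    TOS byte, and input logical interface."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"],
            pkt["protocol"], pkt["tos"], pkt["input_interface"])

# Three hypothetical decoded packets; the first two belong to the same flow.
packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 5000,
     "dst_port": 80, "protocol": "TCP", "tos": 0, "input_interface": "eth0"},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 5000,
     "dst_port": 80, "protocol": "TCP", "tos": 0, "input_interface": "eth0"},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2", "src_port": 6000,
     "dst_port": 80, "protocol": "TCP", "tos": 0, "input_interface": "eth0"},
]
flows = Counter(flow_key(p) for p in packets)  # packets counted per flow
```

Grouping on this key is what lets flow-based data report per-conversation usage instead of only device-level totals.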
According to Kim S. Nash and Alyson Behr, network monitoring is far more
strategic than its name implies. It involves watching for problems 24/7, but it's also about
optimizing data flow and access in a complex and a changing environment. Tools and
services are as numerous and as varied as the environments they guard and analyse. You
might think that if the network is up and is running, there is no reason to mess with it.
Why should you care about adding another project for your network managers to scribble
across their whiteboards, already crammed floor-to-ceiling? The reasons to insist on
network monitoring can be summarized on a high level into maintaining the network's
current health, ensuring availability and improving performance. An NMS also can help
you build a database of critical information which you can use to plan for future
growth. [Kim S. Nash and Alyson Behr, 2004]
Modern computer networks tend to be large heterogeneous collections of
computers, switches, routers and a large assortment of other devices. To a large degree,
the growth of such networks is ad-hoc and is based on the current and perceived future
needs of the users. As networks get larger and faster, the job of monitoring and managing
them gets more complex. However, the job of managing computer networks becomes
increasingly more important as society becomes more dependent on computers and the
Internet for everyday business tasks. Network downtime now costs significant amounts of
money [CPR, 2001], so it is important that network and system managers are aware of
everything that is happening on the networks for which they are responsible. Fortunately,
computers are fairly good at watching other computers, which means we can
automate this task to some extent.
In their discussion on the basics of network management, Cisco Systems point out
that the term "network management" means different things to different people [Cisco,
2002]. They give two examples at opposite ends of the spectrum to illustrate this
diversity: a solitary network consultant monitoring network activity, and high-end
workstations generating graphical views of network topologies and traffic. Both of these
examples employ some form of tool to gather, to analyse and to represent information
about a computer network; therefore, in general, network management involves a set of
tools to aid people to monitor and to maintain computer networks.
The International Telecommunications Union (ITU) proposed a network
management model aimed at understanding the major functions of network management
and monitoring software. This management model forms part of the X.700 series of
documents from the ITU and is based on the Open Systems Interconnect (OSI) reference
model. It is in the process of being standardized by the International Organization for
Standardization (ISO). It addresses five conceptual areas, namely: performance management,
configuration management, accounting management, fault management and security
management [Rose, 1991].
These conceptual areas are useful in understanding the goals of network
monitoring and management. For the purposes of this document the
term "monitoring" will be used to refer to systems that simply observe and report on a
network, without taking any corrective action of their own accord.
Technology
A computer system is composed of hardware and software. Computer
hardware is the physical part of a computer, as distinguished from the computer software
that executes within the hardware, while computer software is a set of instructions
designed to perform a specific task.
The proponents used the following software:
VisualBasic.NET (VB.NET) is an object-oriented computer programming language that
can be viewed as an evolution of the classic Visual Basic (VB), implemented on
the .NET Framework. Microsoft currently supplies two main editions of IDEs for
developing in Visual Basic: Microsoft Visual Studio 2012, which is commercial
software, and Visual Basic Express Edition 2012, which is free of charge. The command-line
compiler, VBC.EXE, is installed as part of the freeware .NET Framework
SDK. Mono also includes a command-line VB.NET compiler.
Microsoft SQL Server is a relational database management system developed
by Microsoft. As a database, it is a software product whose primary function is to store
and retrieve data as requested by other software applications, be it those on the same
computer or those running on another computer across a network (including the Internet).
Research Literature
This portion contains the foreign and local studies, the synthesis, the technical
background and also the definition of terms.
Local Studies
The study entitled, “Network Monitoring System for Laboratory of Trinity
University of Asia” [September 08, 2012], conducted by Trinity University of Asia
College of Computing and Information Sciences, stated that in this information age the
network is essential to the organization. Information and the rate at which it can be
obtained and distributed, is key to the economic success of companies in the information
age. This is the reason why the computer network is the central nervous system of most
organizations today. Organizations must have a network that is available and reliable.
Since networks consist of a complicated set of software and hardware components,
reliability comes at the cost of redundancy, diligence, man power and management.
Trinity University of Asia is one of the universities in Quezon City dealing with
many functions in its daily network management and administration. These many
functions such as network management, network administration and system
administration are mostly delegated from campus to department level, to which a network
or subnet has been assigned. But the proposed system only focused on the implementation of a
localized area network connection, mainly for the computer laboratories in the university.
Limited current awareness of the computer laboratory’s network changes in resource
inventory, in resource configuration and in the number of hosted applications at the
laboratory can place unexpected loads on the system/server, which will eventually result
in reduced performance and availability. This is due to dropped packets, which make
fault detection and correction more difficult. Adopting a system like this will
help the school’s computer laboratories in terms of security and order.
The system in the study “Pawikan Network Management System weathermap-admin-2.0.2” is
an open-source, highly scalable network management system for small, medium, and
large-scale networks. It aims to ease up network management of a complex network
using features such as network discovery and automatic configuration.
Pawikan Network Management System is open-source software that lets users
perform network and Internet tasks. It is free for both personal and commercial
use, and is thus a practical choice for those who want an alternative to commercial
network and Internet programs.
Foreign Studies
In the research entitled “NETWORK MONITORING: Using Nagios as an
Example Tool”, conducted by Yusuff, Afeez, the aim was to implement network
monitoring using an open-source network management utility to check the state of
network elements and associated services. Such management tools must have the capability
to detect and to respond to faults in the network by generating appropriate alerts to notify
the system administrator accordingly.
Nagios Core was used as the network management utility for the
demonstration of the monitoring exercise. Theoretical functions of the Nagios Core were
presented, and a concise description of SNMP was addressed in relation to the Nagios
functionalities. Nagios was configured with its plug-ins and was used against a test
laboratory network run in the Linux environment. The test network comprised two
switches, one router and the Nagios server. The results from the laboratory
demonstration exercises are presented in the framework. Furthermore, the
implementation of Nagios for optimal performance can be laborious, but the researchers’
experiences with Nagios and its resourceful outcomes proved to be worthwhile. Nagios is
therefore recommended for use in companies and institutions for monitoring their
networks. Also, the laboratory part of this thesis could be used as a learning module for
students to acquire skills and to identify the importance of network monitoring.
The study entitled “Rice University Design and Evaluation of FPGA Gigabit-
Ethernet/PCI Network Interface Card” [2004], conducted by the College of Information
Science and Technology at the Pennsylvania State University, stated that the continuing
advances in the performance of network servers make it essential for network interface
cards (NICs) to provide more sophisticated services and more data processing. Modern
network interfaces provide fixed functionality and are optimized for sending and
receiving large packets. One of the key challenges for researchers is to find effective
ways to investigate novel architectures of these new services and to evaluate their
performance characteristics in a real network interface platform. This thesis presents the
design and the evaluation of a flexible and configurable Gigabit Ethernet/PCI network
interface card; the FPGA-based NIC includes multiple memories, including SDRAM
SODIMM, for adding new network services. The experimental results at the Gigabit
Ethernet receive interface indicate that the NIC can receive packets of all sizes and store
them in SDRAM at the Gigabit Ethernet line rate. This is promising, since no existing
NIC uses SDRAM due to the SDRAM latency.
And in another study, entitled “INCREASING EFFICIENCY OF
NETWORK INTERFACE CARD” (Amit Uppal, Mississippi State, Mississippi,
December 2007), it is stated that a Network Interface Card (NIC) is used for receiving
packets, processing them, and passing them to the host processor. The NIC uses a
buffer management algorithm to distribute the buffer space among different applications.
This thesis proposes two buffer management algorithms: 1) Fairly Shared Dynamic
Algorithm (FSDA) for UDP-based applications; 2) Evenly Based Dynamic Algorithm
(EBDA) for both UDP and TCP-based applications. For the average network traffic
load, the FSDA improves the packet loss ratio by 18.5% over the dynamic algorithm
(DA) and by 13.5% over the DADT, while EBDA improves it by 16.7% over the DA and
by 11.8% over the DADT. For the heavy network traffic load, the FSDA improves the
packet loss ratio by 16.8% over the DA and by 12.5% over the DADT, while EBDA
improves the packet loss ratio by 16.8% over the DA and by 12.6% over the DADT.
Synthesis
“Network Monitoring System for Laboratory of Trinity University of Asia” is
relevant to the Local Area Network Performance Monitoring because it likewise uses a
localized area network connection and because it also covers the concepts of packet-based
monitoring and fault detection.
The research “Pawikan Network Management System weathermap-admin-2.0.2”
is related to the researchers’ monitoring system in terms of using localized monitoring.
The difference is that that system is open source and aims to ease up network
management of a complex network using features such as network discovery and
automatic configuration.
Regarding “Network Monitoring: Using Nagios as an Example Tool”, it is
also related to the system in terms of monitoring fault detection for packet loss, but the
difference is that the Nagios system used two switch devices and a router, and it was
designed for a more advanced way of network monitoring.
The proposed software differs from the foreign study “Rice University Design and
Evaluation of FPGA Gigabit-Ethernet/PCI Network Interface Card” because the latter
presents the design and evaluation of a flexible and configurable Gigabit Ethernet/PCI
network interface card whose FPGA-based NIC includes multiple memories, including
SDRAM SODIMM, for adding multiple new network services. Thus, it is concerned with
packet-based processing.
“Increasing Efficiency of Network Interface Card” also differs from the
researchers’ system because that foreign study proposed two buffer management
algorithms, which were used under average network traffic load to improve the packet
loss ratio.
Technical Background
The following technical terms were commonly used in developing the software.
Fig. 2.8 Router Device
Complies with IEEE 802.11n and IEEE 802.11g/b standards for 2.4GHz Wireless
LAN
Up to 150Mbps wireless speed.
Supports PPPoE, Dynamic IP and Static IP broadband functions
Supports 64/128-bit WEP, WPA/WPA2 and 802.1x encryption
Supports Virtual Server, Special Application and DMZ host
Supports IP, MAC, URL filtering and port forwarding
Built-in DHCP server/client
WDS mode makes it simple for WLAN expansion
Supports WMM for improved audio and video streaming
Connects to secure network easily and fast using WPS
Supports port bandwidth control
Easy to install and configure
Specifications:
Interface:
o 4*100BaseTX (Audio MD/MDIX) LAN Ports
o 1*100BaseTX (Audio MD/MDIX) WAN Port
Power Supply: 5-9V DC/0.5A
LED: 1*Power, 1*CPU, 1*WAN, 4*LAN
Antenna: 1*5dBi external antenna
Environment:
Operating Temperature: 0°C-40°C (32°F-104°F)
Storage Temperature: -40°C-70°C (-40°F-158°F)
Operating Humidity: 10%~90% non-condensing
Standards: IEEE 802.11n, IEEE 802.11g, IEEE 802.11b
Frequency: 2.4-2.4835GHz
Data Rate:
802.11n: up to 150Mbps
802.11g: up to 54Mbps (dynamic)
802.11b: up to 11Mbps (dynamic)
Wireless Security: 64/128-bit WEP, WPA/WPA2 and 802.1x
Output Power: 20dBm (Max)
Channels: 1-11(North America), 1-13 (General Europe), 1-14 (Japan)
Modulation Type: DBPSK, DQPSK, CCK and OFDM (BPSK, QPSK, 16-
QAM/64-QAM)
Receiver Sensitivity:
135M: -65 dBm@10% PER
54M: -68 dBm@10% PER
11M: -85 dBm@8% PER
6M: -88 dBm@10% PER
11M: -90 dBm@8% PER
WAN Type: Dynamic IP/Static IP/ PPPoE
Wireless: Virtual Server/ WPS/WDS/ Repeater
Default IP Address: 192.168.1.1
Username: admin
Password: admin
Fig. 2.9 Network Interface Card
A network interface controller (NIC) (also known as a network interface card, network
adapter, LAN adapter and by similar terms) is a computer hardware component that
connects a computer to a computer network.
Fig. 2.10 Straight-Through Cable
Straight-through cables are used when connecting Data Terminal Equipment (DTE)
to Data Communication Equipment (DCE), such as computers and routers to modems
(gateways) or hubs (Ethernet Switches).
Operating system: Windows 7
Supported architectures: 32-bit (x86), 64-bit (x64) (WOW)
Hardware requirements: Pentium 4 or higher processor, 1.6 GHz or faster; 1 GB of RAM
(1.5 GB if running in a virtual machine); 5.5 GB of available hard-disk space; 5400 RPM
hard drive; DirectX 9-capable video card running at 1024 x 768 or higher-resolution
display; DVD-ROM drive; UTP cable and RJ45 connectors
Definition of Terms
To fully understand the system, the following terms are defined:
Bandwidth. The amount of data that can be transmitted over a connection in a given
amount of time.
Computer networking. A system in which computers are connected
to share information and resources.
Communication. The transmission of data from one computer to another, or from
one device to another.
Computer. A programmable machine.
CPU Utilization. Refers to a computer's usage of processing resources, or the amount of
work handled by a CPU.
Data. Symbols or signals which are input, stored, and processed by a computer
for output as usable information.
Download. The process in which data is received by your computer from another system.
Downtime. A period during which equipment or a machine is not functional or
cannot work.
Fault detection. Discovering a failure in hardware or software.
Hardware. Refers to the physical parts of a computer and related devices. Internal
hardware devices include motherboards, hard drives, and RAM. External hardware
devices include monitors, keyboards, mice, printers, and scanners.
IP Address. An identifier for a computer or device on a TCP/IP network. Networks
using the TCP/IP protocol route messages based on the IP address of the destination.
Local Area network (LAN). Supplies networking capability to a group of computers in
close proximity to each other such as in an office building, a school, or a home. A LAN is
useful for sharing resources like files, printers, games or other applications.
Media Access Control (MAC) Address. A hardware address that uniquely identifies each
node of a network.
Visual Basic .NET (VB.NET). An object-oriented computer programming
language that can be viewed as an evolution of the classic Visual Basic (VB),
implemented on the .NET Framework. Microsoft currently supplies two main editions
of IDEs for developing in Visual Basic: Microsoft Visual Studio 2012, which
is commercial software, and Visual Basic Express Edition 2012, which is free of charge.
The command-line compiler, VBC.EXE, is installed as part of the freeware .NET
Framework SDK. Mono also includes a command-line VB.NET compiler.
Microsoft SQL Server 2005. A relational database management system developed by
Microsoft. It is a software product whose primary function is to store and retrieve data as
requested by other software applications, be it those on the same computer or those
running on another computer across a network (including the Internet).
Monitoring. The systematic process of observing, tracking, and recording activities or
data for the purpose of measuring program or project implementation and its progress
towards achieving objectives. Information gathered through monitoring is used to
analyze and evaluate all of the components of a project or a department in order to
measure its effectiveness and adjust inputs where necessary.
Networking. The practice of linking two or more computing devices together for the
purpose of sharing data.
Network Interface Card (NIC). A computer circuit board or card that is installed in a
computer so that it can be connected to a network.
Networks. A group of two or more computer systems linked together.
Packets. The unit of data that is routed between an origin and a destination on the Internet
or any other packet-switched network.
Packet-based. A method of data transmission in which small blocks of data are
transmitted rapidly over a channel dedicated to the connection only for the duration of the
packet's transmission.
Peer to Peer. A communications model in which each party has the same capabilities
and either party can initiate a communication session.
Router. A device that forwards data packets along networks. A router is connected to at
least two networks, commonly LANs or WANs.
Sessions & Details. A semi-permanent interactive information interchange, also known
as a dialogue, a conversation or a meeting, between two or more communicating devices,
or between a computer and a user.
SDRAM SODIMM (small outline dual in-line memory module). A type of computer
memory built using integrated circuits. A smaller alternative to a DIMM, roughly half
the size of a regular DIMM; SO-DIMMs are often used in systems that have limited
space.
Software. A general term that describes computer programs.
SNMP (Simple Network Management Protocol). An Internet-standard protocol that
allows you to retrieve management information from a remote device or to set
configuration settings on a remote device.
Upload. Sending a file from your computer to another system.
Uptime. The part of active time during which equipment, a machine, or a system is either
fully operational or ready to perform its intended function.