1.17 Ethernet Operations
Ethernet is one of the Internet’s key technologies. Despite its
advanced age, Ethernet continues to power many of the world’s LANs (local
area networks) and is continually improving to meet future needs for high-
performance networking. Ethernet was developed by engineers Bob
Metcalfe and D.R. Boggs in 1972. Ethernet is a local area network (LAN)
technology that transmits information between computers at speeds of 10
and 100 million bits per second (Mbps). Ethernet has proven itself a
relatively inexpensive, reasonably fast, and widely accepted technology,
making it the most popular LAN technology currently in use. Other LAN
types include Token Ring, Fast Ethernet, Fiber Distributed Data Interface
(FDDI), Localtalk, Ethertalk, and Arcnet. Ethernet is popular due to its low
cost, its multitude of wiring types, and its market acceptance. Ethernet
strikes a good balance between speed, cost, and ease of installation. These
benefits, combined with wide acceptance in the marketplace and the ability
to support a wide variety of network protocols, make Ethernet an ideal
networking technology.
Ethernet is one of the most widely accepted sets of standards for
the control of network signaling, cable access, and cable configuration. The
Ethernet standard is defined by the Institute of Electrical and Electronics
Engineers (IEEE). This industry standard was based on the work of
Metcalfe and Boggs. The IEEE Standard 802.3 defines the rules for
configuring Ethernet as well as specifying how elements in an Ethernet
network interact with each other. Generally, Ethernet specifications define
data transmission protocols and the technical details manufacturers need to
know to build Ethernet products like cards and cables. By adhering to the
IEEE standard, network equipment and network protocols can communicate
efficiently. Over time, Ethernet technology has evolved and matured to the
point of becoming a commercial product. Today’s consumer can expect to
rely on off-the-shelf Ethernet compatible products that work well together.
In an Ethernet network, the adapters share the common cable by
listening before they transmit and transmitting only during a break in the
traffic when the channel is quiet—a technique called Carrier-Sense Multiple
Access with Collision Detection (CSMA/CD). Under the collision detection
part of the scheme, if two stations begin to transmit at the same time, they
detect the collision, stop, and retry after a sufficient time interval. Ethernet
is a shared medium, so there are rules for sending data packets to avoid
conflicts. Devices (nodes) determine when the network is available for
sending packets by checking the wire to see if any other device (node) is
already sending data. When two devices (nodes) are transferring a data
packet at the same time, a collision will result. Minimizing collisions is a
critical part of the design and operation of a network. Overcrowded networks
can result in competition for network bandwidth. This slows performance of
the network from the user’s viewpoint.
CSMA/CD allows only one device on the LAN cabling to transmit
at any given time. When a device wants to transmit, it must first “listen” to
determine whether any data is being transmitted at that time. The
device will continue to monitor the channel until it is free. When the network
is clear, the device proceeds with the transmission. When a collision occurs, both devices
re-transmit. Ethernet uses an algorithm based on random delay times to
determine the proper waiting period between re-transmissions.
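The random-delay rule described above can be sketched in code. The sketch below assumes the classic binary exponential backoff scheme and the 51.2-microsecond slot time of 10 Mbps Ethernet; the function name backoff_delay is illustrative, not part of any standard API.

```python
import random

SLOT_TIME_US = 51.2   # slot time for 10 Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16     # classic Ethernet drops the frame after 16 attempts

def backoff_delay(collision_count):
    """Binary exponential backoff: after the nth collision, wait a random
    number of slot times between 0 and 2**min(n, 10) - 1."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, 10)           # the exponent is capped at 10
    slots = random.randint(0, 2**k - 1)    # pick a random slot count
    return slots * SLOT_TIME_US

# After the first collision a station waits 0 or 1 slot times; after the
# second, 0-3 slot times; the range doubles with each further collision.
```

Because the delay is random, two colliding stations will most likely choose different waiting periods and avoid colliding with each other again.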
In the OSI model, Ethernet technology operates at the physical and
data link layers. Traditional Ethernet supports the data transfers at the
rate of 10 megabits per second (Mbps). Over time, as the performance
needs of LANs increased, the industry created additional Ethernet
specifications for Fast Ethernet and Gigabit Ethernet.
Fast Ethernet. For Ethernet networks that need higher transmission
speeds, the Fast Ethernet standard (IEEE 802.3u) has been established.
This standard raises the Ethernet speed from 10 Mbps to 100 Mbps with
only minimal changes to the existing cable structure. The 100BaseTX
standard has become the most popular due to its close compatibility with the
10BaseT Ethernet standard. Fast Ethernet provides increased
performance of traditional Ethernet while avoiding the need to completely
re-cable existing Ethernet networks.
Gigabit Ethernet. Gigabit Ethernet is an emerging technology that
promises a migration path beyond Fast Ethernet so the next generation of
networks will support even higher data transfer speeds.
A hub is a small rectangular box that joins multiple computers (or
other devices) together to form a single network segment allowing all
devices to communicate directly with each other. “In larger designs, signal
quality begins to degrade as segments exceed their maximum length. Hubs
provide the signal amplification required to allow a segment to be extended
a greater distance. A hub takes any incoming signal and repeats it out all
ports.” Ethernet hubs are by far the most common type, but hubs for other
types of networks (such as USB) also exist.
A hub includes a series of ports that each accept a network cable.
A small 4-port hub networks four computers; larger hubs can contain 8, 12, 16,
or even 24 ports. The hub functions as a place of convergence where data
arrives from one or more directions and is forwarded out in one or more
other directions.
Hubs occupy Layer 1 in the OSI model. At the physical layer, hubs can
support little in the way of sophisticated networking. Hubs do not read any
of the data passing through them and are not aware of a packet’s source or
destination. Essentially, a hub simply receives incoming packets and
broadcasts these packets out to all devices on the network (including the
one that sent the packet).
Star topologies, such as 10BASE-T, require Ethernet hubs. By using
a multi-port twisted pair hub, several point-to-point (PTP) segments can be
joined into one network. One end of the PTP is attached to the hub, the
other to the computer. The hub can also be connected to the backbone
thereby allowing all the twisted pair segments to communicate with all the
hosts on the backbone.
A hub allows users to share Ethernet. “Shared Ethernet” means that
all members of the network compete for bandwidth, and each will only get a
percentage of the available network bandwidth. Ethernet rules limit the
number and type of hubs. The following chart lists the limits per segment
for 10 Mbps Ethernet:

Network Type    Max Nodes Per Segment    Max Distance Per Segment
10BASE-T                  2                      100 m
10BASE2                  30                      185 m
10BASE5                 100                      500 m
10BASE-FL                 2                    2,000 m
Most hubs are stackable. A stackable hub has a special port that can
connect it to another hub to increase the capacity of your network. If you
start with a four-port hub, but eventually have more than four computers on
your network, you can add another four-port hub and connect it to the one
you already have. This ability increases the potential number of computers on
your network.
Technically speaking, there are three types of hubs:
• Passive hubs. These do not amplify the signal of incoming
packets before broadcasting them out to the network.
• Active hubs. These hubs amplify the signal of incoming packets
before broadcasting to the network. This type of hub is
sometimes called a concentrator.
• Intelligent hubs. These hubs have added features in addition
to the features of the active hub. Intelligent hubs are usually
stackable and can usually support remote management
capabilities via SNMP (Simple Network Management Protocol)
and VLAN (Virtual LAN).
A switch is a small device that joins multiple computers together.
Switches operate at layer two (Data Link Layer) of the OSI model and like
hubs, they feature multiple Ethernet ports. But switches also contain some
intelligence that allows them to make decisions on where to send LAN
traffic. As each computer transmits data, the switch examines the
destination address of the data. The switch forwards the data to the
appropriate port on the switch, without sending the same data to the rest of
the devices connected to the switch. This simple action speeds LAN
throughput and greatly reduces LAN congestion. By delivering messages
only to the connected device that it was intended for, switches conserve
network bandwidth and offer generally better performance than hubs.
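The contrast between a hub and a switch can be sketched as follows. This is a simplified model, not real device firmware; the port numbers and MAC-address strings are made up for illustration.

```python
def hub_forward(in_port, ports):
    """A hub works at Layer 1: it repeats every incoming signal out all
    other ports without looking at any addresses."""
    return [p for p in ports if p != in_port]

class Switch:
    """A switch works at Layer 2: it learns which MAC address sits on
    which port and forwards each frame only where it needs to go."""
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                    # MAC address -> port

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # known: one port only
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = Switch(ports=[1, 2, 3, 4])
print(sw.forward(1, "AA", "BB"))   # "BB" not yet learned: flood [2, 3, 4]
print(sw.forward(2, "BB", "AA"))   # "AA" was learned on port 1: [1]
```

The second frame travels out a single port, which is exactly how a switch conserves bandwidth compared with a hub's flooding.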
In practice, the two terms overlap, and products sold as switches can also be
considered hubs. The difference between a hub and a switch is this: a hub is
the place where data comes together; a switch determines how and where the
data is forwarded from the place where it comes together.
Switches are available in a variety of port configurations and support
10 Mbps Ethernet, 100 Mbps Ethernet (Fast Ethernet), or both. There are
two major types of switches. Layer 2 switches operate by examining the
Ethernet address of the data packets, whereas more sophisticated Layer 3
switches examine the destination IP address of the data.
Bridges are simple devices that are typically used to connect two
separate LANs over a private communications link. Bridges read the
destination address of each Ethernet packet—the outermost envelope
around the data—to determine where the data is headed, but they do not
look inside the packet or frame to read IP addresses. If the destination
address isn’t on the local LAN, the bridge hands the data off to the LAN at
the other end of the communications link. A bridge differs from a repeater in
that a repeater simply amplifies the data signal, whereas a bridge manages
traffic between the segments.
Bridges connect different network types (such as Ethernet and Fast
Ethernet) or networks of the same type. As mentioned earlier, bridges map
the Ethernet addresses of the devices (nodes) residing on each network
segment and allow only necessary traffic to pass through the bridge. When
the source and destination are on the same segment, the packet is dropped
(“filtered”); when they are on different segments, the packet is “forwarded”
to the correct segment.
Bridges are also called “store and forward” devices because they look
at the whole Ethernet packet before making filtering or forwarding
decisions. Filtering packets and regenerating forwarded packets enables
bridging technology to split a network into separate collision domains. This
allows for greater distances in the total network design.
Most bridges are self-learning bridges—they determine the Ethernet
addresses of the users on each segment by building a table as packets are
passed through the network. In networks that have many bridges, however,
there is a potential for network loops. A loop
presents conflicting information on which segment a specific address is
located and forces the device to forward all traffic. The Spanning Tree
Algorithm is a software standard (IEEE 802.1d) that describes how switches
and bridges can communicate to avoid loops.
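The loop-avoidance idea can be sketched as a small graph computation. This shows only the end result of spanning tree, not the actual 802.1d protocol, which elects the root and blocks ports by exchanging BPDU messages and comparing path costs.

```python
from collections import deque

def spanning_tree(bridges, links):
    """Elect the bridge with the lowest ID as root, keep the links found
    by a breadth-first search from the root, and block the rest so that
    no loops remain."""
    root = min(bridges)                      # root election: lowest ID wins
    active, visited = set(), {root}
    queue = deque([root])
    while queue:
        b = queue.popleft()
        for u, v in links:
            other = v if u == b else u if v == b else None
            if other is not None and other not in visited:
                visited.add(other)
                active.add((u, v))           # this link carries traffic
                queue.append(other)
    blocked = [l for l in links if l not in active]
    return active, blocked

# Three bridges wired in a triangle: one redundant link must be blocked.
active, blocked = spanning_tree([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
print(blocked)   # [(2, 3)]
```

With link (2, 3) blocked, every address is reachable by exactly one path, so no frame can circulate forever.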
Remote bridges can also be used to connect geographically remote
LANs together. To accomplish this, two bridges would sit connected to their
respective LANs. These bridges would be connected via a leased line (like a
T-1) or fiber optic link.
The Internet is a vast and intricate world. E-mail, downloads,
etc. are all made possible through one technology that is considered the
backbone of the Internet: the router. The Internet is made up of a variety
of network types: LANs, WANs, MANs, etc. A router is a tool that connects
a LAN to a larger Wide Area Network (WAN) such as the Internet or
another remote LAN. A router is a more complex portal device than a
bridge and has a greater capability to examine and direct the traffic it
carries. Routers are somewhat more expensive to buy and require more
attention than bridges, but routers have more robust features that make
them the best choice for a portal between a LAN and a long-distance link.
Routers are specialized computers that send your messages, and those of
every other Internet user, speeding to their destinations along thousands of
pathways. Routers operate at the network layer of the OSI model.
Routers act as a safety barrier between segments and often contain
firewall services to protect the LAN from hackers, snoopers, and other
intruders. When information needs to travel between networks, routers
determine how to get it there. A router has two separate but related jobs:
• It ensures that information doesn’t go where it’s not needed. This is
crucial for keeping large volumes of data from clogging the
connections of unintended recipients.
• Since every device on the network has its own individual and unique
address, the router makes sure that information reaches its intended
destination.
In performing these two jobs, a router is extremely useful in dealing with
two separate computer networks. It joins the two networks, your school or
home network and another network like the Internet, passing information
from one to the other. It also protects the networks from one another,
preventing the traffic on one from unnecessarily spilling over to the other.
Regardless of how many networks are attached, the basic operation and
function of the router remains the same. Since the Internet is one huge
network made up of tens of thousands of smaller networks, routers are an
essential part of its operation.
Routing is the process of finding appropriate paths for data packets
as they traverse a LAN or WAN. A router reads the
destination address of the network packet and determines whether it is on
the same segment of the network cable as the originating station. The
router reads the information contained in each packet or frame, uses
complex network addressing procedures to determine the appropriate
network destination, discards the outer packet or frame, and then
repackages and retransmits the data.
Routers come in a large variety of sizes, ranging from small units
designed for home users to large units designed for hundreds of users.
Small routers often include an Ethernet switch. Routers also include one or
more ports (called WAN ports) for connection to the Internet or
to a private IP network. Most routers include a built-in DHCP (Dynamic Host
Configuration Protocol) server to automatically assign IP addresses to the
computers attached to the LAN. The router acts as a gateway for traffic
between the LAN and the external IP network. The router keeps track of
all traffic and reroutes incoming traffic back to the appropriate client PC.
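The DHCP behavior mentioned above can be sketched as a simple address pool. The class below is a toy model, not a real DHCP implementation (a real server also handles lease lifetimes, renewals, and broadcast discovery), and the network address is an arbitrary example.

```python
import ipaddress

class DhcpPool:
    """Toy DHCP-style allocator: hands out the next free address in a
    pool and remembers which client (by MAC address) holds which lease."""
    def __init__(self, network):
        net = ipaddress.ip_network(network)
        self.free = [str(h) for h in net.hosts()]   # usable host addresses
        self.leases = {}                            # MAC -> IP address

    def request(self, mac):
        if mac in self.leases:
            return self.leases[mac]     # same client: renew existing lease
        ip = self.free.pop(0)           # otherwise hand out the next one
        self.leases[mac] = ip
        return ip

pool = DhcpPool("192.168.1.0/29")          # small example pool
print(pool.request("aa:bb:cc:00:00:01"))   # 192.168.1.1
print(pool.request("aa:bb:cc:00:00:02"))   # 192.168.1.2
print(pool.request("aa:bb:cc:00:00:01"))   # 192.168.1.1 again (renewal)
```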
A common protocol used for the routing of IP packets across an
internetwork is the RIP (Routing Information Protocol). A routing
table is maintained internally by a router about specific routes data may
take. The routing table may be static or dynamic depending on the routing
protocols and configuration in use. Static routing is where routes for data
are determined in advance and are part of the configuration of a router,
rather than being determined dynamically in real-time.
One of the most important tasks of a router is to determine whether a
packet should stay on the LAN or go outside the LAN. Using a subnet mask
such as 255.255.255.0, the router compares the network portion of the
addresses—here, the first three groups of numbers. If the sender and
receiver share those first three groups, the data stays on the same
network. If they do not, the data is forwarded outside the network to other
networks.
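This comparison can be sketched with Python’s standard ipaddress module. The gateway address below is a made-up example, and route() is a simplification: a real router consults a full routing table rather than a single mask.

```python
import ipaddress

def same_network(ip_a, ip_b, mask="255.255.255.0"):
    """With a 255.255.255.0 mask, two hosts are on the same network
    exactly when the first three octets of their addresses match."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{mask}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{mask}").network
    return net_a == net_b

def route(src, dst, gateway="192.168.1.1"):   # gateway: example address
    """Deliver locally when both hosts share a network; otherwise hand
    the packet to the default gateway (the router)."""
    if same_network(src, dst):
        return "deliver locally"
    return f"forward to gateway {gateway}"

print(route("192.168.1.10", "192.168.1.20"))   # deliver locally
print(route("192.168.1.10", "10.0.0.5"))       # forward to gateway ...
```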
Knowing where and how to send a message is the job of a router. Some
simple routers do this and nothing more. Other routers add additional
functions to the jobs they perform. Modern networks, including the
Internet, could not exist without the router.
Brouter: A brouter is a combined bridge and router that operates
without protocol restrictions. It routes data using a protocol it supports
and it bridges data that it cannot route. For example, if the brouter
supports TCP/IP packets, it will route those, but forwarded any other type
of packet to other networks connected to the device (this is the bridging
Transceiver: A transceiver is a combination transmitter/receiver in a
single device. In Ethernet networks, a transceiver is also called a Medium
Attachment Unit (MAU).
Repeater: A repeater receives a signal and repeats the signal along
the next segment. They connect one segment to another. Repeaters remove
unwanted noise in an incoming signal. As the length of a cable or the number
of devices (nodes) exceeds the maximum, signal quality weakens. Using
repeaters, digital signals, even weak ones, can be clearly perceived and
restored, and analog signals can be strengthened.
Gateway: A gateway is a point in the network that acts as an entrance
(gate) to another network. The gateway is equipped for interfacing with
another network that may use different protocols. On the Internet, a
device can be either a gateway node or a host node. Both the computers of
Internet users and the computers that serve pages to users are host nodes.
The computers that control traffic within the school are gateway nodes.
Many times a router and switch provide the gateway function by
sending data destined for locations outside the local network to an outside
system for further processing. A computer server acting as a gateway node
can also act as a proxy server and a firewall server (depending on network
configuration).
Using the default gateway address under TCP/IP settings allows
users to specify the IP address of the designated default router. This
provides a route to use in case there isn’t a more specific route available in
the routing table. This allows all workstations to access services and
resources that exist outside of the LAN.
There are many types of Ethernet networks. One way that Ethernet
networks differ is in the types of cables they require and the speed at
which they transmit data.
10BaseT/10BaseFL is an older form of Ethernet and is still quite
common. It uses twisted pair copper wire and transmits information at a
rate of 10 Mbps. 10BaseT requires unshielded twisted pair copper wires at
or above Category 3. The 10BaseT Ethernet protocol defines how the pins in the
RJ-45 connector must be connected to the four pairs of copper wires in the
cable. All four pairs should be connected even though only two pairs are
used; this facilitates future upgrades that may use all pairs.
ANSI/EIA/TIA 568A and 568B are the standard methods for
connecting the pairs of wires to the pins. 10BaseT requires a star topology
meaning that each computer should be connected to a central point in the
network. The 10BaseT cable between the central wiring point and the wall
jack may be no longer than 90 meters, and the cable from the wall jack to
each computer may be no more than 10 meters (100 meters total). 10BaseFL
uses fiber optics, and has a maximum cable length of 2,000 meters.
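The 568A/568B pin assignments mentioned above can be captured in a small table. The pinouts below reflect the standard color scheme; treat this as a reference sketch, not wiring instructions.

```python
# TIA/EIA-568B pin-to-wire assignments for an RJ-45 connector.
T568B = {1: "white/orange", 2: "orange",
         3: "white/green",  4: "blue",
         5: "white/blue",   6: "green",
         7: "white/brown",  8: "brown"}

# 568A is identical except that the orange and green pairs swap places.
SWAP = {"orange": "green", "green": "orange",
        "white/orange": "white/green", "white/green": "white/orange"}
T568A = {pin: SWAP.get(wire, wire) for pin, wire in T568B.items()}

# 10BaseT and 100BaseTX carry data only on pins 1 and 2 (transmit) and
# 3 and 6 (receive), though all four pairs should still be connected to
# allow future upgrades.
USED_PINS = (1, 2, 3, 6)
print([T568B[p] for p in USED_PINS])
# ['white/orange', 'orange', 'white/green', 'green']
```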
100BaseTX/100BaseFX provides transmission rates of 100 Mbps (10
times that of 10BaseT). It is rapidly replacing 10BaseT in school networks.
100BaseTX provides sufficient speed for large databases, applications used
simultaneously, voice, and video. The cable required for 100BaseTX must be
two twisted pairs of unshielded copper wire rated at or above Category 5
for performance, or Type 1 shielded twisted pair. The total length of
cable connecting two client computers, including the switch, cannot exceed
205 meters, with each cable not exceeding 100 meters. 100BaseTX is closely
compatible with 10BaseT and both versions may be run on the same network
provided that the hubs and switches are designed to accommodate both
protocols simultaneously. 100BaseFX uses fiber optics to deliver
transmission rates of 100 Mbps.
1000BaseT/CX and 1000BaseSX/LX provide transmission rates of
1000 Mbps (1 gigabit per second), or 100 times faster than 10BaseT
networks. Many schools take advantage of this speed in areas of the
network where it is most needed: between servers or between buildings.
1000BaseT/CX use twisted pair copper wires, and 1000BaseSX/LX use fiber
optic cable. 1000BaseT uses four pairs of twisted pair copper wires (only
two pairs are used by 100BaseTX) at or above Category 5, and full duplex
communications. Full duplex allows two computers to simultaneously transmit
and receive data on each pair of wires. Half duplex allows only one computer
to transmit or receive at a time. New installations should use Category 5E
cable or higher when available. 1000BaseT covers distances as great as
100 meters. 1000BaseCX is used for connecting central equipment over
distances of 25 meters or less. 1000BaseSX uses fiber optic cable for
distances between 220 and 550 meters. 1000BaseLX uses fiber optic cable
for distances up to 550 meters.
Common Types of Ethernet:

Name         Speed      Signal       Cable                     Standard
10BaseT      10 Mbps    Electrical   Unshielded twisted        IEEE 802.3
                                     pair (UTP)
10BaseFL     10 Mbps    Light waves  Fiber optic               IEEE 802.3
100BaseTX    100 Mbps   Electrical   Unshielded twisted        IEEE 802.3u
                                     pair (UTP)                (Fast Ethernet)
100BaseFX    100 Mbps   Light waves  Fiber optic               IEEE 802.3u
1000Base-T   1000 Mbps  Electrical   Unshielded twisted        IEEE 802.3ab
                                     pair (UTP)
1000Base-CX  1000 Mbps  Electrical   Shielded copper           IEEE 802.3z
1000Base-LX  1000 Mbps  Light waves  Fiber optic               IEEE 802.3z
1000Base-SX  1000 Mbps  Light waves  Fiber optic               IEEE 802.3z
Categories of Performance:

Ethernet Type      Type of Cable    Category Rating   Notes
10BaseT            Unshielded       Category 3        Found in older network
                   twisted pair                       installations. Should not
                   copper (UTP)                       be used for new
                                                      installations.
10BaseT,           Unshielded       Category 5        Suitable for 100BaseTX.
100BaseTX,         twisted pair                       If used for higher speeds
1000BaseT          copper (UTP)                       such as 1000BaseT, test
                                                      for signal loss.
10BaseT,           Unshielded       Category 5e       Use for 1000BaseT.
100BaseTX,         twisted pair
1000BaseT          copper (UTP)
10BaseT,           Unshielded       Category 6        These standards have not
100BaseTX,         twisted pair                       been finalized.
1000BaseT          copper (UTP)
Just as cities have post offices and mail boxes, local area networks
have specific places where the local and non-local services meet. Bridges,
routers, hubs, switches, and cables are tools that are used to connect Local
Area Networks into larger Wide Area Networks (WANs) such as the Internet.
Operating Systems. An operating system performs tasks for
application programs. An application program provides commands to save a
document, but it does not carry out the task itself. The operating system
actually writes the data to the hard disk.
The most common operating systems are:
• Windows 3.11, 95, 98, ME, NT, 2000, XP
• Apple Mac OS 7, OS 8, OS 9, OS X, OS X Server
• Unix: Linux, Berkeley Software Distribution (BSD), Solaris (Sun
Microsystems), IRIX (SGI)
In the world of education, Windows and Macintosh seem to dominate the
desktop. Operating systems also differ in the manner that data is handled
on disk. Because of these differences, an operating system on one computer
generally cannot read disks formatted on differing operating systems unless
it has been fitted with special translation capabilities. Modern Macintoshes
automatically include such special conversion software. A Windows PC cannot
read Macintosh disks without installing additional software such as
Conversions Plus. This software allows the user to see the contents of the
disk and open and edit its documents. Some software emulators allow foreign
operating systems to run on a computer. VirtualPC is an example of a
software program that allows Windows applications to run on a Macintosh
computer. These programs are used when a Macintosh user must run a
particular Windows-only application program, but otherwise does not need
a Windows computer.
The operating system is tightly coupled with its hardware. Windows
requires specific central processing units (for example, Intel Pentium III or
AMD K6), circuit boards, and other components. The Macintosh operating
system requires different central processing units (PowerPC), circuits
boards, and components than those used by Windows. Some versions of
UNIX run on computers with Intel or compatible processors, while others
require different processors.
Network Operating Systems. A network operating system (NOS) is
operating system software that is designed to support
workstations that are connected on a LAN. A network operating system
provides printer sharing, common file system and database sharing,
application sharing, and the ability to manage security, naming system, etc.
The term platform is often used synonymously with operating system. A
platform is the underlying hardware or software for a system and can be
thought of as the engine that drives it. The server needs server
software to serve out programs to the computers on the network. A network
operating system differs from a standard operating system in that standard
operating systems generally lack the capabilities to provide network services
for hundreds or thousands of people, they may not allow the range of options
for protecting documents and folders, and they often slow down to a crawl
when moving large amounts of data. Network servers always run network
operating systems instead of standard systems. A NOS manages the same
basic services as a standard operating system but differs in that it is
designed to allow large numbers of people to share resources and
provides the tools for managing individuals and groups of users by creating
separate user and group accounts. Network operating systems require fast
computers with large amounts of disk space and random access memory
(RAM). For network management, there are several operating system
choices:
• Windows NT, 2000, XP
• Apple MAC OS X Server
• AppleShare IP
• Unix: Linux
• Novell NetWare
Each NOS requires a specific type of hardware. Windows NT/2000/XP and
NetWare all run on computers with processors from Intel and compatibles.
Macintosh OS X Server and AppleShare IP run on Apple’s PowerPC-based
computers. Linux runs on both types.
Which operating system you choose will depend on several variables:
• Staff training, experience, and familiarity
• Specific software applications necessary to the organization may
not function with all available operating systems.
• Functionality of the operating systems to handle the type of
processing needs of the organization.
Windows NT/2000 are powerful operating systems that may be used
by individuals, but that are primarily intended to manage shared resources
on a large network and are generally classified as network operating
systems. Although Windows 2000 Server is the most recent addition to the
Windows Server line, NT 4.0 still forms the core of many networks. It
organizes users and servers into domains (arbitrary groups of people,
printers, or other resources, which need not be located in a single physical
area of the network). An organization can have one or many domains. A primary domain
controller (PDC) is a special server that holds a single central database of
information for a single domain. This database is called the Security
Accounts Manager (SAM). The SAM contains the usernames and passwords,
other computers within the domain, the group names (such as Science Department)
created by the network administrator, and the members of each group. A
copy of the SAM is also stored on one or more backup domain controllers
(BDC). All changes to the SAM are periodically copied to the backup domain
controllers. When a user logs in to an NT domain, the nearest primary or
backup domain controller authenticates their username and password and
allows them to log in to the network. Domains also include resource servers.
Some network administrators use a user profile to store the way that
each user’s desktop is organized and the application programs that are
available. User profiles are downloaded from the network server to each
user’s computer (Windows PC) only when the user logs on. This provides a
way for teachers and students to move among different computers and still
see the same desktop. These profiles are often used in conjunction with
system policies—a powerful tool for controlling users’ access to their
computers. System policies control whether users can access the Start
menu’s Run command, add items to the Start menu, access the Control Panel,
view available drives on the computer, browse the network, etc. User
profiles and system policies provide a starting place for reducing the
complexity and the support costs associated with desktop computers.
Windows NT 4.0 also allows Macintosh and Windows clients to share
documents and printers; allows people to dial into the network from remote
locations; enforces security to protect documents, client computers, and
other resources; and provides tools to monitor and optimize network and
server performance. Windows NT 4.0 can also host Web sites, automatically
assign Internet Protocol (IP) addresses to clients, provide Domain Name
System (DNS) services, and host large databases.
When multiple domains are present in a network, the domains share
information using something called trust relationships. A trust relationship
defines the resources on one domain that may be accessed by another. For
example: We have three domains with one school assigned to each of the
domains: School A, School B, and School C. If School A domain trusts the
School B domain, then the network administrator at School A can confer
access to its network resources on any members of School B’s networks.
This does not however mean that School B automatically trusts School A.
Trust relationships are not transitive: if School A trusts School B, and
School B trusts School C, the network administrator at School A cannot
grant access to School C.
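The one-way, non-transitive nature of trust relationships can be modeled directly; the domain names below follow the School A/B/C example above.

```python
# Each pair means "the first domain trusts the second" -- a one-way,
# non-transitive relationship, as in Windows NT 4.0.
trusts = {("SchoolA", "SchoolB"), ("SchoolB", "SchoolC")}

def can_grant_access(resource_domain, user_domain):
    """A domain may grant its resources only to users from a domain it
    trusts directly; trust never chains through a third domain."""
    return (resource_domain, user_domain) in trusts

print(can_grant_access("SchoolA", "SchoolB"))   # True: direct trust
print(can_grant_access("SchoolA", "SchoolC"))   # False: not transitive
print(can_grant_access("SchoolB", "SchoolA"))   # False: trust is one-way
```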
Over time, the domain controller and trust systems of NT 4.0 proved
difficult to manage. NT 4.0 developed a reputation for instability and
security problems. Windows 2000 replaced NT 4.0 and has tried to address
these issues.
Windows 2000 replaced the domain controller and trust relationships
with Active Directory (global directories). Active Directory provides a
single, centralized, network-wide listing of all resources on the network. This
listing includes users, servers, clients, and printers. This allows users to log
in to the network once and to see all users, printers, and computers for which
they are authorized no matter where on the network those resources reside.
Under NT 4.0 each login only connected users to one domain or server and
its particular resources. As mentioned earlier, organizations can keep a
single list of all users, servers, and resources instead of separate partial
lists on different servers. Active Directory allows organizations to organize
their resources according to a logical structure rather than by domain. This
logical structure provides for a simpler way to distribute the burdens of
network management. It is no longer necessary to create a separate domain
to assign control of one building’s computers and printers to a local
administrator. Additionally, Active Directory allows the management of
client computers based on their directory entries. Software can be installed
or removed from a client computer, user documents can be required to be
stored on the server rather than the client, and the buttons and other options
available on a user’s desktop can be determined based on the directory entries.
The Active Directory information about each resource is not, however,
stored centrally. Each domain controller stores a portion of the Active
Directory representing the objects within its particular domain. These
different portions are synchronized routinely in a process known as
replication. While Windows 2000 still organizes its resources into domains,
it does not use primary and backup domain controllers to store the domain
information. Instead, the information is distributed among one or more
standard domain servers.
Besides the addition of Active Directory services, Windows 2000
provides improved services for managing disks, system settings, users, and
security policies, and monitoring system usage.
Windows XP Professional is the newest operating system from
Microsoft. The professional edition includes all the features of Windows XP
Home with the added benefits of improved networking, security, and
centralized management features. XP Professional provides several
enhancements: a system restore that returns the system to a
previous, stable state without the loss of data; includes support for
standards for hardware devices such as DVD disks, infrared connections,
and high-speed connections such as FireWire; allows users to make an exact
duplicate of the operating system and applications on one machine and install
them on another machine; enables technical support personnel to view and
control another’s user’s screen (with permission); assigns identical settings
for security, appearance, and management options to groups of users; and
provides security and enhanced performance for wireless networks.
Novell NetWare 6 is a full Web-based network operating system.
This newest version is a major revision of previous versions. Novell’s web
approach allows the system to support Windows, Unix, Linux, and Mac
platforms. For organizations already using Novell servers, the elimination of
the client program alone is worth the upgrade. Novell uses NetWare Directory
Services (NDS) to provide a single list of all network resources regardless of
where they are located on the network. NDS functions similarly to Active
Directory: it collects information about network resources in a centralized
location and then distributes portions of that information across servers at
strategic locations on the network, where they are routinely
synchronized. NDS also maintains a hierarchical organization that makes it
easy for users to locate the resources they need and for network
administrators to keep information current and consistent. NDS has several
advantages over Active Directory: it is older and therefore well tested, and
it runs on many different network operating systems (NT, 2000, UNIX),
whereas Active Directory is built into Windows 2000 and only Windows 2000
clients can participate in it natively (Windows 95 and 98 clients can fully
participate once the appropriate update has been applied). NetWare servers
provide approximately the same range of capabilities as NT, 2000, and XP
servers: document and print sharing; Web serving; database, DNS, and DHCP
(Dynamic Host Configuration Protocol) services; and management of Windows
client computer desktops.
AppleShare IP and Mac OS X Server. AppleShare IP is an easy-to-use
server based on the Mac OS 9 client operating system. It provides
document and printer sharing as well as
Web and mail services for both Windows and Macintosh clients.
Mac OS X Server extends the capabilities of AppleShare IP. This
server is based on open-source technology with some Apple additions. This
version of Apple’s server is the most ambitious attempt to bring open-source
technology to the mass marketplace.
Mac OS X Server has capabilities to host Web sites and provides
Web site development tools. Many will find the Web server easy to set up
and to maintain. When Mac OS X Server is combined with a Mac G3 server, it
becomes one of the best server platforms you can buy on a limited budget.
If you are familiar with UNIX or Linux you’ll have no problem navigating
through this operating system. It includes the Apache HTTP Server, Perl
and Tcl for scripting, and administrators will find several shells like bash
available. The Mac OS X Server supports Sun’s Java and includes a virtual
machine based on Sun’s JDK. Using this server for anything other than
Web serving may produce disappointing results. User management tools are
minimal, and there is no support for RADIUS authentication. Additionally,
this server lacks the major directory services (LDAP, Active Directory,
NetWare Directory Services, or User Database), which means that users must
be entered by hand, one by one. Macintosh OS X also serves QuickTime
streaming video (movies) to clients over the Web and manages client
desktops—so long as they are Macintosh clients—by downloading standard
configurations for users no matter where they log in to the network.
Macintosh OS X servers are especially designed for environments
where servers are managed by non-technical personnel and ease of use is
paramount or where there are small groups of clients. OS X server is easier
to install and manage than Windows or NetWare, but it lacks global
directories and other management features required by large networks.
A server is generally a computer or other device on a network that manages
the resources of the network. There are many different types of servers:
file servers, web servers, mail servers, etc. Servers are often dedicated,
performing no tasks other than network server tasks.
Client/server Ethernet networks require at least one server. The server contains
the network operating system (NOS). Depending on an organization's size
and the demand for network services, many networks have numerous servers
available to balance workload, complete specific processing tasks, etc.
Redundancy is important because failure of the server interrupts network
services; therefore, backup servers, battery backup (uninterruptible power
supplies), and system file backup procedures are crucial.
Hardware requirements vary from organization to organization.
Generally, the server will run 24/7, so it is important to purchase quality
components. Purchase of hardware should be done in accordance with the
type of operating system software that the organization will be using. Not
all hardware components (and some servers) are compatible with all
operating system software. Therefore, it is important to verify that the
system/components that you are purchasing are compatible with the
operating system(s) you will be running. Generally, you will want to purchase
the fastest processor the budget will allow (gigahertz speeds and higher
for production servers), large storage drives (300 gigabytes or higher) with
backup drives in case of failure, dual 10/100/1000 NICs for load balancing,
and as much RAM as possible (at least 512 MB, preferably 1 GB). Finally, a
10/100 Ethernet connector for remote access and system management is
desirable. Budget considerations and other
technology needs generally determine the specifications and configuration of
the options for servers.
When people refer to a piece of hardware as a “server,” they typically
mean that it is running one or more pieces of server software, may or may
not be dedicated to that role, and is possibly made up of higher-grade
components that withstand long periods of continuous operation.
Web Server. At its core, a Web server serves static content to a Web
browser by loading a file from disk and sending it across the network to a
user’s Web browser. This entire exchange is accomplished by the browser
and server using the Hypertext Transfer Protocol (HTTP). Web servers
grew “brains” by relying on additional technologies so they could process
pages before delivering results to the client. The common gateway
interface, or CGI, was the first popular technology that allowed a Web
server to interact with an external computer program. Microsoft’s Active
Server Pages (ASP) is another example.
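The static-content exchange described above can be sketched with Python's standard library. This is a minimal illustration, not any particular product: the page content, loopback address, and handler name are assumptions made for the demo. The "server" holds a page in memory in place of a file on disk, and the client retrieves it over HTTP, exactly the request/response pattern the text describes.

```python
# Minimal sketch of a Web server and browser-side client exchanging
# static content over HTTP (standard library only; details are illustrative).
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body>Hello from a tiny Web server</body></html>"

class StaticHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same static content for every request path.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; the server runs in a background thread.
server = HTTPServer(("127.0.0.1", 0), StaticHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser": fetch the page across the (loopback) network via HTTP.
url = "http://127.0.0.1:%d/index.html" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    body = resp.read()
server.shutdown()

print(body == PAGE)  # True: the client received exactly what the server holds
```

A real Web server would map the request path to files on disk and add the CGI- or ASP-style processing the text mentions before sending the response.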
FTP Server. File Transfer Protocol (FTP) is one of the oldest of Internet
services. FTP makes it possible to move one or more files between
computers while providing access controls, data integrity controls, and
organization as well as transfer control. From downloading the newest
software to transferring documents, a significant percentage of Internet
traffic consists of file transfers. FTP operates as a client and server. The
FTP server handles file security, file organization, and transfer control.
The client, sometimes built into a browser and sometimes a specialized
program, receives the files and places them onto the local hard drive.
Mail Server. E-mail is generally considered the most important service
provided by the Internet, which makes the servers that move and store mail
crucial. Although many people think of mail servers in terms of the Internet,
this service was originally developed for corporate networks (LANs and
WANs).
Application Server. Application servers connect database information
(usually coming from a database server) and the client program (many times
a Web browser). Application servers generally decrease the size and
complexity of client programs, cache and control the data flow for better
performance, and provide security for both the data and user.
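The mediating role described above can be sketched as follows. The "database," the class name, and the method are hypothetical stand-ins invented for this illustration; a real application server would sit between a database server and a Web browser, but the same three ideas apply: the client stays simple, data is cached, and access is controlled on the server side.

```python
# Minimal sketch of an application server mediating between a database
# and a client (all names and data here are illustrative assumptions).
DATABASE = {  # stand-in for tables on a database server
    1: {"name": "amar", "role": "student"},
    2: {"name": "bela", "role": "teacher"},
}

class AppServer:
    def __init__(self, db):
        self.db = db
        self.cache = {}    # cached rows: controls data flow for performance
        self.db_reads = 0  # counts round trips to the "database"

    def get_user(self, user_id, authorized=True):
        # Security: the server, not the client, decides who may see data.
        if not authorized:
            raise PermissionError("access denied")
        # Caching: repeat requests never touch the database again.
        if user_id not in self.cache:
            self.db_reads += 1
            self.cache[user_id] = self.db[user_id]
        return self.cache[user_id]

app = AppServer(DATABASE)
app.get_user(1)
app.get_user(1)      # second request is served from the cache
print(app.db_reads)  # 1: the database was consulted only once
```

The client needs only the one call; everything else (queries, caching, access checks) stays on the server, which is the complexity reduction the text describes.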
Proxy Server. Proxy servers filter requests, improve performance, and
share connections. Filtering requests is the security function and the
original reason for having a proxy server. Proxy servers can inspect all
traffic (in and out) over an Internet connection and determine if there is
anything that should be denied transmission or access. A proxy server can
be used to keep users out of a particular Web site or restrict unauthorized
access to the internal network by authenticating users. Proxy servers also
improve performance through caching. The proxy server analyzes user
requests and determines which, if any, should have the content stored
temporarily for immediate access. Some proxy servers provide a means for
sharing a single Internet connection among a number of workstations. While
this has practical limits, it can be very effective and inexpensive for small
networks.
News Server. News servers are the delivery source for thousands of public
news groups. The servers utilize the Network News Transport Protocol
(NNTP) to interface with other USENET news servers and to distribute
news to anyone using a standard NNTP newsreader. Newsgroups are
notorious for containing offensive and inappropriate materials for schools.
Consequently, few schools, if any, offer this type of server.
Firewall Server. As noted in previous sections, this service may be combined
with router functions and/or proxy functions. This server provides a
measure of protection between the LAN and the WAN.
Administrative Server. The network operating system should be installed
here along with user accounts, security policies, etc. This server is usually
the main server in the domain. It may offer Domain Name Services.
Student/Production Server. This server would be accessible by students
for housing projects, files, etc.
Backup Server. This contains drives that are mirrors of the Administrative
Server. It can also contain important backup files from other servers as well.
Not all servers are accessible to all users. Accessibility is determined
through security levels. Security levels are generally established by the