2. INTRODUCTION
Introduction to Networks:
1. A network is an interconnected set of autonomous computers.
2. Devices, often referred to as nodes, can be computers, printers, or any other
devices capable of sending/receiving data.
Data Communications:
Data communications are the exchange of data between two devices via some form of transmission
medium such as a wire cable.
Fundamental Characteristics of DCS
1. Delivery (the system must deliver data to the correct destination)
2. Accuracy (the system must deliver the data accurately)
3. Timeliness (the system must deliver data in a timely manner)
4. Jitter (variation in the packet arrival time)
3. Components of Data Communication
Data communication systems are made up of five components.
1. Message
2. Sender
3. Receiver
4. Medium
5. Protocol
Fig.1.1 Five components of data communication
4. 1. Message:
This is the information to be communicated. It includes text,
numbers, pictures, audio and video.
2. Sender:
It is the device that sends the data message. It can be a computer,
workstation, telephone handset, video camera and so on.
3. Receiver:
It is the device that receives the message. It can be a computer,
workstation, telephone handset, television and so on.
4. Medium:
It is the physical path by which a message travels from sender to
receiver. Some examples of transmission media include twisted pair
wire, coaxial cable, fiber optic cable, laser or radio waves.
5. Protocol:
It is a set of rules that govern data communications. It represents an
agreement between the communication devices.
5. Data Representation
Information today comes in different forms.
They are
– Text
– Numbers
– Images
– Audio
– Video
Data Flow:
Communication between two devices can be
– simplex
– half-duplex
– full-duplex
6. 1. Simplex
• In simplex mode, the communication is unidirectional, as on a one-way
street.
• Only one of the two devices on a link can transmit; the other can only
receive.
• Keyboards and traditional monitors are examples of simplex devices.
Fig. Simplex
2. Half-Duplex
• In half-duplex mode, each station can both transmit and receive, but not
at the same time.
• When one device is sending, the other can only receive, and vice versa.
• Walkie-talkies and CB (citizens band) radios are both half-duplex systems.
7. Fig. Half-Duplex
3. Full-Duplex
• In full-duplex, both stations can transmit and receive simultaneously.
• One common example of full-duplex communication is the telephone
network.
• When two people are communicating by a telephone line, both can talk
and listen at the same time.
Fig. Full-Duplex
8. NETWORKS
• A network is a set of devices (often referred to as nodes)
connected by communication links.
• A node can be a computer, printer, or any other device
capable of sending and/or receiving data generated by
other nodes on the network.
Network Criteria:
A network must be able to meet a certain number of
criteria.
The most important of these are
– Performance
– Reliability
– Security
9. 1. Performance
• Performance can be measured in many ways, including
transit time and response time.
– Transit time is the amount of time required for a message to
travel from one device to another.
– Response time is the elapsed time between an inquiry and a
response.
• The performance of a network depends on a number of
factors:
– number of users
– type of transmission medium
– capabilities of the connected hardware
– efficiency of the software
• Performance is often evaluated by two networking metrics
– throughput
– delay
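The two metrics above can be illustrated with a simple calculation; the numbers below are made up for the example, not taken from the text.

```python
# Throughput is the amount of data moved per unit time.
# Illustrative (assumed) values: 12 million bits transferred in 2 seconds.
bits_transferred = 12_000_000
elapsed_seconds = 2.0
throughput_bps = bits_transferred / elapsed_seconds
print(throughput_bps / 1e6, "Mbps")  # 6.0 Mbps
```

Delay (latency) is the complementary metric: how long a single bit takes to travel from source to destination.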
10. 2. Reliability
Network reliability is measured by
– frequency of failure
– time it takes a link to recover from a failure
– network's robustness in a catastrophe
3. Security
Network security issues include
– protecting data from unauthorized access
– protecting data from damage and destruction
– implementing policies and procedures for recovery
from breaches and data losses
11. Type of Connection
• A network has two or more devices connected through links.
• A link is a communications pathway that transfers data from
one device to another as in fig.1.3(a).
There are two possible types of connections:
1. point-to-point:
A point-to-point connection provides a dedicated link
between two devices.
Figure 1.3(a) Types of connections: point-to-point
12. 2. multipoint:
A multipoint (also called multidrop) connection is one in
which more than two specific devices share a single link as in
fig.1.3(b).
The capacity of the channel is shared, either spatially or
temporally.
Figure 1.3(b) Types of connections: multipoint
13. Physical Topology
• Physical topology refers to the way in which a network is laid
out physically.
• Two or more devices connect to a link; two or more links form
a topology.
• The topology of a network is the geometric representation of
the relationship of all the links and linking devices to one
another.
• There are four basic topologies possible:
─ Mesh
─ Star
─ Bus
─ Ring
14. Mesh Topology
• Every device has a dedicated point-to-point link to every other
device as in fig.1.4.
• In a mesh topology, we need n (n-1)/2 duplex-mode links.
• To accommodate that many links, every device on the
network must have n-1 input/output(I/O) ports.
Fig.1.4 A fully connected mesh topology
15. Advantages:
• The use of dedicated links guarantees that each connection can carry its
own data load. So traffic problem can be avoided.
• It is robust. If one link becomes unusable, it does not affect the entire
system.
• It gives privacy and security.
• Fault identification and fault isolation are easy.
Disadvantages:
• Since every device must be connected to every other device, installation
and reconnection are difficult.
• The bulk of the wiring can be greater than the available space (in walls,
ceilings, or floors) can accommodate.
• The hardware required to connect each link can be prohibitively expensive.
16. Example:
• One practical example of a mesh topology is the connection of
telephone regional offices in which each regional office needs
to be connected to every other regional office.
• A mesh network has 8 devices. Calculate total number of
cable links and IO ports needed.
Solution:
Number of devices = 8
Number of links = n (n-1)/2
= 8(8-1)/2
= 28
Number of port/device = n-1
= 8-1 = 7
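The worked example above can be expressed as two small functions; the formulas are the ones given in the text.

```python
def mesh_links(n: int) -> int:
    """Duplex links needed in a fully connected mesh: n(n-1)/2."""
    return n * (n - 1) // 2

def mesh_ports_per_device(n: int) -> int:
    """I/O ports each device needs: one per other device, i.e. n-1."""
    return n - 1

# The 8-device example from the text:
print(mesh_links(8), mesh_ports_per_device(8))  # 28 7
```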
17. 2. Star topology
• In a star topology as in fig.1.5, each device has a dedicated
point-to-point link only to a central controller, usually called a
hub.
• The devices are not directly linked to one another.
• Unlike a mesh topology, a star topology does not allow direct
traffic between devices.
Fig. 1.5 A star topology connecting four stations
18. Advantages:
• Less expensive than mesh since each device is connected only
to the hub.
• Installation and reconfiguration are easy.
• Less cabling is needed than mesh.
• Robustness: if one link fails, only that link is affected; all
other links remain active. This also makes fault
identification and isolation easy.
Disadvantages:
• If the hub goes down, the whole system is dead.
19. 3. Bus Topology
• A bus topology is multipoint. One long cable acts as a backbone to link all
the devices in a network.
• Nodes are connected to the bus cable by drop lines and taps. A drop line is
a connection between the device and the main cable.
• A tap is a connector that either splices into the main cable ,or punctures
the sheathing of a cable to create a contact with the metallic core as
shown in fig.1.6.
Figure 1.6 A bus topology connecting three stations
20. Advantages:
• Ease of installation
• Less cabling
Disadvantages:
• Difficult reconnection and fault isolation.
• Difficult to add new devices.
• Signal reflection at the taps can cause degradation in quality.
• Any fault in the bus cable stops all transmission.
21. 4. Ring Topology
• Each device has a dedicated point-to-point connection with
only the two devices on either side of it as in fig.1.7.
• A signal is passed along the ring in one direction, from device
to device, until it reaches its destination.
• Each device incorporates a repeater (regenerates the signal
energy).
Figure 1.7 A ring topology connecting six stations
22. Advantages:
• Easy to install and reconfigure.
• Fault identification is easy.
Disadvantages:
• Unidirectional traffic.
• A break in the simple ring can disable the entire network.
Categories of Networks:
There are three primary categories, they are
– Local area network
– Metropolitan area network
– Wide area network
23. 1. Local Area Network
• A local area network (LAN) is usually privately owned and links
the devices in a single office, building, or campus as shown in
fig.1.8.
• Currently, LAN size is limited to a few kilometers.
• LANs are designed to allow resources to be shared between
personal computers or workstations.
• The resources to be shared can include hardware (e.g., a
printer), software (e.g., an application program), or data.
• The most common LAN topologies are bus, ring, and star.
Early LANs had data rates in the 4 to 16 megabits per second
(Mbps) range. Today, however, speeds are normally 100 or
1000 Mbps.
25. Figure 1.8 An isolated LAN connecting 12 computers to a hub in a closet
26. 2. Metropolitan Area Network
• A metropolitan area network (MAN) is a network with a size
between a LAN and a WAN.
• It normally covers the area inside a town or a city.
• It is designed for customers who need high-speed
connectivity and whose endpoints are spread over a city or
part of a city.
• A good example of a MAN is the part of the telephone
company network that can provide a high-speed DSL line to
the customer.
• Another example is the cable TV network.
28. 3. Wide Area Network
• A wide area network (WAN) provides long-distance transmission of data, image,
audio, and video information over large geographic areas (like country, continent
or even the whole world).
• The switched WAN connects the end systems, which usually comprise a router
(internetworking connecting device) that connects to another LAN or WAN as in
fig.1.9.a.
• The point-to-point WAN is normally a line, leased from a telephone or cable TV
provider that connects a home computer, or a small LAN to an Internet service
provider (ISP) as shown in fig.1.9.b. This type of WAN is often used to provide
Internet access.
• An early example of a switched WAN is X.25, a network designed to provide
connectivity between end users.
• A good example of a switched WAN is the asynchronous transfer mode (ATM)
network, which is a network with fixed-size data unit packets called cells.
30. Internetwork (Interconnection of Networks)
• When two or more networks are connected, they become an
internetwork, or internet as in fig.1.10.
• As an example, assume that an organization has two offices, one on
the east coast and the other on the west coast.
• The established office on the west coast has a bus topology LAN;
the newly opened office on the east coast has a star topology LAN.
• The president of the company lives somewhere in the middle and
needs to have control over the company from her home.
• To create a backbone WAN for connecting these three entities (two
LANs and the president's computer), a switched WAN (operated by
a service provider such as a telecom company) has been leased.
• To connect the LANs to this switched WAN, however, three point-
to-point WANs are required.
31. Figure 1.10 A heterogeneous network made of four WANs and two LANs
32. Internet
• The Internet is a structured, organized system.
• Private individuals as well as various organizations such as
government agencies, schools, research facilities, corporations,
and libraries in more than 100 countries use the Internet. Millions
of people are users.
• Today most end users who want Internet connection use the
services of Internet service providers (ISPs).
• There are international service providers, national service
providers, regional service providers, and local service providers.
• The Internet today is run by private companies, not the
government.
33. Protocols and Standards
• In computer networks, communication occurs between entities in different
systems.
• An entity is anything capable of sending or receiving information.
However, two entities cannot simply send bit streams to each other and
expect to be understood.
• For communication to occur, the entities must agree on a protocol.
Protocols
• A protocol is a set of rules that govern data communications.
• A protocol defines what is communicated, how it is communicated, and
when it is communicated.
• The key elements of a protocol are
1. Syntax
2. Semantics
3. Timing
34. 1. Syntax:
Syntax refers to the structure or format of the data, meaning the order in
which they are presented.
2. Semantics:
─ Semantics refers to the meaning of each section of bits.
─ It defines how a particular pattern is to be interpreted, and
─ what action is to be taken based on that interpretation.
3. Timing:
Timing refers to two characteristics. They are,
─ When data should be sent
─ How fast they can be sent
35. Standards
-Standards provide guidelines to manufacturers, vendors, government
agencies, and other service providers
Data communication standards fall into two categories:
1. de facto (meaning "by fact" or "by convention")
2. de jure (meaning "by law" or "by regulation")
1. De facto:
– Standards that have not been approved by an organized body but have
been adopted as standards through widespread use are de facto
standards.
– De facto standards are often established originally by manufacturers who
seek to define the functionality of a new product or technology.
2. De jure:
– Those standards that have been legislated by an officially recognized body
are de jure standards.
36. NETWORK MODELS
Network Models:
• Computer networks are created by different entities.
• Standards are needed so that these heterogeneous networks can communicate with one another.
• Two standards are
– OSI (Open Systems Interconnection) Model->It defines a seven-layer network.
– Internet Model->It defines a five-layer network.
Layered Tasks:
• We use the concept of layers in our daily life.
• As shown in fig.1.11, let us consider two friends who communicate through postal mail.
• The process of sending a letter to a friend would be complex if there were no services available
from the post office.
Hierarchy:
• According to our analysis, there are three different activities at the sender site and another three
activities at the receiver site.
• The task of transporting the letter between the sender and the receiver is done by the
carrier.
• At the sender site, the letter must be written and dropped in the mailbox before being
picked up by the letter carrier and delivered to the post office.
• At the receiver site, the letter must be dropped in the recipient mailbox before being picked
up and read by the recipient.
38. Services:
• Each layer at the sending site uses the services of the layer
immediately below it. The higher layer uses the
services of the middle layer.
• The middle layer uses the services of the lower layer. The lower
layer uses the services of the carrier.
• The layered model that dominated data communications and
networking literature before 1990 was the Open Systems
Interconnection (OSI) model.
• Everyone believed that the OSI model would become the ultimate
standard for data communications, but this did not happen.
• The TCP/IP protocol suite became the dominant commercial
architecture because it was used and tested extensively in the
Internet; the OSI model was never fully implemented.
39. OSI MODEL
• Established in 1947, the International Standards Organization (ISO) is a
multinational body dedicated to worldwide agreement on international
standards.
• An ISO standard that covers all aspects of network communications is the
Open Systems Interconnection model. It was first introduced in the late
1970s.
• An open system is a set of protocols that allows any two different systems
to communicate regardless of their underlying architecture.
• The purpose of the OSI model is to facilitate
communication between different systems without requiring changes to
the logic of the underlying hardware and software.
• The OSI model is not a protocol; it is a model for understanding and
designing a network architecture that is flexible, robust, and
interoperable. ISO is the organization. OSI is the model.
• The OSI model is a layered framework for the design of network systems
that allows communication between all types of computer systems.
• It consists of seven separate but related layers, each of which defines a
part of the process of moving information across a network.
40. Layered Architecture
Why Layered architecture?
• To make the design process easy by breaking
unmanageable tasks into several smaller and
manageable tasks (by divide-and-conquer approach).
• Modularity and clear interfaces, so as to provide
compatibility between the different providers'
components.
• Ensure independence of layers, so that
implementation of each layer can be changed or
modified without affecting other layers.
• Each layer can be analyzed and tested independently
of all other layers.
41. • The OSI model is composed of seven ordered layers:
– Physical layer
– Data link layer
– Network layer
– Transport layer
– Session layer
– Presentation layer
– Application layer
• Figure 1.12 shows the layers involved when a message is sent from device A to device B.
• As the message travels from A to B, it may pass through many intermediate nodes. These
intermediate nodes usually involve only the first three layers of the OSI model.
• Within a single machine, each layer calls upon the services of the layer just below it.
• Layer 3, for example, uses the services provided by layer 2 and provides services for layer
4.
• Between machines, layer x on one machine communicates with layer x on another
machine. This communication is governed by an agreed-upon series of rules and
conventions called protocols.
• The processes on each machine that communicate at a given layer are called peer-to-peer
processes.
• Communication between machines is therefore a peer-to-peer process using the
protocols appropriate to a given layer.
43. Peer-to-Peer Processes
• At the physical layer, communication is direct. At the higher layers, however,
communication must move down through the layers on device A, across to device B, and
back up through the layers.
• Each layer in the sending device adds its own information to the message it receives
from the layer just above it and passes the whole package to the layer just below it.
• The entire package is converted to a form that can be transmitted to the receiving device.
At the receiving machine, the message is unwrapped layer by layer, with each process
receiving and removing the data meant for it.
• For example, layer 2 removes the data meant for it, and then passes the rest to layer
3. Layer 3 then removes the data meant for it and passes the rest to layer 4, and so
on.
Interfaces between Layers:
• The passing of the data and network information down through the layers of the
sending device and back up through the layers of the receiving device is made
possible by an interface between each pair of adjacent layers.
• Each interface defines the information and services a layer must provide for the layer
above it.
• Well-defined interfaces and layer functions provide modularity to a network
44. Organization of the Layers:
The seven layers can be grouped into three subgroups.
– Network Support Layers
– User Support Layers
– Intermediate Layer
1. Network Support Layers
– The Physical, Data link and Network layers come under this group.
– They deal with the physical aspects of the data such as electrical
specifications, physical connections, physical addressing, and
transport timing and reliability.
2. User Support Layers
– The Session, Presentation and Application layers come under this
group.
– They deal with the interoperability between the software systems.
3. Intermediate Layer
The transport layer is the intermediate layer between the network
support and the user support layers.
46. • The upper OSI layers are almost always implemented in software; lower layers are a
combination of hardware and software, except for the physical layer, which is mostly
hardware.
An exchange using the OSI Model
• In fig.1.13, D7 means the data unit at layer 7, D6 means the data unit at layer 6, and so on.
• The process starts at layer 7 (the application layer), then moves from layer to layer in
descending, sequential order.
• At each layer, a header, or possibly a trailer, can be added to the data unit.
• Commonly, the trailer is added only at layer 2. When the formatted data unit passes through
the physical layer (layer 1), it is changed into an electromagnetic signal and transported along
a physical link.
• Upon reaching its destination, the signal passes into layer 1 and is transformed back into
digital form. The data units then move back up through the OSI layers.
• As each block of data reaches the next higher layer, the headers and trailers attached to it at
the corresponding sending layer are removed, and actions appropriate to that layer are
taken.
• By the time it reaches layer 7, the message is again in a form appropriate to the application
and is made available to the recipient.
Encapsulation:
• A packet (header and data) at level 7 is encapsulated in a packet at level 6. The whole packet
at level 6 is encapsulated in a packet at level 5, and so on.
• In other words, the data portion of a packet at level N - 1 carries the whole packet (data and
header and maybe trailer) from level N. The concept is called encapsulation.
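The nesting described above can be sketched in a few lines. This is a toy model only: the layer names and `-hdr` markers are invented for illustration, not real protocol headers.

```python
def encapsulate(data: str, layers: list[str]) -> str:
    """Wrap the data with one (hypothetical) header per layer.

    Iterating from the top layer down means the lowest layer's
    header ends up outermost, as in OSI encapsulation.
    """
    packet = data
    for layer in layers:                 # layers listed top (L7) to bottom (L2)
        packet = f"[{layer}-hdr]{packet}"
    return packet

def decapsulate(packet: str, layers: list[str]) -> str:
    """Strip headers outermost-first on the way back up."""
    for layer in reversed(layers):       # L2 header is outermost, remove first
        packet = packet.removeprefix(f"[{layer}-hdr]")
    return packet

layers = ["L7", "L6", "L5", "L4", "L3", "L2"]
wire = encapsulate("hello", layers)
print(wire)  # [L2-hdr][L3-hdr][L4-hdr][L5-hdr][L6-hdr][L7-hdr]hello
```

Each layer's packet is carried whole inside the data portion of the layer below it, which is exactly the nesting the text describes.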
47. LAYERS IN THE OSI MODEL
1. Physical Layer
The physical layer is responsible for
movements of individual bits from one hop (node) to
the next.
Figure 1.14 Physical layer
48. Other responsibilities are,
Physical characteristics of interfaces and medium:
• It defines the characteristics of the interface between the devices and the
transmission medium as in fig.1.14.
• It also defines the type of transmission medium.
Representation of Bits:
• To transmit the stream of bits, they must be encoded into signals
(electrical or optical).
• It defines the type of encoding.
Fig. Physical layer
49. Data Rate:
• It defines the transmission rate, i.e. the number of
bits sent per second (the physical layer also defines the
duration of a bit, which is how long it lasts).
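Bit duration is simply the reciprocal of the data rate; the 10 Mbps figure below is an assumed example value.

```python
# Bit duration = 1 / data rate (illustrative values).
data_rate_bps = 10_000_000           # assumed example: 10 Mbps
bit_duration_s = 1 / data_rate_bps   # 1e-07 s, i.e. 0.1 microseconds per bit
print(bit_duration_s)
```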
Synchronization of Bits:
• The sender and receiver must be synchronized at
the bit level.
• In other words, the sender and the receiver
clocks must be synchronized.
50. Line Configuration:
It defines the type of connection between the devices. Two types of
connection are,
– point to point
– multipoint
Physical Topology:
It defines how devices are connected to make a network.
Devices can be connected by
– mesh
– star
– bus
– ring
– hybrid
Transmission Mode
It defines the direction of transmission between devices: simplex,
half duplex and full duplex
51. 2. Data link Layer
The data link layer is responsible for
moving frames from one hop (node) to the
next.
Figure 1.15 Data link layer
53. Other responsibilities are,
Framing
• It divides the stream of bits received from network layer into manageable data units called
frames as in fig.1.15.
Physical Addressing
• If frames are to be distributed to different systems on the network, the data link layer adds a
header to the frame to define the sender and/or receiver of the frame.
• If the frame is intended for a system outside the sender's network, the receiver address is
the address of the device that connects the network to the next one.
Flow control
• If the rate at which the data are absorbed by the receiver is less than the rate at which data
are produced in the sender, the data link layer imposes a flow control
mechanism to avoid overwhelming the receiver.
Error control
• The data link layer adds reliability to the physical layer by adding mechanisms to detect and
retransmit damaged or lost frames.
• It also uses a mechanism to recognize duplicate frames. Error control is normally achieved
through a trailer added to the end of the frame.
Access control
• When two or more devices are connected to the same link, data link layer protocols are
necessary to determine which device has control over the link at any given time.
54. 3. Network Layer
• The network layer is responsible for the
delivery of individual packets as in fig.1.16,
from the source host to the destination host.
• If two systems are connected to the same link,
there is usually no need for a network layer.
• However, if the two systems are attached to
different networks (links) with connecting
devices between the networks (links), there is
often a need for the network layer to
accomplish source-to-destination delivery.
55. Figure 1.16 Network layer
The responsibilities are,
Logical Addressing
─If a packet passes the network boundary, we need another addressing system
to help distinguish the source and destination systems.
─The network layer adds a header to the packet coming from the upper layer
that, among other things, includes the logical addresses of the sender and
receiver.
Routing
─When independent networks or links are connected to create a large network,
the connecting devices (called routers or switches) route or switch the packets to
their final destination.
56. 4. Transport Layer
• The transport layer is responsible for process to process delivery of
the entire message.
• A process is an application program running on a host.
• It ensures that the whole message arrives in order and intact. It
ensures both the error control and flow control at source to
destination level.
Figure 1.17 Transport Layer
57. The responsibilities are,
Service point Addressing
• Computers often run several programs at the same time.
• The transport layer header must therefore include a type of address called a service-
point address (or port address).
• The network layer gets each packet to the correct computer; the transport layer gets
the entire message to the correct process on that computer.
Segmentation and Reassembly
• A message is divided into transmittable segments, with each segment containing a
sequence number as shown in fig.1.17.
• These numbers enable the transport layer to reassemble the message correctly upon
arriving at the destination and to identify and replace packets that were lost in
transmission.
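Segmentation with sequence numbers can be sketched as below. This is an illustrative toy, not a real transport protocol: real protocols also handle loss, retransmission, and much more.

```python
def segment(message: str, size: int) -> list[tuple[int, str]]:
    """Split a message into (sequence number, chunk) segments."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments: list[tuple[int, str]]) -> str:
    """Rebuild the message; segments may arrive in any order."""
    return "".join(chunk for _, chunk in sorted(segments))

segs = segment("process-to-process delivery", 5)
segs.reverse()  # simulate out-of-order arrival
print(reassemble(segs))  # process-to-process delivery
```

The sequence numbers are what let the receiving transport layer restore the original order, as the text states.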
Connection control
• The transport layer can be either connectionless or connection-oriented.
• A connectionless transport layer treats each segment as an independent packet and
delivers it to the transport layer at the destination machine.
• A connection-oriented transport layer makes a connection with the transport layer
at the destination machine first before delivering the packets. After all the data are
transferred, the connection is terminated.
58. Flow control
• Like the data link layer, the transport layer is responsible for
flow control. However, flow control at this layer is performed
end to end rather than across a single link.
Error control
• Like the data link layer, the transport layer is responsible for
error control. However, error control at this layer is performed
process-to-process rather than across a single link.
• The sending transport layer makes sure that the entire
message arrives at the receiving transport layer without error
(damage, loss, or duplication).
• Error correction is usually achieved through retransmission.
59. 5. Session Layer
• It is the network dialog controller.
• It establishes, maintains and synchronizes the
interaction between the communicating
systems.
Figure 1.18 Session Layer
60. Specific responsibilities are,
Dialog Control
• The session layer allows two systems to enter into a
dialog.
• It allows the communication between two processes to
take place in either half-duplex (one way at a time) or
full-duplex (two ways at a time) mode.
Synchronization
• It allows a process to add checkpoints, or
synchronization points as in fig.1.18, to a stream of data.
61. 6. Presentation Layer
• The presentation layer is responsible for the syntax and
semantics of the information exchanged between two
systems as shown in fig.1.19.
Specific responsibilities are,
Translation
• The processes (running programs) in two systems are
usually exchanging information in the form of character
strings, numbers, and so on.
• The information must be changed to bit streams before
being transmitted, because different computers use
different encoding systems.
• The presentation layer is responsible for interoperability
between these different encoding methods.
62. Encryption
• To carry sensitive information, a system must be able to ensure privacy.
• Encryption means that the sender transforms the original information to another
form and sends the resulting message out over the network.
Decryption
• It reverses the original process to transform the message back to its original form.
Compression
• Data compression reduces the number of bits contained in the information.
• Data compression becomes particularly important in the transmission of
multimedia such as text, audio, and video.
Figure 1.19 Presentation Layer
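The encryption/decryption pair described above can be illustrated with a toy symmetric transform. XOR with a repeating key is used here purely because the same operation both encrypts and decrypts; it is NOT a real cipher and the key is an invented example.

```python
from itertools import cycle

def xor_transform(data: bytes, key: bytes) -> bytes:
    """Toy symmetric transform: XOR each byte with a repeating key."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = xor_transform(b"sensitive information", b"key")   # "encrypt"
restored = xor_transform(secret, b"key")                   # "decrypt" reverses it
print(restored)
```

Applying the same transform twice restores the original, which mirrors the encryption/decryption relationship in the text.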
63. 7. Application Layer
• The application layer enables the user, whether human or software,
to access the network as in fig.1.20.
• It provides user interfaces and support for services such as
electronic mail, remote file access and transfer, shared database
management, and other types of distributed information services.
• Specific services provided include X.400 (message-handling services), X.500 (directory
services), and file transfer, access, and management (FTAM).
Figure 1.20 Application Layer
64. Specific responsibilities are,
Network Virtual Terminal (NVT)
• It is a software version of a physical terminal, and it
allows a user to log on to a remote host.
File Transfer, Access, and Management (FTAM)
• It allows a user to access files, to retrieve files from
remote host, and to manage or control files in a
remote computer locally.
Mail Services
• It provides the basis for e-mail forwarding and storage.
Directory Services
• It provides distributed database sources and access for
global information about various objects and services.
65. TCP/IP PROTOCOL SUITE
• The TCP/IP protocol suite was developed prior to the OSI model.
Therefore, the layers in the TCP/IP protocol suite do not exactly match
those in the OSI model.
• The original TCP/IP protocol suite was defined as having four layers:
– host-to-network
– internet
– transport
– application
• TCP/IP protocol suite is made of five layers:
– physical
– data link
– network
– transport
– application
• The first four layers provide physical standards, network interfaces,
internetworking, and transport functions that correspond to the first four
layers of the OSI model as in fig.1.21.
• The three topmost layers in the OSI model are represented in TCP/IP by a
single layer called the application layer.
67. • TCP/IP is a hierarchical protocol made up of interactive modules. The term
hierarchical means that each upper-level protocol is supported by one or more
lower-level protocols.
• At the transport layer, TCP/IP defines three protocols: Transmission Control
Protocol (TCP), User Datagram Protocol (UDP), and Stream Control Transmission
Protocol (SCTP).
• At the network layer, the main protocol defined by TCP/IP is the Internetworking
Protocol (IP).
• At the physical and data link layers, TCP/IP does not define any specific protocol.
It supports all the standard and proprietary protocols.
• A network in a TCP/IP internetwork can be a local-area network or a wide-area
network.
• The transport layer was traditionally represented in TCP/IP by two protocols: TCP and UDP.
• IP is a host-to-host protocol, meaning that it can deliver a packet from one
physical device to another.
• UDP and TCP are transport level protocols responsible for delivery of a message
from a process (running program) to another process.
• A new transport layer protocol, SCTP(Stream Control Transmission Protocol) has
been devised to meet the needs of some newer applications.
• The application layer in TCP/IP is equivalent to the combined session, presentation,
and application layers in the OSI model. Many protocols are defined at this layer.
68. ADDRESSING
Four levels of addresses are used in an
internet employing the TCP/IP protocols:
– Physical (link) addresses
– Logical (IP) addresses
– Port addresses
– Specific addresses
69. 1. Physical address(MAC address)
• The physical address, also known as the link
address, is the address of a node as defined by
its LAN or WAN, as shown in fig.1.22.
• For example, Ethernet uses a 6-byte (48-bit)
physical address that is imprinted on the Network
interface card (NIC).
Figure 1.22 Physical addresses
70. MAC address
• The first half of a MAC address contains the ID
number of the adapter manufacturer.
• The second half of a MAC address represents
the serial number assigned to the adapter by
the manufacturer.
In the example
00:A0:C9:14:C8:29
the prefix 00:A0:C9 indicates that the manufacturer
is Intel Corporation.
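A short sketch of this OUI/serial split (the helper name `split_mac` is illustrative; the Intel mapping is the one quoted in the example):

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a colon-separated MAC address into its OUI (manufacturer ID)
    and serial-number halves."""
    octets = mac.split(":")
    assert len(octets) == 6, "an Ethernet MAC address has 6 bytes"
    oui = "".join(octets[:3]).upper()     # first 3 bytes: manufacturer ID
    serial = "".join(octets[3:]).upper()  # last 3 bytes: adapter serial number
    return oui, serial

print(split_mac("00:A0:C9:14:C8:29"))  # OUI 00A0C9 (Intel), serial 14C829
```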
71. 2. Logical addresses (IP addresses)
• Logical addresses are necessary for universal
communications that are independent of
underlying physical networks as in fig.1.23.
• A logical address in the Internet is currently a 32-
bit address that can uniquely define a host
connected to the Internet.
• No two publicly addressed and visible hosts on
the Internet can have the same IP address.
• The address is written as four decimal numbers
separated by dots; 132.24.75.9 is an example of such an IP address.
73. 3. Port addresses
• Today, computers are devices that can run
multiple processes at the same time.
• The end objective of Internet communication is a
process communicating with another process.
• For these processes to receive data
simultaneously, we need a method to label the
different processes.
• In the TCP/IP architecture, the label assigned to
a process is called a port address as in fig.1.24. A
port address in TCP/IP is 16 bits in length.
74. Fig 1.24 Port addresses
4. Specific Addresses
• Some applications have user-friendly addresses that are designed for that specific application.
• Examples include the e-mail address (for example, forouzan@fhda.edu) and the Uniform
Resource Locator (URL) (for example, www.mhhe.com).
• The first defines the recipient of an e-mail; the second is used to find a document on the World
Wide Web.
76. Physical Links
Transmission media can be defined as the physical path between transmitter and
receiver in a data transmission system.
1. Guided:
– Transmission capacity depends on the medium and the length.
– It also depends on whether the medium is point-to-point or multipoint (e.g.
LAN).
– Examples are co-axial cable, twisted pair, and optical fiber.
2. Unguided:
– Provides a means for transmitting electromagnetic signals but does not guide
them. Example: wireless transmission.
– Characteristics and quality of data transmission are determined by medium and
signal characteristics.
– For guided media, the medium is more important in determining the limitations of
transmission.
– For unguided media, the bandwidth of the signal produced by the transmitting
antenna and the size of the antenna are more important than the medium.
– Signals at lower frequencies are omni-directional (propagate in all directions).
– For higher frequencies, focusing the signals into a directional beam is possible.
– These properties determine what kind of media one should use in a particular
application.
77. Figure 1.19 Classification of the transmission media
Guided transmission media
The most commonly used guided transmission media such as
– Twisted-pair of cable
– Coaxial cable
– Optical fiber
78. 1. Twisted Pair Cable
– The two wires are typically "twisted" together to reduce interference (such as crosstalk)
between the two conductors, as shown in Fig.1.20.
– Typically, a number of pairs are bundled together into a cable by wrapping them in a
tough protective sheath.
Figure 1.20 CAT5 cable (twisted cable)
– Data rates of several Mbps common.
– Spans distances of several kilometers.
– Data rate determined by wire thickness and length.
– In addition, shielding to eliminate interference from other wires impacts signal-to-noise
ratio, and ultimately, the data rate.
– Good, low-cost communication. In reality, many sites already have twisted pair installed
in offices -- existing phone lines!
79. Typical characteristics:
1. Twisted-pair can be used for both analog and digital communication.
2. The data rate that can be supported over a twisted-pair is inversely proportional
to the square of the line length.
3. A maximum transmission distance of 1 km can be achieved for data rates up to 1
Mb/s.
4. For analog voice signals, amplifiers are required about every 6 km, and for digital
signals, repeaters are needed about every 2 km.
5. To reduce interference, the twisted pair can be shielded with metallic braid.
This type of wire is known as Shielded Twisted-Pair (STP) and the other form is
known as Unshielded Twisted-Pair (UTP).
Uses:
1. The oldest and the most popular use of twisted pair is in telephony. (Category 6
cable is the 6th generation of twisted pair cable.)
2. In LAN it is commonly used for point-to-point short distance communication (say,
100m) within a building or a room.
80. 2. Coaxial Cable
2.1 Base band Coaxial
1. With coax, the medium consists of a copper core surrounded by insulating
material and a braided outer conductor as shown in Fig. 1.21.
2. The term baseband indicates digital transmission (as opposed to broadband
analog).
Figure 1.21 Co-axial cable
3. Physical connection consists of metal pin touching the copper core.
4. There are two common ways to connect to a coaxial cable:
– With vampire taps, a metal pin is inserted into the copper core. A special tool drills a hole into
the cable, removing a small section of the insulation, and a special connector is screwed into
the hole. The tap makes contact with the copper core.
– With a T-junction, the cable is cut in half, and both halves connect to the T-junction. A T-
connector is analogous to the signal splitters used to hook up multiple TVs to the same cable
wire.
81. Characteristics:
1. Co-axial cable can be used for both analog and digital signaling.
2. In baseband LANs, the signaling rates lie in the range of 1 kHz to 20 MHz over distances in the
range of 1 km.
3. Co-axial cables typically have a diameter of 3/8".
4. Coaxial cables are used both for baseband and broadband communication.
5. In broadband signaling, signal propagates only in one direction, in contrast to propagation in
both directions in baseband signaling.
6. Broadband cabling uses either a dual-cable scheme or a single-cable scheme with a head end
to facilitate flow of signal in one direction. Because of the shielded, concentric construction,
co-axial cable is less susceptible to interference and crosstalk than the twisted-pair.
7. For long distance communication, repeaters are needed for every kilometer.
8. Data rate depends on physical properties of cable, but 10 Mbps is typical.
Uses:
1. One of the most popular uses of co-axial cable is in cable TV (CATV) for the distribution of
TV signals. RG-6 is cable television (CATV) distribution coax cable. RG 59 is good for lower
frequency signals (anything under about 50 MHz). That makes it a good choice for a closed
circuit television (CCTV) video surveillance system
2. Another important use of co-axial cable is in LANs.
82. 2.2 Broadband Coaxial
1. The term broadband refers to analog
transmission over coaxial cable.
2. The technology:
– Typically bandwidth of 300 MHz, total data rate of
about 150 Mbps.
– Operates at distances up to 100 km (metropolitan
area!).
– Uses analog signaling.
– Requires amplifiers to boost signal strength; because
amplifiers are one way, data flows in only one
direction.
83. 3. Fiber Optics
1. In fiber optic technology, the medium consists of a hair-width strand (filament) of silica or glass, and the
signal consists of pulses of light.
2. For instance, a pulse of light means "1", lack of a pulse means "0".
3. It has a cylindrical shape and consists of three concentric sections: the core, the cladding, and the jacket as
shown in Fig.1.22.
Figure 1.22 Optical Fiber
4. The core, innermost section consists of a single solid dielectric cylinder of diameter d1 and of refractive index
n1.
5. The core is surrounded by a solid dielectric cladding of refractive index n2 that is less than n1.
6. As a consequence, the light is propagated through multiple total internal reflections.
7. The core material is usually made of ultra pure fused silica or glass and the cladding is either made of glass or
plastic.
8.The cladding is surrounded by a jacket made of plastic. The jacket is used to protect against moisture,
abrasion, crushing and other environmental hazards.
84. Three components are required:
1. Fiber medium: Current technology carries light pulses for tremendous distances (e.g., 100s of
kilometers) with virtually no signal loss.
2. Light source: typically a Light Emitting Diode (LED) or laser diode. Running current through
the material generates a pulse of light.
3. A photo diode light detector, which converts light pulses into electrical signals.
Advantages:
1. Very high data rate, low error rate.
2. Difficult to tap, which makes unauthorized taps hard as well. This contributes to the
higher reliability and security of this medium.
3. Much thinner (per logical phone line) than existing copper circuits.
4. Not susceptible to electrical interference (lightning) or corrosion (rust).
5. Greater repeater distance than coax.
Disadvantages:
1. Expensive.
2. One-way channel. Two fibers needed to get full duplex (both ways) communication.
85. Optical Fiber works in different types of modes:
They are
1. Multi-Mode Fiber (MMF) - the core and cladding diameter lies in the range 50-
200μm(1mm=1000μm) and 125-400μm, respectively.
2. Single-Mode Fiber (SMF) - the core and cladding diameters lie in the range 8-12μm and 125μm,
respectively. Single-mode fibers are also known as Mono-Mode Fiber.
Moreover, both single-mode and multi-mode fibers can have two types:
1. Step index - the refractive index of the core is uniform throughout and at the core cladding
boundary there is an abrupt change in refractive index
2. Graded index - the refractive index of the core varies radially from the centre to the core-
cladding boundary, decreasing from n1 to n2 in a gradual manner. Fig 1.23 shows the optical fiber transmission
modes.
86. Characteristics:
1. When light is applied at one end of the optical fiber core, it reaches the other end
by means of total internal reflection because of the choice of refractive index of
core and cladding material (n1 > n2).
2. The light source can be either light emitting diode (LED) or injection laser diode
(ILD). These semiconductor devices emit a beam of light when a voltage is applied
across the device.
3. At the receiving end, a photodiode can be used to detect the signal- encoded light.
4. Single-mode fiber allows longer distances without repeater.
5. For multi- mode fiber, the typical maximum length of the cable without a repeater
is 2km, whereas for single-mode fiber it is 20km.
Fiber Uses:
1. It has smaller diameter, lighter weight, low attenuation, immunity to
electromagnetic interference.
2. useful for long-distance telecommunications.
3. Fiber optic cables are also used in high-speed LAN applications.
4. Multi-mode fiber is commonly used in LAN.
87. Unguided Transmission
1. Unguided transmission is used when running a physical cable (either fiber or copper) between two end
points is not possible.
2. Radio frequency (RF) is a rate of oscillation in the range of around 3 kHz to 300 GHz, which corresponds
to the frequency of radio waves.
3. Infrared signals typically used for short distances(frequency range 430 THz to 300 GHz)
4. Microwave signals -an electromagnetic wave in the range between 300 MHz (0.3 GHz) and 300 GHz,
shorter than that of a normal radio wave but longer than those of infrared radiation.
– Sender and receiver use some sort of dish antenna as shown in Fig.1.24.
– Microwaves are used in radar, in communications, and for heating in microwave ovens and in various
industrial processes.
Figure 1.24 Communication using Terrestrial Microwave
Difficulties:
1. Weather interferes with signals. For instance, clouds, rain, lightning, etc. may adversely affect
communication.
2. Radio transmissions easy to tap.
88. • The atmosphere is divided into five layers. It is thickest near the surface and thins out with height until it
eventually merges with space.
• 1) The troposphere is the first layer above the surface and contains half of the Earth's
atmosphere. Weather occurs in this layer.
2) Many jet aircraft fly in the stratosphere because it is very stable. Also, the ozone layer absorbs harmful
rays from the Sun.
3) Meteors or rock fragments burn up in the mesosphere.
4) The thermosphere is a layer with auroras. It is also where the space shuttle orbits.
5) The atmosphere merges into space in the extremely thin exosphere. This is the upper limit of our
atmosphere.
89. Satellite Communication
1. A communication satellite is essentially a big
microwave repeater or relay station in the sky.
2. Microwave signals from a ground station are picked up
by a transponder (a device attached to the satellite),
which amplifies the signal and rebroadcasts it on
another frequency that can be received by ground stations
at long distances.
3. To keep the satellite stationary with respect to the
ground based stations, the satellite is placed in a
geostationary orbit above the equator at an altitude
of about 36,000 km.
4. A satellite can be used for point-to-point
communication between two ground-based stations
as shown in Fig. 1.25.
90. Figure 1.25 Satellite Microwave Communication: point –to- point
Figure 1.26 Satellite Microwave Communication: Broadcast links
91. Characteristics:
1. Optimum frequency range for satellite communication is 1 to 10
GHz.
2. The most popular frequency band is 3.7 to 4.2 GHz for downlink
and 5.925 to 6.425 GHz for uplink transmissions.
3. The most important drawback is the long communication delay for the
round trip (about 270 ms) because of the long distance (about
72,000 km) the signal has to travel between two earth stations.
4. All stations under the downward beam can receive the
transmission.
5. It may be necessary to send encrypted data to protect against
piracy.
Uses:
1. Now-a-days communication satellites are not only used to handle
telephone, telex and television traffic over long distances, but are
used to support various internet based services such as e-mail,
FTP, World Wide Web (WWW), etc.
2. New types of services, based on communication satellites, are
emerging.
92. Channel Access on links
Multiple Access Techniques
Various multiple access techniques are
1. Frequency Division Multiple Access(FDMA)
2. Time Division Multiple Access (TDMA)
3. Code Division Multiple Access(CDMA)
1. Frequency Division Multiple Access
In frequency-division multiple access (FDMA), the available bandwidth is divided into
frequency bands.
Each station is allocated a band to send its data.
This method becomes inefficient when one frequency band is kept idle
while another is used heavily.
Figure 1.27 Frequency Division Multiple Access
93. 2. Time Division Multiple Access
In time-division multiple access (TDMA), the stations share the
bandwidth of the channel in time.
Each station is allocated a time slot during which it can send data.
The main problem with TDMA lies in achieving synchronization between
the different stations.
Each station needs to know the beginning of its slot and the location of its
slot.
Figure 1.28 Time Division Multiple Access
94. 3. Code Division Multiple Access
CDMA differs from FDMA because one channel occupies the entire
bandwidth of the link (transmission) at the same time.
It also differs from TDMA because all stations can send data at the same time
without timesharing.
CDMA simply means communication with different codes.
CDMA is based on coding theory. Each station is assigned a code, which is a
sequence of numbers called chips.
The chips are combined with the original data, and the result is transmitted through
the same medium.
Figure 1.29 Code Division Multiple Access
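A minimal sketch of the CDMA idea, assuming 4-chip orthogonal (Walsh) codes and bipolar data (+1/−1); the code values are illustrative, not from the text:

```python
# Hypothetical orthogonal chip sequences (Walsh codes) for two stations.
CODES = {"A": [1, 1, 1, 1], "B": [1, -1, 1, -1]}

def spread(bit, code):
    """Multiply one data bit (+1 or -1) by the station's chip sequence."""
    return [bit * c for c in code]

def despread(channel, code):
    """Recover a station's bit from the summed channel signal."""
    dot = sum(x * c for x, c in zip(channel, code))
    return 1 if dot > 0 else -1

# Both stations transmit at the same time; their signals add on the medium.
channel = [a + b for a, b in zip(spread(1, CODES["A"]), spread(-1, CODES["B"]))]
print(despread(channel, CODES["A"]), despread(channel, CODES["B"]))  # 1 -1
```

Because the codes are orthogonal, each receiver's dot product cancels the other station's contribution.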
95. Issues in the data link layer
Framing
1. Framing separates a message from one source to a destination. To transmit frames over
a link, it is necessary to mark the start and end of each frame.
2. Frames can be of fixed or variable size.
3. In fixed-size framing, there is no need for defining the boundaries of frames; in
variable-size framing, we need a delimiter (flag) to define the boundary of two
frames.
4. Variable-size framing uses two categories of protocols:
̶ Byte-oriented (or character oriented)
̶ Bit-oriented
5. In a byte-oriented protocol, the data section of a frame is a sequence of bytes; in
a bit-oriented protocol, the data section of a frame is a sequence of bits.
6. There are three framing techniques to address this:
1. Byte-Oriented Protocols (BISYNC, PPP, DDCMP)
2. Bit-Oriented Protocols (HDLC)
3. Clock-Based Framing (SONET)
96. 1. Byte Oriented protocols
1. This approach views each frame as a collection of bytes (characters)
rather than a collection of bits.
2. Such a byte-oriented approach is exemplified by the BISYNC
(Binary Synchronous Communication) protocol and the DDCMP
(Digital Data Communication Message Protocol).
3. These two protocols are examples of two different framing
techniques, the sentinel approach and the byte-counting
approach.
Sentinel Approach
The BISYNC protocol illustrates the sentinel approach to framing; its
frame format is shown in fig.1.30.
Figure 1.30 BISYNC Frame format
97. 1. The beginning of a frame is denoted by sending a special SYN
(synchronization) character.
2. The data portion of the frame is then contained between special
sentinel characters: STX (start of text) and ETX (end of text).
3. The SOH (start of header) field serves much the same purpose as
the STX field.
4. The frame format also includes a field labeled CRC (cyclic
redundancy check) that is used to detect transmission errors.
5. The problem with the sentinel approach is that the ETX character
might appear in the data portion of the frame.
6. BISYNC overcomes this problem by “escaping” the ETX character
by preceding it with a DLE (data-link-escape) character whenever
it appears in the body of a frame; the DLE character is also
escaped in the frame body. This approach is called character
stuffing.
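The character-stuffing rule can be sketched as follows; the ASCII DLE (0x10) and ETX (0x03) codes are used, and the helper names are illustrative:

```python
DLE, ETX = "\x10", "\x03"  # ASCII data-link-escape and end-of-text characters

def char_stuff(body: str) -> str:
    """Precede every DLE or ETX in the frame body with a DLE escape."""
    out = []
    for ch in body:
        if ch in (DLE, ETX):
            out.append(DLE)
        out.append(ch)
    return "".join(out)

def char_unstuff(stuffed: str) -> str:
    """Drop the DLE escapes inserted by char_stuff."""
    out, escaped = [], False
    for ch in stuffed:
        if escaped or ch != DLE:
            out.append(ch)
            escaped = False
        else:
            escaped = True  # next character is a literal control character
    return "".join(out)
```

After stuffing, an unescaped ETX can only mean end-of-frame, so the receiver never ends a frame early.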
98. Point-to-Point Protocol (PPP)
The more recent Point-to-Point Protocol (PPP) is commonly run
over dialup modem links. The format of the PPP frame is shown in fig.1.31.
Figure 1.31 PPP Frame Format
1. The Flag field has 01111110 as starting sequence.
2. The Address and Control fields usually contain default values.
3. The Protocol field is used for demultiplexing.
4. The frame payload size can be negotiated, but it is 1500 bytes by default.
5. The PPP frame format is unusual in that several of the field sizes are
negotiated rather than fixed.
6. Negotiation is conducted by a protocol called LCP (Link Control Protocol).
7. LCP sends control messages encapsulated in PPP frames; such messages
are denoted by an LCP identifier in the PPP Protocol field.
99. Byte-Counting Approach
Figure 1.32 DDCMP frame format
1. The number of bytes contained in a frame can be included as a field in
the frame header. DDCMP protocol uses this approach as shown in
fig.1.32.
2. COUNT Field specifies how many bytes are contained in the frame’s
body.
3. A transmission error could corrupt the COUNT field, in which case the
end of the frame would not be correctly detected.
4. Hence the receiver will accumulate as many bytes as the bad COUNT
field indicates and then use the error detection field to determine that
the frame is bad. This is sometimes called a framing error.
5. The receiver will then wait until it sees the next SYN character to start
collecting the bytes that make up the next frame.
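The byte-counting idea can be sketched with a hypothetical 1-byte COUNT header (DDCMP's actual count field is larger):

```python
def frame(body: bytes) -> bytes:
    """Prefix the frame body with a 1-byte COUNT header."""
    assert len(body) < 256, "a 1-byte count limits the body to 255 bytes"
    return bytes([len(body)]) + body

def deframe(stream: bytes) -> list[bytes]:
    """Split a byte stream back into frame bodies using the COUNT headers."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1 : i + 1 + count])
        # A corrupted count here would misalign every later frame -- the
        # framing-error case described above.
        i += 1 + count
    return frames

stream = frame(b"hello") + frame(b"world!")
print(deframe(stream))  # [b'hello', b'world!']
```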
100. 2. Bit-Oriented Protocols (HDLC)
1. In this, frames are viewed as a collection of bits.
2. The Synchronous Data Link Control (SDLC) protocol is an example of a bit-oriented protocol; SDLC was later
standardized by the ISO as the High-Level Data Link Control (HDLC) protocol.
Figure 1.33 HDLC Frame Format
3. HDLC denotes both the beginning and the end of a frame with the distinguished bit sequence 01111110.
4. Because this sequence might appear anywhere in the body of the frame, it is avoided by bit stuffing.
5. On the sending side, any time five consecutive 1’s have been transmitted from the body of the message
(i.e., excluding when the sender is trying to transmit the distinguished 01111110 sequence), the sender
inserts a 0 before transmitting the next bit.
6. By looking at the next bit, the receiver can distinguish between these two cases:
If it sees a 0 (i.e., the last eight bits it has looked at are 01111110), then it is the end-of- frame
marker.
If it sees a 1 (i.e., the last eight bits it has looked at are 01111111), then there must have been an
error and the whole frame is discarded.
101. Byte stuffing and unstuffing
Byte stuffing is the process of adding 1 extra byte whenever there is a flag or
escape character in the text.
102. A frame in a bit-oriented protocol
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s
follow a 0 in the data, so that the receiver does not mistake the pattern
01111110 for a flag.
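The stuffing and unstuffing rules can be sketched on strings of '0'/'1' characters:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")  # stuffed bit: the flag can no longer appear
                run = 0
        else:
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                i += 1  # skip the stuffed 0
                run = 0
        else:
            run = 0
        i += 1
    return "".join(out)

print(bit_stuff("01111110"))  # 011111010 -- the flag pattern is broken up
```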
103. 3. Clock-Based Framing (SONET)
1. The Synchronous Optical Network (SONET) standard is used for long-distance transmission of data
over optical networks.
2. It supports multiplexing of several low-speed links into one high-speed link.
3. An STS-1 frame is used in this method.
Figure 1.34 A SONET STS-1 frame
4. It is arranged as nine rows of 90 bytes each, and the first 3 bytes of each row are
overhead, with the rest being available for data.
5. The first 2 bytes of the frame contain a special bit pattern, and it is these bytes that
enable the receiver to determine where the frame starts.
6. The receiver looks for the special bit pattern consistently, once in every 810 bytes, since
each frame is 9 x 90 = 810 bytes long.
104. Figure 1.35 Three STS-1 frames multiplexed onto one STS-3c frame.
1. The STS-N frame can be thought of as consisting of N STS-1 frames, where the
bytes from these frames are interleaved; that is, a byte from the first frame is
transmitted, then a byte from the second frame, and so on.
2. Payloads from these STS-1 frames can be linked together to form a larger STS-N
payload; such a link is denoted STS-Nc. One of the overhead bits is used for this
purpose.
105. Error Detection and Correction
1. Networks must be able to transfer data from one device to another with complete
accuracy.
2. Data can be corrupted during transmission.
3. For reliable communication, errors must be detected and corrected.
4. Error detection and correction are implemented either at the data link layer or
the transport layer of the OSI model.
5. The number of bit positions in which two code words differ is called the Hamming
distance.
Types of errors
These errors can be divided into two types:
– Single-bit error
– Burst error
106. Single-bit Error
The term single-bit error means that only one
bit of a given data unit (such as a byte,
character, or data unit) is changed from 1 to 0
or from 0 to 1, as shown in Fig.1.36.
Figure 1.36 Single bit error
107. 2. Burst Error
1. The term burst error means that two or more bits in the data unit have changed from 0 to 1 or
vice-versa.
2. Burst errors are most likely to happen in serial transmission.
3. The length of the burst error is measured from the first corrupted bit to the last corrupted bit.
4. Some bits in between may not be corrupted.
Figure 1.37 Burst Error
Figure 1.38 Burst Error
108. Error Detecting Codes
• Error detection means to decide whether the received data is correct or
not without having a copy of the original message.
• Error detection uses the concept of redundancy, which means adding
extra bits for detecting errors at the destination.
110. Simple Parity Checking or One-dimension Parity Check(VRC)
1. The most common and least expensive mechanism for error- detection is the simple
parity check.
2. In this technique, a redundant bit, called a parity bit, is appended to every data unit so
that the number of 1s in the unit (including the parity bit) becomes even.
3. Blocks of data from the source are passed through a parity bit generator,
where a parity bit of 1 is added to the block if it contains an odd number of 1s (ON bits)
and 0 is added if it contains an even number of 1s.
4. At the receiving end the parity bit is computed from the received data bits and
compared with the received parity bit, as shown in Fig.1.38.
5. This scheme makes the total number of 1’s even, that is why it is called even parity
checking.
Figure 1.38 Even-parity checking scheme
Example for Vertical Redundancy Check (VRC)
– Append a single bit at the end of data block such that the number of
ones is even
Even Parity (odd parity is similar)
0110011 → 01100110
0110001 → 01100011
– VRC is also known as Parity Check
– It can detect single bit error
– It can detect burst errors only if the total number of errors is odd.
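The even-parity rule, including the two example codewords above, can be checked with a short sketch (helper names are illustrative):

```python
def vrc_encode(data: str) -> str:
    """Append one parity bit so that the total number of 1s is even."""
    parity = str(data.count("1") % 2)  # 1 iff the data has an odd number of 1s
    return data + parity

def vrc_check(codeword: str) -> bool:
    """A received codeword is consistent iff its count of 1s is even."""
    return codeword.count("1") % 2 == 0

print(vrc_encode("0110011"), vrc_encode("0110001"))  # 01100110 01100011
```

Flipping any single bit makes the count of 1s odd, which is why VRC catches every single-bit error but misses any even number of flipped bits.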
112. Decimal value Data Block Parity bit Code word
0 0000 0 00000
1 0001 1 00011
2 0010 1 00101
3 0011 0 00110
4 0100 1 01001
5 0101 0 01010
6 0110 0 01100
7 0111 1 01111
8 1000 1 10001
9 1001 0 10010
10 1010 0 10100
11 1011 1 10111
12 1100 0 11000
13 1101 1 11011
14 1110 1 11101
15 1111 0 11110
Table 1.1 Possible 4-bit data words and corresponding code words
113. Two-dimension Parity Check(LRC)
1. Performance can be improved by using two-dimensional parity check, which organizes the block of bits
in the form of a table.
2. Parity check bits are calculated for each row, which is equivalent to a simple parity check bit.
3. Parity check bits are also calculated for all columns then both are sent along with the data.
4. At the receiving end these are compared with the parity bits calculated on the received data.
5. Organize data into a table and create a parity for each column.
Figure 1.39 Two-dimension Parity Checking
114. LRC increases the likelihood of detecting
burst errors.
If two bits in one data unit are damaged and
two bits in exactly the same positions in
another data unit are also damaged, the LRC
checker will not detect the error.
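A sketch of building the two-dimensional parity block (each row gets a parity bit, then one column-parity row is appended; the input rows are illustrative):

```python
def two_d_parity(rows: list[str]) -> list[str]:
    """Append an even-parity bit to each row, then a column-parity row."""
    with_row_parity = [r + str(r.count("1") % 2) for r in rows]
    width = len(with_row_parity[0])
    column_row = "".join(
        str(sum(int(r[i]) for r in with_row_parity) % 2) for i in range(width)
    )
    return with_row_parity + [column_row]

for row in two_d_parity(["1100", "1010"]):
    print(row)
```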
115. Checksum
1. In the checksum error detection scheme, the data is divided into k segments each of n bits.
2. In the sender’s end the segments are added using 1’s complement arithmetic to get the sum.
3. The sum is complemented to get the checksum. The checksum segment is sent along with the data
segments(10110011 ,10101011, 01011010 ,11010101 )as shown in Fig. 1.40 (a).
4. At the receiver’s end, all received segments are added using 1’s complement arithmetic to get the sum.
5. The sum is complemented. If the result is zero, the received data is accepted; otherwise discarded, as
shown in Fig.1.40 (b).
Figure 1.40 (a) Sender’s end for the calculation of the checksum, (b) Receiving end for checking the checksum
116. At the sender
The unit is divided into k sections, each of n
bits.
All sections are added together using one’s
complement to get the sum.
The sum is complemented and becomes the
checksum.
The checksum is sent with the data
117. At the receiver
The unit is divided into k sections, each of n
bits.
All sections are added together using one’s
complement to get the sum.
The sum is complemented.
If the result is zero, the data are accepted;
otherwise, they are rejected.
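The sender and receiver procedures can be sketched with the four 8-bit segments quoted above; the checksum value printed below follows from the algorithm, not from the text:

```python
MASK = 0xFF  # 8-bit segments

def ones_complement_sum(values: list[int]) -> int:
    """Add values, wrapping any carry out of 8 bits back into the sum."""
    total = 0
    for v in values:
        total += v
        total = (total & MASK) + (total >> 8)  # end-around carry
    return total

segments = [0b10110011, 0b10101011, 0b01011010, 0b11010101]

# Sender: complement the one's-complement sum to get the checksum.
checksum = MASK ^ ones_complement_sum(segments)

# Receiver: summing all segments plus the checksum must complement to zero.
received_ok = (MASK ^ ones_complement_sum(segments + [checksum])) == 0
print(bin(checksum), received_ok)
```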
118. Cyclic Redundancy Checks (CRC)
1. This Cyclic Redundancy Check is the most powerful and easy to implement
technique. CRC is based on binary division.
2. In CRC, a sequence of redundant bits, called cyclic redundancy check bits,
are appended to the end of data unit so that the resulting data unit becomes
exactly divisible by a second, predetermined binary number.
3. At the destination, the incoming data unit is divided by the same number. If there
is no remainder, the data unit is assumed to be correct and is therefore accepted.
4. A remainder indicates that the data unit has been damaged in transit and
therefore must be rejected.
Figure 1.41 Basic scheme for Cyclic Redundancy Checking
119. 1. Consider the case where k=1101 as in fig.1.42. Hence we have to divide 1101000 (i.e. k
appended by 3 zeros) by 1011, which produces the remainder r=001, so that the bit frame
(k+r) =1101001 is actually being transmitted through the communication channel.
2. At the receiving end, if the received number, i.e.1101001 is divided by the same generator
polynomial 1011 to get the remainder as 000, it can be assumed that the data is free of
errors.
Figure 1.42 Cyclic Redundancy Checks (CRC)
1. A polynomial is selected to have at least the following properties:
It should not be divisible by X.
It should be divisible by (X+1)
2. The first condition guarantees that all burst errors of a length equal to the degree of
polynomial are detected.
3. The second condition guarantees that all burst errors affecting an odd number of bits are
detected.
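The modulo-2 division in the example (dataword 1101, divisor 1011, remainder 001) can be reproduced with a sketch; the helper names are illustrative:

```python
def crc_remainder(data: str, gen: str) -> str:
    """Divide the data (with len(gen)-1 zeros appended) by gen in modulo-2
    arithmetic and return the remainder."""
    n = len(gen) - 1
    bits = list(data + "0" * n)
    for i in range(len(data)):
        if bits[i] == "1":
            for j, g in enumerate(gen):  # XOR the divisor in at this position
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-n:])

def crc_check(codeword: str, gen: str) -> bool:
    """A received codeword is error-free iff it divides with zero remainder."""
    bits = list(codeword)
    for i in range(len(codeword) - (len(gen) - 1)):
        if bits[i] == "1":
            for j, g in enumerate(gen):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return set(bits) == {"0"}

r = crc_remainder("1101", "1011")
print(r, crc_check("1101" + r, "1011"))  # 001 True
```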
122. Error Correcting Codes
1. Error correction can be handled in two ways.
2. In one, when an error is discovered, the receiver can have the sender retransmit the entire data
unit. This is known as backward error correction.
3. In the other, the receiver can use an error-correcting code, which automatically corrects certain
errors. This is known as forward error correction.
Single-bit error correction
1. A single additional bit can detect an error, but it is not sufficient to correct that error too.
2. For correcting an error one has to know the exact position of error, i.e. exactly which bit is in
error.
3. For example, to correct a single-bit error in an ASCII character, the error correction must
determine which one of the seven bits is in error.
4. To do this, we have to add some additional redundant bits.
5. To calculate the numbers of redundant bits (r) required to correct d data bits, let us find out the
relationship between the two.
123. 1. So we have (d+r) as the total number of bits which are to be transmitted; then r
must be able to indicate at least d+r+1 different values.
2. One value means no error, and the remaining d+r values indicate the location of an
error in each of the d+r positions. So, d+r+1 states must be distinguishable by r bits,
and r bits can indicate 2^r states. Hence, 2^r must be at least d+r+1, i.e.
2^r >= d+r+1
1. The value of r must be determined by putting the value of d in the relation.
2. For example, if d is 7, then the smallest value of r that satisfies the above relation
is 4. So the total number of bits to be transmitted is 11 (d+r = 7+4 = 11).
3. A technique developed by R.W.Hamming provides a practical solution. The
solution or coding scheme he developed is commonly known as Hamming Code.
4. Hamming code can be applied to data units of any length and uses the
relationship between the data bits and redundant bits.
124. Number of Data Bits (d) Number of Redundancy Bits (r) Total Bits (d+r)
1 2 3
2 3 5
3 3 6
4 3 7
5 4 9
6 4 10
7 4 11
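The table values above are the smallest r satisfying 2^r >= d + r + 1, which a direct search reproduces:

```python
def redundancy_bits(d: int) -> int:
    """Smallest r such that 2**r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

for d in range(1, 8):
    print(d, redundancy_bits(d), d + redundancy_bits(d))
```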
125. The basic approach for error detection using Hamming code is as follows:
1. To each group of m information bits, k parity bits are added to form an (m+k)-bit code as shown
in Fig.1.44.
2. The location of each of the (m+k) digits is assigned a decimal value.
3. The k parity bits are placed in positions 1, 2, 4, ..., 2^(k-1). The k parity checks are performed
on selected digits of each codeword.
4. At the receiving end the parity bits are recalculated. The decimal value of the k parity bits
provides the bit position in error, if any.
Figure 1.44 Use of Hamming code for error correction for a 4-bit data
126. 1. Figure 1.44 shows how Hamming code is used for correction of 4-bit data
(d4d3d2d1) with the help of three redundant bits (r4, r2 and r1), placed at bit
positions 4, 2 and 1.
2. For the example data 1010, first r1 (0) is calculated considering the parity of
bit positions 1, 3, 5 and 7.
3. Then the parity bit r2 is calculated considering bit positions 2, 3, 6 and 7.
4. Finally, the parity bit r4 is calculated considering bit positions 4, 5, 6 and 7, as
shown.
5. If any corruption occurs in the transmitted codeword 1010010, the bit position
in error can be found by recalculating the three check bits at the receiving end.
6. For example, if the received codeword is 1110010, the recalculated check bits
are 110, which indicates that the bit position in error is 6, the decimal value of
110.
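The worked example above (data 1010, codeword 1010010, error in bit position 6) can be reproduced with a small sketch. The even-parity convention and bit-position coverage follow the description above; the helper names are illustrative:

```python
# Sketch of the Hamming(7,4) example, even parity.
# Bit positions are 1..7; parity bits sit at positions 1, 2 and 4.
def encode(d4, d3, d2, d1):
    bits = {7: d4, 6: d3, 5: d2, 3: d1}
    bits[1] = bits[3] ^ bits[5] ^ bits[7]          # covers positions 1, 3, 5, 7
    bits[2] = bits[3] ^ bits[6] ^ bits[7]          # covers positions 2, 3, 6, 7
    bits[4] = bits[5] ^ bits[6] ^ bits[7]          # covers positions 4, 5, 6, 7
    return ''.join(str(bits[p]) for p in range(7, 0, -1))

def syndrome(code):                                # code given MSB (position 7) first
    bits = {7 - i: int(b) for i, b in enumerate(code)}
    c1 = bits[1] ^ bits[3] ^ bits[5] ^ bits[7]
    c2 = bits[2] ^ bits[3] ^ bits[6] ^ bits[7]
    c4 = bits[4] ^ bits[5] ^ bits[6] ^ bits[7]
    return c4 * 4 + c2 * 2 + c1                    # decimal bit position in error (0 = none)

print(encode(1, 0, 1, 0))   # -> 1010010
print(syndrome('1110010'))  # -> 6 (bit position 6 is in error)
```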
127. Single-bit error correction
To correct an error, the receiver reverses the value
of the altered bit. To do so, it must know which
bit is in error.
Number of redundancy bits needed
• Let data bits = m
• Redundancy bits = r
Total message sent = m+r
The value of r must satisfy the following relation:
2^r ≥ m + r + 1
135. Link-level Flow Control
1. Flow control ensures that a transmitting station does not overwhelm a receiving station
that has less processing capability.
2. Error Control involves both error detection and error correction.
3. When an error is detected, the receiver can have the specified frame retransmitted by
the sender. This process is commonly known as Automatic Repeat Request (ARQ).
Flow Control
1. Flow Control is a set of procedures that tells the sender how much data it can transmit
before it must wait for an acknowledgment from the receiver.
2. The flow of data should not be allowed to overwhelm the receiver.
3. There are two methods developed for flow control. They are
̶ Stop-and-wait (Request/reply)
̶ Sliding-window
136. 1. Stop-and-wait
1. This is the simplest form of flow control where a sender transmits a data frame.
2. After receiving the frame, the receiver indicates its willingness to accept another frame by sending back
an ACK frame acknowledging the frame just received.
3. The sender must wait until it receives the ACK frame before sending the next data frame. This is
sometimes referred to as ping-pong behavior.
4. Request/reply is simple to understand and easy to implement, but not very efficient.
5. Figure 1.45 illustrates the operation of the stop-and-wait protocol. The protocol relies on two-way
transmission to allow the receiver at the remote node to return frames acknowledging the successful
transmission.
6. A small processing delay may be introduced.
7. The major drawback of stop-and-wait flow control is that only one frame can be in transmission at a time;
this leads to inefficiency if the propagation delay is much longer than the transmission delay.
Figure 1.45 Stop-and Wait protocol
137. Link Utilization in Stop-and-Wait
Let us assume the following:
1. Transmission time: The time it takes for a station to transmit a frame (normalized to a value of 1).
2. Propagation delay: The time it takes for a bit to travel from sender to receiver (expressed as a).
3. a < 1 :The frame is sufficiently long such that the first bits of the frame arrive at the destination before the source has
completed transmission of the frame.
4. a > 1: Sender completes transmission of the entire frame before the leading bits of the frame arrive at the receiver.
5. The link utilization U = 1/ (1+2a), a = Propagation time / transmission time
6. When the propagation time is small, as in case of LAN environment, the link utilization is good. But, in case of long
propagation delays, as in case of satellite communication, the utilization can be very poor.
7. To improve the link utilization, we can use sliding-window protocol instead of using stop-and-wait protocol.
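The utilization figures above can be checked with a minimal sketch (the function name and sample values of a are illustrative):

```python
# Sketch: stop-and-wait link utilization U = 1 / (1 + 2a),
# where a = propagation time / transmission time.
def stop_and_wait_utilization(a):
    return 1 / (1 + 2 * a)

print(stop_and_wait_utilization(0.1))   # LAN-like link: good utilization (~0.83)
print(stop_and_wait_utilization(100))   # satellite-like link: very poor (~0.005)
```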
2. Sliding Window
1. Sliding window is an abstract concept that defines the range of sequence numbers that is the concern of the sender and
receiver.(Both need to deal with part of the possible sequence numbers)
2. In stop-and-wait flow control, only one frame at a time can be in transit.
3. Efficiency can be greatly improved by allowing multiple frames to be in transit at the same time.
4. To keep track of the frames, the sending station numbers the frames sequentially.
138. 1. The receiver acknowledges a frame by sending an ACK frame that includes the sequence number of the next frame
expected.
2. This also explicitly announces that it is prepared to receive the next N frames, beginning with the number specified. This
scheme can be used to acknowledge multiple frames.
3. It could receive frames 2, 3, 4 but withhold ACK until frame 4 has arrived. By returning an ACK with sequence number 5, it
acknowledges frames 2, 3, 4 in one go.
4. The receiver needs a buffer of size 1. The sliding window algorithm is a method of flow control for network data transfers.
5. TCP, the Internet's stream transfer protocol, uses a sliding window algorithm.
Figure 1.46 Buffer in sliding window
6. A sliding window algorithm places a buffer between the application program and the network data flow.
7. For TCP, the buffer is typically in the operating system kernel, but this is more of an implementation detail than a hard-
and-fast requirement.
139. Sender sliding Window: At any instant, the sender is permitted to send frames with sequence
numbers in a certain range (the sending window) as shown in Fig. 1.47
Figure 1.47 Sender’s window
1. If the header of the frame allows k bits, the sequence numbers range from 0 to 2^k - 1.
2. The sender maintains a list of sequence numbers that it is allowed to send (the sender window).
The size of the sender's window is at most 2^k - 1.
3. The sender is provided with a buffer equal to the window size. The receiver also maintains a
window of size 2^k - 1.
140. Figure 1.48 Receiver window
Receiver sliding Window:
1. The receiver always maintains a window of size 1, as shown in Fig.1.48. It looks for a specific frame (frame 4
in the figure) to arrive in a specific order.
2. If any other frame is received (out of order), it is discarded and needs to be resent. The receiver
window slides by one as the expected frame is received and accepted, as shown in the figure.
3. The receiver acknowledges a frame by sending an ACK frame that includes the sequence number of the
next frame expected.
4. This also explicitly announces that it is prepared to receive the next N frames, beginning with the number
specified. This scheme can be used to acknowledge multiple frames.
5. It could receive frames 2, 3, 4 but withhold ACK until frame 4 has arrived.
6. By returning an ACK with sequence number 5, it acknowledges frames 2, 3, 4 at one time. The receiver
needs a buffer of size 1.
141. 1. If the window size is larger than the packet size, then multiple packets can be outstanding
in the network, since the sender knows that buffer space is available on the receiver to
hold all of them.
2. Ideally, a steady-state condition can be reached where a series of packets (in the forward
direction) and window announcements (in the reverse direction) are constantly in transit.
3. As each new window announcement is received by the sender, more data packets are
transmitted.
Sliding Window Flow Control
1. Allows transmission of multiple frames
2. Assigns each frame a k-bit sequence number
3. The range of sequence numbers is [0 … 2^k - 1], i.e., frames are counted modulo 2^k.
4. The link utilization in the case of the sliding window protocol is
U = 1,            for N ≥ 2a + 1
U = N/(1 + 2a),   for N < 2a + 1
where N is the window size and a = propagation time / transmission time.
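The two-case utilization formula can be sketched directly (the function name and sample values are illustrative):

```python
# Sketch: sliding-window link utilization for window size N and
# a = propagation time / transmission time.
def sliding_window_utilization(N, a):
    if N >= 2 * a + 1:
        return 1.0              # the pipe stays full
    return N / (1 + 2 * a)      # sender stalls waiting for ACKs

print(sliding_window_utilization(7, 0.5))   # N >= 2a+1 -> 1.0
print(sliding_window_utilization(7, 100))   # long propagation delay -> 7/201
```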
142. Error Control Techniques
1. When an error is detected in a message, the receiver sends a request to the
transmitter to retransmit the message or packet.
2. The most popular retransmission scheme is known as Automatic-Repeat-Request
(ARQ). Such schemes, where receiver asks transmitter to re-transmit if it detects
an error, are known as reverse error correction techniques.
3. There exist three popular ARQ techniques, as shown in Fig.1.49.
Figure 1.49 Error control techniques
143. 1. Stop-and-Wait ARQ
1. In Stop-and-Wait ARQ, which is simplest among all protocols, the sender (say
station A) transmits a frame and then waits till it receives positive
acknowledgement (ACK) or negative acknowledgement (NACK) from the
receiver (say station B).
2. Station B sends an ACK if the frame is received correctly, otherwise it sends
NACK.
3. Station A sends a new frame after receiving ACK; otherwise it retransmits the
old frame, if it receives a NACK. This is illustrated in Fig 1.50
Figure 1.50 Stop-And-Wait ARQ technique
144. 1. To tackle the problem of a lost or damaged frame, the sender is equipped
with a timer.
2. In the case of a lost frame or a lost ACK, the sender retransmits the old frame. In Fig.1.51, the
second data PDU is lost during transmission.
3. The sender is unaware of this loss, but starts a timer after sending each PDU.
4. Normally an ACK PDU is received before the timer expires. In this case no ACK
is received, and the timer counts down to zero and triggers retransmission of
the same PDU by the sender.
5. The sender always starts a timer following transmission, but in the second
transmission receives an ACK PDU before the timer expires, finally indicating
that the data has now been received by the remote node.
Figure 1.51 Retransmission due to lost frame
146. 1. The receiver can now identify from the frame's label that it has received a
duplicate frame, and the duplicate is discarded.
2. To tackle the problem of damaged frames, there is a concept of NACK
frames, i.e. Negative Acknowledge frames.
3. The receiver transmits a NACK frame to the sender if it finds the received
frame to be corrupted.
4. When a NACK is received by a transmitter before the time-out, the old
frame is sent again as shown in Fig.1.52.
Figure 1.52 Retransmission due to damaged frame
147. The main advantages of stop-and-wait ARQ are:
• simplicity
• minimum buffer size.
148. 2. Sliding Window ARQ
2.1 Go-back-N ARQ
1. The most popular ARQ protocol is the go-back-N ARQ, where the sender sends the frames
continuously without waiting for acknowledgement.
2. As the receiver receives the frames, it keeps on sending ACKs or a NACK, in case a frame is
incorrectly received.
3. When the sender receives a NACK, it retransmits the frame in error plus all the succeeding
frames as shown in Fig.1.53. Hence, the name of the protocol is go-back-N ARQ.
4. If a frame is lost, the receiver sends a NACK after receiving the next frame, as shown in
Fig.1.54.
5. If there is a long delay before the NACK is sent, the sender will resend the lost frame
after its timer times out.
6. If the ACK frame sent by the receiver is lost, the sender resends the frames after its timer
times out as shown in Fig.1.55.
7. Assuming full-duplex transmission, the receiving end sends piggybacked acknowledgement
by using some number in the ACK field of its data frame.
8. Let us assume that a 3-bit sequence number is used and suppose that a station sends
frame 0 and gets back an RR1, and then sends frames 1, 2, 3, 4, 5, 6, 7, 0 and gets another
RR1.
9. This might either mean that RR1 is a cumulative ACK or that all 8 frames were damaged. This
ambiguity can be overcome if the maximum window size is limited to 7, i.e. for a k-bit
sequence number field it is limited to 2^k - 1.
10. The number N (= 2^k - 1) specifies how many frames can be sent without receiving an
acknowledgement.
151. Figure 1.55 Lost ACK in Go-Back-N ARQ
1. If no acknowledgement is received after sending N frames, the sender takes the help of a timer.
2. After the time-out, it resumes retransmission. The go-back-N protocol also takes care of damaged frames
and damaged ACKs. This scheme is a little more complex than the previous one but gives much higher
throughput.
152. 2.2 Selective-Repeat ARQ
1. The selective-repeat ARQ scheme retransmits only those frames for which a
NACK is received or whose timer has expired, as shown in the
Fig.1.56.
2. This is the most efficient among the ARQ schemes, but the sender must
be more complex so that it can send out-of-order frames.
Figure 1.56 Selective-repeat ARQ
3. The receiver must also have storage space to store the post-NACK frames
and processing power to reinsert frames in the proper sequence.
155. Line coding
• Line coding converts a string of 1s and 0s (digital data) into a
sequence of signals that denote the 1s and 0s.
• For example, a high voltage level (+V) could represent a "1"
and a low voltage level (0 or -V) could represent a "0".
• This conversion of digital data to a digital signal, known as line
coding, is shown in Fig.1.58.
Figure.1.58 Line coding to convert digital data to digital signal
156. Line coding Characteristics
• Number of signal levels:
– This refers to the number of values allowed in a signal, known as signal levels, to
represent data. Fig.1.59(a) shows two signal levels, whereas Fig.1.59(b) shows
three signal levels to represent binary data.
– A data element is the smallest entity that can represent a piece of information
(a bit). A signal element is the shortest unit of a digital signal.
– Data elements are what we need to send; signal elements are what we can
send. Data elements are being carried; signal elements are the carriers.
• Bit rate versus Baud rate:
– The bit rate represents the number of bits sent per second.
– The baud rate defines the number of signal elements per second in the signal.
Figure .1.59 (a) Signal with two voltage levels, (b) Signal with three voltage levels
157. • Signal Spectrum:
– Different encoding of data leads to different spectrum of the signal.
– It is necessary to use suitable encoding technique to match with the medium so that the
signal suffers minimum attenuation and distortion as it is transmitted through a
medium.
• Synchronization:
– To interpret the received signal correctly, the bit interval of the receiver should be
exactly same or within certain limit of that of the transmitter.
– Any mismatch between the two may lead to wrong interpretation of the received signal.
Usually, the clock is generated and synchronized from the received signal with the help of
special hardware known as a Phase-Locked Loop (PLL).
• DC components:
– After line coding, the signal may have zero frequency component in the spectrum of the
signal, which is known as the direct-current (DC) component.
– DC component in a signal is not desirable because the DC component does not pass
through some components of a communication system such as a transformer.
– This leads to distortion of the signal and may create error at the output. The DC
component also results in unwanted energy loss on the line.
– When the voltage level in a digital signal is constant for a while, the spectrum creates
very low frequencies, called DC components, that present problems for a system that
cannot pass low frequencies.
160. Line Coding Techniques
• Line coding techniques can be divided into three broad categories: unipolar, polar and bipolar, as
shown in Fig.1.60.
Figure.1.60 Three basic categories of line coding techniques
1. Unipolar:
• The unipolar encoding technique uses only two voltage levels, with a single polarity,
as shown in Fig.1.61.
• In this encoding approach, the bit rate is the same as the data rate.
• Unfortunately, a DC component is present in the encoded signal, and there is loss of synchronization
for long sequences of 0s and 1s. It is simple but obsolete.
Figure.1.61 Unipolar encoding with two voltage levels
161. 2. Polar:
• Polar encoding technique uses two voltage levels – one positive and the other one
negative.
Figure.1.62 Encoding Schemes under polar category
2.1 Non Return to zero (NRZ):
• The most common and easiest way to transmit digital signals is to use two
different voltage levels for the two binary digits.
• Usually a negative voltage is used to represent one binary value and a positive
voltage to represent the other.
• The data is encoded as the presence or absence of a signal transition at the
beginning of the bit time.
• As shown in the figure below, in NRZ encoding, the signal level remains same
throughout the bit-period. There are two encoding schemes in NRZ: NRZ-L and
NRZ-I, as shown in Fig.1.63.
162. Figure 1.63 NRZ encoding scheme
In NRZ-L the level of the voltage determines the value of the bit. In NRZ-I the inversion
or the lack of inversion determines the value of the bit.
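The NRZ-L and NRZ-I schemes described above can be sketched as one signal level per bit period. The polarity mapping (+1 for "1" in NRZ-L, invert-on-"1" in NRZ-I, starting level -1) is an illustrative assumption; real systems may use the opposite convention:

```python
# Sketch of NRZ-L: the level itself encodes the bit.
def nrz_l(bits):
    return [+1 if b == '1' else -1 for b in bits]

# Sketch of NRZ-I: a '1' inverts the level at the bit boundary,
# a '0' keeps the previous level.
def nrz_i(bits, level=-1):
    out = []
    for b in bits:
        if b == '1':
            level = -level
        out.append(level)
    return out

print(nrz_l('01001'))  # -> [-1, 1, -1, -1, 1]
print(nrz_i('01001'))  # -> [-1, 1, 1, 1, -1]
```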
163. 2.2 Return to Zero RZ:
• To ensure synchronization, there must be a signal transition in each bit as shown in
Fig. 1.64.
Key characteristics of RZ coding:
• Three levels
• The signal rate is double the data rate
• No DC component
• Good synchronization
• The main limitation is the increase in bandwidth
Figure.1.64 RZ encoding technique
164. 2.3 Biphase:
• To overcome the limitations of NRZ encoding, biphase encoding techniques can be adopted.
Manchester and differential Manchester Coding are the two common Biphase techniques in
use, as shown in Fig.1.65.
• In the standard Manchester coding there is a transition at the middle of each bit period. A
binary 1 corresponds to a low-to-high transition and a binary 0 to a high-to-low transition
in the middle.
• In differential Manchester, the inversion in the middle of each bit is used for synchronization.
• A 0 is represented by a transition both at the beginning and in the middle of the bit period,
and a 1 by a transition only in the middle.
Figure.1.65 Manchester encoding schemes
165. In Manchester and differential Manchester encoding, the transition
at the middle of the bit is used for synchronization.
166. 3. Bipolar Encoding:
• Bipolar AMI uses three voltage levels.
• Unlike RZ, the zero level is used to represent a
binary 0, and binary 1s are represented by
alternating positive and negative voltages, as
shown in Fig.1.66.
Figure.1.66 Bipolar AMI signal
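The AMI rule above can be sketched as follows (the initial polarity before the first pulse is an illustrative assumption):

```python
# Sketch of bipolar AMI: a 0 maps to the zero level,
# successive 1s alternate between positive and negative pulses.
def ami(bits, last=-1):
    out = []
    for b in bits:
        if b == '0':
            out.append(0)
        else:
            last = -last        # alternate mark inversion
            out.append(last)
    return out

print(ami('101100'))  # -> [1, 0, -1, 1, 0, 0]
```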
167. Bipolar with 8-zero substitution (B8ZS):
• The limitation of bipolar AMI is overcome in B8ZS, which is used in North America.
A sequence of eight zeros is replaced by the following encoding:
• A sequence of eight 0s is replaced by 000+-0-+ if the previous pulse was positive.
• A sequence of eight 0s is replaced by 000-+0+- if the previous pulse was negative.
High Density Bipolar-3 Zeros (HDB3): Another alternative, used in Europe and Japan,
is HDB3. It replaces a sequence of four zeros by a code as per the rule given in the
following table. The encoded signals are shown in Fig.1.67.
Figure.1.67 B8ZS and HDB3 encoding techniques
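The B8ZS substitution can be sketched on top of AMI. This is a minimal sketch: the initial polarity before any pulse is an illustrative assumption, and runs longer than eight zeros are handled eight at a time:

```python
# Sketch of B8ZS: eight consecutive zeros are replaced by the
# pattern 000VB0VB, where V is a bipolar violation (same polarity
# as the preceding pulse) and B a normal alternating pulse.
def b8zs(bits):
    out, last = [], -1                  # last non-zero pulse polarity (assumed)
    i = 0
    while i < len(bits):
        if bits[i:i + 8] == '00000000':
            v = last                    # violation repeats the last polarity
            out += [0, 0, 0, v, -v, 0, -v, v]
            last = v                    # final pulse of the pattern
            i += 8
        elif bits[i] == '1':
            last = -last                # normal AMI alternation
            out.append(last)
            i += 1
        else:
            out.append(0)
            i += 1
    return out

print(b8zs('1' + '0' * 8))  # -> [1, 0, 0, 0, 1, -1, 0, -1, 1]
```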
168. Comparison between 1G, 2G, 3G, 4G and 5G

1G
Definition: Analog
Throughput/Speed: 14.4 Kbps (peak)
Technology: AMPS, NMT, TACS
Time period: 1970-1980
Features: During 1G, wireless phones were used for voice only.

2G
Definition: Digital narrowband circuit data
Throughput/Speed: 9.6/14.4 Kbps
Technology: TDMA, CDMA
Time period: 1990-2000
Features: 2G capabilities were achieved by allowing multiple users on a single channel via multiplexing. During 2G, cellular phones were used for data as well as voice.

2.5G
Definition: Packet data
Throughput/Speed: 171.2 Kbps (peak), 20-40 Kbps typical
Technology: GPRS
Time period: 2001-2004
Features: In 2.5G the internet became popular and data became more relevant. Multimedia services and streaming started to grow, and phones started supporting web browsing, though it was limited and very few phones had it.

3G
Definition: Digital broadband packet data
Throughput/Speed: 3.1 Mbps (peak), 500-700 Kbps typical
Technology: CDMA 2000 (1xRTT, EV-DO), UMTS, EDGE
Time period: 2004-2005
Features: 3G supports multimedia services along with streaming. Universal access and portability across different device types (telephones, PDAs, etc.) were made possible.

3.5G
Definition: Packet data
Throughput/Speed: 14.4 Mbps (peak), 1-3 Mbps typical
Technology: HSPA
Time period: 2006-2010
Features: 3.5G supports higher throughput and speeds to meet the higher data needs of consumers.

4G
Definition: Digital broadband packet data, all IP
Throughput/Speed: Very high throughput; 100-300 Mbps (peak), 3-5 Mbps typical, 100 Mbps over Wi-Fi
Technology: WiMAX, LTE, Wi-Fi
Time period: Now
Features: Speeds are further increased to keep up with demand for data access. High-definition streaming is supported, and new phones with HD capabilities have appeared. Portability is increased further, and worldwide roaming is no longer a distant dream.

5G
Definition: Not yet standardized
Throughput/Speed: Probably gigabits
Technology: Not yet defined
Time period: Expected around 2020
Features: Currently there is no 5G technology deployed. When it becomes available, it is expected to provide very high speeds to consumers and efficient use of the available bandwidth, as has been seen through the development of each new technology generation.