The document discusses various data link layer protocols. It begins by introducing stop-and-wait and sliding window protocols. It then provides an example of a stop-and-wait protocol where a frame is lost, leading the sender to retransmit a duplicate frame. Next, it discusses sliding window protocols and provides an example where the window allows multiple outstanding frames. Finally, it gives an example of a one-bit sliding window protocol that uses acknowledgments to control the window.
The data link layer, or layer 2, is the second layer of the seven-layer OSI model of computer networking. This layer is the protocol layer that transfers data between adjacent network nodes in a wide area network (WAN) or between nodes on the same local area network (LAN) segment.
Carrier-sense multiple access with collision detection (CSMA/CD) is a media access control method used most notably in early Ethernet technology for local area networking. It uses carrier-sensing to defer transmissions until no other stations are transmitting.
Network layer design issues: store-and-forward packet switching; services provided to the transport layer; which service is best; implementation of connectionless service; implementation of connection-oriented service.
The network layer is responsible for routing packets from the source to the destination. The routing algorithm is the piece of software that decides where a packet goes next (e.g., which output line, or which node on a broadcast channel). For connectionless networks, the routing decision is made for each datagram. For connection-oriented networks, the decision is made once, at circuit setup time.
Routing Issues
The routing algorithm must deal with the following issues:
Correctness and simplicity: networks are never taken down; individual parts (e.g., links, routers) may fail, but the whole network should not.
Stability: if a link or router fails, how much time elapses before the remaining routers recognize the topology change? (Some never do.)
Fairness and optimality: an inherently intractable problem. The definition of optimality usually doesn't consider fairness. Do we want to maximize channel usage? Minimize average delay?
When we look at routing in detail, we'll consider both adaptive algorithms (those that take current traffic and topology into consideration) and nonadaptive algorithms.
In the seven-layer OSI model of computer networking, the media access control (MAC) protocol is a sublayer of the data link layer (layer 2). The MAC sublayer provides addressing and channel access control mechanisms that make it possible for several terminals or network nodes to communicate within a multiple access network that incorporates a shared medium, e.g. an Ethernet network. The hardware that implements the MAC is referred to as a media access controller.
The MAC sublayer acts as an interface between the logical link control (LLC) sublayer and the network's physical layer. The MAC layer emulates a full-duplex logical communication channel in a multi-point network. This channel may provide unicast, multicast or broadcast communication service.
Data link layer
1. Data Link Layer Protocols
   1. Introduction
   2. DLL Design
      a. Network Layer Services
      b. Error Control
      c. Flow Control
   3. Elementary Data Link Protocols
      a. Stop-and-Wait Protocol
      b. Simplex Protocol for a Noisy Channel; Time-out
      c. Sliding Window Protocols
      d. Sliding-Window Flow Control
      e. A One-Bit Sliding Window
      f. A Protocol Using Go-Back-N
      g. Selective Reject
   4. High-Level Data Link Control
      a. HDLC Operation
      b. HDLC Protocol
   5. The Internet Protocol
      a. PPP, the Point-to-Point Protocol
(T. 183-229; 234-246)
2. Data/control exchanged via protocols
A human protocol and a computer network protocol follow the same request/response pattern:
• Human protocol: "Hi" → "Hi" → "Got the time?" → "2:00"
• Network protocol: TCP connection req → TCP connection response → GET http://www.awl.com/kurose-ross → <file>
(Messages flow downward in time in the diagram.)
3. Data Link Layer
The link layer sits between the network and physical layers in the stack (application, transport, network, link, physical).
Requirements and objectives:
• Maintain and release the data link
• Frame synchronization
• Error control
• Flow control
• Addressing
• Link management
DLL functions:
• Providing a service interface to the network layer.
• Dealing with transmission (circuit) errors.
• Regulating the flow of data.
• Data transfer between neighboring network elements.
4. Link Layer: Introduction
Some terminology:
• Hosts, bridges, switches and routers are nodes.
• Communication channels that connect adjacent nodes along the communication path are links: wired links, wireless links, LANs.
• A frame encapsulates a datagram.
The data link layer has the responsibility of transferring a datagram from one node to an adjacent node over a data link.
5. "Packet" and "Frame" relationship
The network layer on the sending machine hands a packet to the DLL; the DLL wraps it in a frame (header, payload field, trailer), and on the receiving machine the DLL unwraps the frame and passes the packet up to its network layer.
In some cases, the functions of error control and flow control are allocated to transport or other upper-layer protocols and not to the DLL, but the principles are pretty much the same.
6. Protocol layering and data
Each layer takes data from above:
• adds header information to create a new data unit
• passes the new data unit to the layer below
At the source, a message M gains a transport header (Ht), then a network header (Hn), then a link header (Hl); the destination strips them in reverse order. The data units are named per layer: message (application), segment (transport), datagram (network), frame (link).
8. List of the DLL requirements
• Frame synchronization. Data are sent in blocks called frames. The beginning and end of each frame must be recognized.
• Flow control. The sending station must not send frames at a rate faster than the receiving station can absorb them.
• Error control. Any bit errors introduced by the transmission system must be checked and corrected.
• Addressing. On a multipoint line, such as a LAN, the identity of the two stations involved in a transmission must be specified.
• Link management. The initiation, maintenance, and termination of a data exchange require a fair amount of coordination and cooperation among stations.
9. Services to the Network Layer (NL)
• The DLL processes data transfer using a data link protocol.
• The actual services can vary from system to system. Three reasonable services to the NL are:
  1. Unacknowledged connectionless service.
  2. Acknowledged connectionless service.
  3. Acknowledged connection-oriented service.
10. Unacknowledged connectionless service
• The source machine sends frames to the destination machine without having the destination machine acknowledge them.
• No logical connection is established beforehand or released afterward.
• If a frame is lost due to noise on the line, no attempt is made to detect the loss or recover from it in the DLL.
• This class of service is appropriate when the error rate is very low, so that recovery is left to the higher layers.
• It is also appropriate for real-time traffic, such as voice, in which late data are worse than bad data.
• Most LANs use unacknowledged connectionless service in the DLL.
11. Acknowledged connectionless service
• Is more reliable.
• Still no logical connections are used, but each frame sent is individually acknowledged.
• The sender knows whether a frame has arrived correctly.
• If it has not arrived within a specified time interval, it can be sent again.
• This service is useful over unreliable channels, such as wireless systems.
• If a large packet is broken up into frames and the individual frames are acknowledged and retransmitted, entire packets get through much faster than if a single unbroken frame is lost, in which case it may take a very long time for the packet to get through.
12. Acknowledged connection-oriented service
• The service requires an established connection between source and destination machines before data are transferred.
• Every frame sent over the connection is numbered, and the DLL guarantees that each frame sent is received, and received in the right order.
• With connectionless service, in contrast, it is possible that a lost acknowledgement causes a packet to be sent several times and thus received several times.
• When connection-oriented service is used, transfers go through three distinct phases:
  1. The connection is established, along with the counters needed to keep track of which frames have been received and which ones have not.
  2. One or more frames are transmitted and acknowledged.
  3. The connection is released, freeing up the variables, buffers, and other resources used to maintain the connection.
13. Link Layer Job
Framing:
• Encapsulate the datagram into a frame, adding a header and trailer.
Error detection:
• Errors are caused by signal attenuation and noise.
• The receiver detects the presence of errors and signals the sender for retransmission, or drops the frame.
• Two types of errors: lost frame, damaged frame.
Error correction:
• The receiver identifies and corrects bit errors without retransmission.
14. Example: a WAN subnet
Consisting of routers connected by point-to-point leased telephone lines:
1. When a frame arrives at a router, the hardware checks it for errors and passes the frame to the DLL software (which might be embedded in a chip on the network interface board).
2. The DLL software checks to see if it is the frame expected.
3. If so, it gives the packet (contained in the payload field) to the routing software.
4. The routing software then chooses the appropriate outgoing line and passes the packet back down to the DLL software, which then transmits it.
15. Techniques for error control
• Error detection.
• Positive acknowledgment.
• Retransmission after time-out.
• Negative acknowledgment and retransmission.
These four mechanisms are collectively referred to as Automatic Repeat reQuest (ARQ); the effect of ARQ is to turn an unreliable data link into a reliable one.
Three standardized versions of ARQ:
• Stop-and-wait ARQ
• Go-back-N ARQ
• Selective-reject ARQ
16. Link Layer Job (cont.)
Flow control: two approaches are commonly used.
1. Feedback-based flow control: the receiver sends back information to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing. "You may send me n frames now, but after they have been sent, do not send any more until I have told you to continue."
2. Rate-based flow control: the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without feedback from the receiver. Since rate-based schemes are not used in the data link layer, they are not considered further here.
17. Elementary Data Link Protocols
Assumptions:
1. The DLL and network layer are independent processes that communicate by passing messages back and forth through the physical layer.
2. a. Machine A wants to send a long stream of data to machine B, using a reliable, connection-oriented service.
   b. We will also consider the case where B wants to send data to A simultaneously. A is assumed to have data ready to send.
3. Machines do not crash.
18. Protocol 1: Stop-and-Wait Protocol
• A protocol in which the sender sends one frame and then waits for an ACK: stop-and-wait.
• Δt (timeout); damaged ACK; ACK0, ACK1.
• Bidirectional information transfer.
• Half-duplex physical channel.
• It is often the case that a source will break up a large block of data into smaller blocks and transmit the data in many frames. Reasons:
  1. The buffer size of the receiver may be limited.
  2. The larger the transmission, the more errors. With smaller frames, errors are detected sooner, and a smaller amount of data needs retransmission.
  3. On a shared medium (LAN), it is usually desirable not to permit one station to occupy the medium for an extended period, as this causes long delays at the other sending stations.
19. Stop-and-Wait ARQ
Timeline between sender A and receiver B:
• Normal operation: A sends Frame 0, B answers ACK1; A sends Frame 1, B answers ACK0; the ACK names the next frame expected.
• Frame lost: A's timeout expires with no ACK, so A retransmits Frame 0.
• ACK1 lost: A times out and retransmits Frame 0; B, seeing the same sequence number again, discards the duplicate frame.
20. How to prevent the sender from flooding the receiver?
• Δt must cover from_physical_layer plus to_network_layer processing.
• Errors:
  - Damaged frame: error detection at the receiver; only correctly received frames are acknowledged (copies are maintained at the sender).
  - Damaged ACK: time-out plus retransmission produces a duplicate frame; frame labeling (0/1) lets the receiver discard it. Positive ACK0 means "ready for frame 1"; ACK1 means "ready for frame 0".
  - Lost frame: a timer is started per frame; on time-out the frame is resent (copies are maintained).
21. Stop-and-wait link utilization
(Transmission time = 1; propagation delay = α.) The timing diagrams compare the two regimes: (a) α > 1, where the link is underutilized because each frame's round trip takes longer than its transmission, and (b) α < 1, where the link is still inefficiently utilized because the sender idles between each frame and its ACK.
22. Protocol 2: Simplex Protocol for a Noisy Channel; Time-out
• Data are transmitted in one direction only (simplex channel) over a channel that makes errors. Frames may be either damaged or lost completely.
• The stop-and-wait protocol will work once a timer is added:
  a. The sender sends a frame, but the receiver only sends an ACK frame if the data were correctly received.
  b. If a damaged frame arrives at the receiver, it is discarded.
  c. After a while the sender times out and sends the frame again. This process is repeated until the frame finally arrives intact.
• A 1-bit sequence number (0 or 1) distinguishes a frame from its retransmission.
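The retry loop in steps a-c can be sketched in Python. The channel model here (`unreliable_send`, the loss rate, and encoding the ACK as the next expected sequence bit) is an illustrative assumption, not part of the protocol specification:

```python
import random

TIMEOUT_TRIES = 50  # give up eventually so the sketch terminates

def unreliable_send(frame, loss_rate=0.3):
    """Hypothetical channel: returns the ACK bit, or None if the frame
    or its ACK was lost/damaged (modelled as a single loss event)."""
    if random.random() < loss_rate:
        return None                      # sender will time out
    return 1 - frame["seq"]              # ACK names the NEXT expected seq

def stop_and_wait_send(payloads):
    """Send each payload in its own frame, alternating a 1-bit seq number."""
    seq = 0
    for data in payloads:
        frame = {"seq": seq, "data": data}
        for _ in range(TIMEOUT_TRIES):
            ack = unreliable_send(frame)
            if ack == 1 - seq:           # receiver is ready for the other seq
                break                    # acknowledged; move on
            # else: time-out or damaged ACK -> resend the SAME frame
        else:
            raise RuntimeError("link down: no ACK after many retries")
        seq = 1 - seq                    # flip the sequence bit
    return "done"
```

The sequence bit is what lets the receiver discard the duplicate produced by a lost ACK, exactly as in the slide's scenario.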
23. TCP Round-Trip Time and Timeout
Q: How to set the TCP timeout value?
• Too short: premature timeout and unnecessary retransmissions.
• Too long: slow reaction to loss and wasted time.
Q: How to estimate the RTT?
• SampleRTT: measured time from segment transmission until ACK receipt (ignoring retransmissions).
• SampleRTT will vary, so we want the estimated RTT to be "smoother": average several recent measurements, not just the current SampleRTT.
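The usual smoothing scheme is the exponentially weighted moving average that TCP itself uses (constants from RFC 6298); a minimal sketch:

```python
ALPHA = 0.125   # weight of the newest sample (RFC 6298 suggests 1/8)
BETA = 0.25     # weight for the deviation update (1/4)

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    """Fold one new SampleRTT into the smoothed estimate and deviation."""
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    return estimated_rtt, dev_rtt

def timeout_interval(estimated_rtt, dev_rtt):
    """Retransmission timeout: the estimate plus a margin of 4 deviations."""
    return estimated_rtt + 4 * dev_rtt
```

For example, with EstimatedRTT = 100 ms, DevRTT = 10 ms, and a new sample of 140 ms, the estimate moves only an eighth of the way toward the sample, so one outlier cannot whipsaw the timeout.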
24. Fast Retransmit
• The time-out period is often relatively long, causing a long delay before a lost packet is resent.
• Instead, detect the lost frame via duplicate ACKs: the sender often sends many frames back-to-back, and if one frame is lost there will likely be many duplicate ACKs.
• If the sender receives 3 duplicate ACKs for the same data, it presumes that the frame after the ACKed data was lost and performs a fast retransmit: it resends the frame immediately, before the timer expires.
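The triple-duplicate-ACK rule can be sketched as a small scan over the ACK stream; the function name and the list-of-ACK-numbers representation are illustrative assumptions:

```python
DUP_ACK_THRESHOLD = 3  # the third duplicate ACK triggers fast retransmit

def fast_retransmit_trigger(acks):
    """Scan a stream of cumulative ACK numbers; return the sequence
    numbers that would be fast-retransmitted (the frame starting at a
    thrice-duplicated ACK value)."""
    retransmitted = []
    last_ack, dup_count = None, 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                retransmitted.append(ack)   # resend the frame at `ack`
        else:
            last_ack, dup_count = ack, 0
        # (a real sender would also reset dup_count after retransmitting)
    return retransmitted
```

Note the counting convention: the original ACK plus three repeats of the same value make "3 duplicate ACKs", so four identical values in a row fire the retransmit.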
25. Protocol scenario (why sequence numbers are needed):
1. The network layer on A gives packet 1 to its DLL. The packet is correctly received at B and passed to the network layer on B. B sends an ACK frame back to A.
2. The ACK frame gets lost completely. It just never arrives at all.
3. The DLL on A times out. Not having received an ACK, it (incorrectly) assumes that its data frame was lost or damaged and sends the frame containing packet 1 again.
4. The duplicate frame also arrives at the DLL on B perfectly and is unwittingly passed to the network layer there. If A is sending a file to B, part of the file will be duplicated (i.e., the copy of the file made by B will be incorrect and the error will not have been detected). In other words, the protocol will fail.
26. Sliding Window
• A better idea is to use the duplex channel.
• Data frames from A to B are intermixed with acknowledgment frames from B to A. By looking at the kind field in the header of an incoming frame, the receiver can tell whether the frame is data or an ACK.
• Station B has buffer space for n frames. Thus B can accept n frames, and A is allowed to send n frames without waiting for any ACK.
• With a 3-bit field, the sequence number ranges from 0 to 7; in general, with a k-bit field, from 0 through 2^k − 1.
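The wrap-around sequence-number arithmetic behind the window can be sketched as follows; the window size and the helper name `in_window` are illustrative assumptions:

```python
SEQ_BITS = 3
MOD = 1 << SEQ_BITS          # sequence numbers run 0 .. MOD-1 (here 0..7)

def in_window(base, size, seq):
    """True if `seq` falls inside the window [base, base+size) modulo MOD."""
    return (seq - base) % MOD < size

# Example: window size 4 with oldest outstanding frame 6 covers
# sequence numbers 6, 7, 0, 1 -- but not 2.
```

The modulo makes the window wrap naturally past 7 back to 0, which is why a fixed k-bit field suffices no matter how many frames are sent.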
27. Sliding Window (cont.)
The sliding-window diagrams lay the sequence numbers 0..7 out in a circle:
(a) Transmitter's perspective: frames already acknowledged lie behind the window; the window holds the frames that may be transmitted. The window shrinks from the trailing edge as frames are sent and expands from the leading edge as acknowledgments are received.
(b) Receiver's perspective: frames already received lie behind the window; the window holds the frames that may be accepted. The window shrinks from the trailing edge as frames are received and expands from the leading edge as acknowledgments are sent.
Multiple frames in flight form a pipeline.
29. Protocol 4 example: One-Bit Sliding Window (piggybacking)
Two scenarios; the notation is (seq, ack, packet number), and an asterisk indicates where a network layer accepts a packet.
(a) Normal case:
  A: sends (0,1,A0); gets (0,0,B0)*; sends (1,0,A1); gets (1,1,B1)*; sends (0,1,A2); gets (0,0,B2)*; sends (1,0,A3)
  B: gets (0,1,A0)*; sends (0,0,B0); gets (1,0,A1)*; sends (1,1,B1); gets (0,1,A2)*; sends (0,0,B2); gets (1,0,A3)*; sends (1,1,B3)
(b) Abnormal case (A's time-out is too short, so both sides transmit simultaneously):
  A: sends (0,1,A0); gets (0,1,B0)*; sends (0,0,A0); gets (0,0,B0); sends (1,0,A1); gets (1,0,B1)*; sends (1,1,A1)
  B: sends (0,1,B0); gets (0,1,A0)*; sends (0,0,B0); gets (0,0,A0); sends (1,1,B1); gets (1,0,A1)*; sends (1,1,B1); gets (1,1,A1); sends (0,1,B2)
30. Protocol 5: A Protocol Using Go-Back-N
• Motivation: efficient use of the bandwidth.
• Example: a 50-kbps satellite channel with a 500-msec round-trip delay, sending 1000-bit frames. At t = 0 msec the frame transmission starts; at t = 20 msec it has been fully sent; at t = 270 msec the frame has fully arrived at the receiver; at t = 520 msec the ACK reaches the sender. So the sender was blocked during 500/520, or 96%, of the time; only 4% of the bandwidth was used.
• The solution: the sender transmits up to w frames before blocking, instead of just 1 frame.
• In the example, w should be at least 26. The sender begins sending frame 0 as before; just as it finishes sending 26 frames, at t = 520 msec, the ACK for frame 0 arrives. Thereafter an ACK arrives every 20 msec (pipelining), so the sender always gets permission to continue when it needs it.
31. Protocol 5: Go-Back-N (cont.)
• If the channel capacity is b bits/sec, the frame size l bits, and the round-trip propagation time R sec, the time required to transmit a single frame is l/b sec. After the last bit of a data frame has been sent, there is a delay of R/2 before that bit arrives at the receiver and another delay of at least R/2 for the ACK to come back, for a total delay of R.
• In stop-and-wait the line is busy for l/b and idle for R, giving:
  line utilization = l / (l + bR),
  which for the example above is about 4%.
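The formula can be checked numerically against the slide's satellite example (l = 1000 bits, b = 50 kbps, R = 0.5 s):

```python
def stop_and_wait_utilization(l_bits, b_bps, rtt_s):
    """Fraction of time the line carries data: busy for l/b out of l/b + R,
    i.e. l / (l + b*R) after multiplying through by b."""
    return l_bits / (l_bits + b_bps * rtt_s)

u = stop_and_wait_utilization(1000, 50_000, 0.5)
# 1000 / (1000 + 25000) = 1/26, roughly 3.8% -- the "4%" on the slide
```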
32. Protocol 5: Go-Back-N error recovery
The figure contrasts error recovery when:
(a) the receiver's window size is 1 (go-back-N): after an error in frame 2, the subsequent out-of-order frames are discarded by the DLL; when the time-out interval expires, the sender goes back and retransmits from frame 2 onward.
(b) the receiver's window size is large (selective repeat): the receiver sends NAK 2 while buffering the subsequent correct frames; the sender retransmits only frame 2, after which the buffered frames are delivered in order.
33. Protocol 5: Go-Back-N sender rules
• k-bit sequence number in the packet header.
• A "window" of up to N consecutive unACKed packets is allowed.
• ACK(n) acknowledges all packets up to and including sequence number n: a "cumulative ACK". The sender may receive duplicate ACKs (see receiver).
• A timer runs for each in-flight packet.
• timeout(n): retransmit packet n and all higher-sequence-number packets in the window.
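The sender rules above can be sketched as pure bookkeeping; the class name, window size, and the abstracted-away channel I/O are illustrative assumptions:

```python
from collections import deque

SEQ_BITS = 3
MOD = 1 << SEQ_BITS          # sequence space 0..7
N = 4                        # window size (kept below MOD)

class GoBackNSender:
    """Bookkeeping side of a go-back-N sender; transmission is abstract."""

    def __init__(self):
        self.base = 0            # oldest unacknowledged sequence number
        self.next_seq = 0        # next sequence number to use
        self.unacked = deque()   # (seq, data) frames in flight, oldest first

    def can_send(self):
        return (self.next_seq - self.base) % MOD < N

    def send(self, data):
        assert self.can_send(), "window full: sender must block"
        self.unacked.append((self.next_seq, data))
        self.next_seq = (self.next_seq + 1) % MOD
        # ...hand the frame to the physical layer here...

    def on_ack(self, n):
        """Cumulative ACK: frames up to and including n are done."""
        if (n - self.base) % MOD >= N:
            return               # old/duplicate ACK outside the window: ignore
        while self.unacked and \
                (self.unacked[0][0] - self.base) % MOD <= (n - self.base) % MOD:
            self.unacked.popleft()
        self.base = (n + 1) % MOD

    def on_timeout(self):
        """Go back N: every unACKed frame is retransmitted, in order."""
        return [seq for seq, _ in self.unacked]
```

A single cumulative ACK can therefore retire several frames at once, and a time-out resends the whole outstanding window rather than one frame.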
35. Protocol 7: High-Level Data Link Control
High-Level Data Link Control (HDLC) and its relatives:
• Synchronous Data Link Control (SDLC)
• Link Access Procedure for the D Channel (LAPD)
• Advanced Data Communication Control Procedure (ADCCP)
• Link Access Procedure (LAP)
These protocols are all based on the same principles.
36. 36
Pr.7. HDLC Frame Format
Flag   | Address   | Control   | Data            | CRC       | Flag
8 bits | 8/16 bits | 8/16 bits | variable length | 8/16 bits | 8 bits

Bit oriented; bit stuffing.
The master sends commands; the slave sends responses.
Flag- synchronization.
Address- address of the secondary station.
Control- keep track of transmitted and received frames for
acknowledgment and flow control.
CRC- contains a checksum to ensure data integrity.
Flag- used to signal the end of a frame, and possibly the start
of the next frame.
37. 37
Pr.7. High-Level Data Link Control
• Three kinds of control fields:
a. Information
b. Supervisory
c. Unnumbered.
Bits:              1     3      1     3
(a) Information:   0  | Seq  | P/F | Next
(b) Supervisory:  1 0 | Type | P/F | Next
(c) Unnumbered:   1 1 | Type | P/F | Modifier
The protocol uses a sliding
window, with a 3-bit sequence
number. Up to seven
unacknowledged frames may be
outstanding at any instant.
For the ACK, the Next field carries the number of the first
frame not yet received (i.e., the next frame expected).
P/F means Poll/Final: P polls the secondary for data,
F marks the final frame of the response.
The Supervisory types include: REJECT (a NAK),
RECEIVE NOT READY (RNR), and SELECTIVE REJECT
(retransmit only the specified frame).
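The control-field layouts above can be decoded mechanically. A minimal sketch for the basic 8-bit control field, following the simplified layout on this slide (bit 0 taken as the first-shown bit); the function name and returned tuples are illustrative:

```python
def parse_control(c: int):
    """Decode an 8-bit HDLC control field per the slide's layout."""
    pf = (c >> 4) & 0x1                  # Poll/Final bit
    if c & 0x01 == 0:
        # (a) Information: 0 | Seq | P/F | Next
        return ("I", {"seq": (c >> 1) & 0x7, "pf": pf, "next": (c >> 5) & 0x7})
    if c & 0x03 == 0b01:
        # (b) Supervisory: 1 0 | Type | P/F | Next
        return ("S", {"type": (c >> 2) & 0x3, "pf": pf, "next": (c >> 5) & 0x7})
    # (c) Unnumbered: 1 1 | Type | P/F | Modifier
    return ("U", {"type": (c >> 2) & 0x3, "pf": pf, "mod": (c >> 5) & 0x7})
```

For instance, the byte 0b01000010 decodes as an Information frame with Seq = 1 and Next = 2, i.e., it carries data and piggybacks an acknowledgment at the same time.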
38. 38
Pr.7. High-Level Data Link Control
• Different supervisory frame types use different ACKs:

Type 1, REJECT: a transmission error has been
detected. Used for a frame received with an error.
Type 2, RECEIVE NOT READY: acknowledges all frames
up to, but not including, Next; stop sending.
Used when the receiver has problems, e.g., a
shortage of buffers.
Type 3, SELECTIVE REJECT: retransmission of only
the frame specified. Used when the sender's window
size is half or less of the sequence space.
39. 39
A Network Layer in the Internet
[Figure: The Internet as a collection of interconnected
networks: a U.S. backbone and a European backbone connected
by leased lines to Asia, with regional networks, an IP
Ethernet LAN, and an IP token ring LAN attached to them.]
40. 40
Data Link Layer in the Internet
[Figure: A subnet of routers connecting a host and a home PC
through a service provider.]
41. 41
Data Link Layer in the Internet
[Figure: The PPP situation: a home personal computer acting
as an Internet host. A client process using TCP/IP runs on
the PC in the user's home; a modem and a dial-up telephone
line carry a TCP/IP connection using PPP to modems and a
router (running the routing process) at the Internet
provider's office.]
42. 42
Pr.8. PPP - The Point-to-Point
Protocol
PPP provides three features:
• A framing method that unambiguously marks the
end of one frame and the start of the next one,
and that handles error detection.
• A link control protocol for bringing lines up, testing them,
negotiating options, and bringing them down again when they
are no longer needed. This protocol is called LCP (Link
Control Protocol). It supports synchronous and
asynchronous circuits and byte-oriented and bit-oriented
encodings.
• A way to negotiate network-layer options in a way that is
independent of the network layer protocol to be used. The
method chosen is to have a different NCP (Network Control
Protocol) for each network layer supported.
43. 43
Pr.8. PPP- Steps
1. The PC calls the provider’s router via a modem.
2. The router’s modem answers the phone and
establishes a physical connection.
3. The PC sends the router a series of LCP
packets in the payload field of one or more
PPP frames.
• These packets and their responses select the PPP
parameters to be used.
• Once the parameters have been agreed upon, a series of
Network Control Protocol packets are sent to configure the
network layer.
• Typically, the PC wants to run a TCP/IP protocol stack, so it
needs an IP address.
44. 44
Difference between PPP and HDLC
High-Level Data Link Control frame (bit oriented):

Flag     | Address | Control | Data     | CRC  | Flag
01111110 | 8/16    | 8/16    | variable | 8/16 | 01111110
8 bits   | bits    | bits    | length   | bits | 8 bits

PPP frame (byte oriented):

Flag     | Address  | Control  | Protocol | Payload  | Checksum | Flag
01111110 | 11111111 | 00000011 | 1 or 2   | variable | 2 or 4   | 01111110
1 byte   | 1 byte   | 1 byte   | bytes    |          | bytes    | 1 byte
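Because PPP is byte oriented, it cannot use HDLC's bit stuffing; instead it escapes any flag bytes that appear in the payload. A minimal sketch of the standard PPP scheme (flag byte 0x7E, escape byte 0x7D, escaped bytes XORed with 0x20, per RFC 1662); the function names are illustrative:

```python
FLAG, ESC = 0x7E, 0x7D

def stuff(payload: bytes) -> bytes:
    """Frame a payload: escape flag/escape bytes, add flag delimiters."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape byte, then flip bit 5
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Reverse the stuffing: strip the flags and undo the escapes."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)
```

After stuffing, the byte 0x7E never appears between the two delimiting flags, so the receiver can find frame boundaries unambiguously.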
45. 45
Pr.8. PPP-Protocol field
• The Protocol field’s job is to tell what kind of
packet is in the Payload field.
• Codes are defined for LCP, NCP, IP, and other
protocols.
• Protocols starting with a 0 bit are network layer
protocols such as IP, IPX, and OSI CLNP.
• Those starting with a 1 bit are used to negotiate
other protocols. These include LCP and a different
NCP for each network layer protocol supported.
• The default size of the protocol field is 2 bytes, but it
can be negotiated down to 1 byte using LCP.
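The first-bit rule above can be expressed as a one-line test. A sketch assuming the default 2-byte Protocol field; the example values are the real PPP assignments for IP (0x0021) and LCP (0xC021):

```python
def is_network_layer(protocol: int) -> bool:
    """True if the 16-bit PPP Protocol value starts with a 0 bit."""
    return (protocol & 0x8000) == 0

# IP starts with a 0 bit (network layer); LCP starts with a 1 bit.
print(is_network_layer(0x0021))   # IP
print(is_network_layer(0xC021))   # LCP
```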
46. 46
PPP-summary
• PPP is a multiprotocol framing mechanism suitable for use
over: modems, HDLC, SONET, and other physical layers.
• It supports: error detection, option negotiation, and
header compression.
• DLL converts the raw bit stream (from physical layer) into a
stream of frames (for network layer).
• Various framing methods are used: character count, byte
stuffing, and bit stuffing.
• Data link protocols can provide: 1. Error control, to
retransmit damaged or lost frames. 2. Flow control, to
prevent a fast sender from overrunning a slow receiver.
• The sliding window mechanism is used to integrate error
control and flow control in a convenient way.
Editor's Notes
Piggybacking is the technique of temporarily delaying outgoing ACKs so that they can be hooked onto the next outgoing data frame.
Each data frame includes a field that holds the sequence number of that frame plus a field that holds the sequence number used for acknowledgment. Thus, if a station has both data and an acknowledgment to send, it sends them together in one frame.