The document discusses network layer services and performance. It describes the key services: packetizing, routing and forwarding, and other services such as error control, flow control, and congestion control. It then discusses the delays that determine network layer performance: transmission delay, propagation delay, processing delay, and queuing delay. The total delay a packet experiences is the sum of these delays across all the devices and routers between source and destination.
Introduction, Virtual and Datagram networks, study of router, IP protocol and addressing in the Internet, Routing algorithms, Broadcast and Multicast routing
Network Layer Services
1. Packetizing
2. Routing and Forwarding
3. Other Services
Expected services are
■ Error Control
■ Flow Control
■ Congestion Control
■ Quality of Service
■ Security
1. Packetizing
Packetizing: encapsulating the payload (data received from the
upper layer) in a network-layer packet at the source and
decapsulating the payload from the network-layer packet at the
destination.
The network layer carries the payload from the source to the
destination without changing it or using it.
If a packet is too large for a link, it is fragmented by the
intermediate routers.
All fragments carry the same header as the original packet
(in particular the source and destination addresses), with small
changes to identify each fragment.
The fragments are reassembled at the destination.
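The fragmentation-and-reassembly idea described above can be sketched in a few lines of Python. This is only an illustration of copying the header into each fragment and marking fragment order; the field names (`offset`, `more`) are simplified stand-ins, not the actual IPv4 header format.

```python
# Minimal sketch of network-layer fragmentation and reassembly.
# Header fields (src, dst, offset, more) are illustrative, not real IPv4 fields.

def fragment(header, payload, mtu):
    """Split payload into mtu-sized pieces; each copies the original header."""
    fragments = []
    for offset in range(0, len(payload), mtu):
        frag_header = dict(header, offset=offset,
                           more=(offset + mtu < len(payload)))
        fragments.append((frag_header, payload[offset:offset + mtu]))
    return fragments

def reassemble(fragments):
    """Destination puts the fragments back in order using the offset field."""
    ordered = sorted(fragments, key=lambda f: f[0]["offset"])
    return b"".join(data for _, data in ordered)

frags = fragment({"src": "10.0.0.1", "dst": "10.0.0.2"},
                 b"hello network layer", mtu=8)
assert reassemble(frags) == b"hello network layer"
```

Note that every fragment repeats the source and destination addresses, so each can be routed independently; only the destination reassembles.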
2. Routing and Forwarding
Routing: in a large network, there are a number of routes
between the source and destination devices.
The network layer finds the best route based on specific
strategies (load, bandwidth, hop count, etc.).
The strategies are mostly defined by the routing
protocols.
The strategies are used to build a decision-making
table, called the routing table, in each router.
Routing is applying these strategies and running routing
protocols to create the decision-making tables for each
router.
Forwarding
The action applied by a router when a packet arrives at
one of its interfaces.
Forwarding is done with the help of a forwarding table or
routing table.
On receiving a packet at an interface, the router reads
the destination address/label in the incoming packet,
finds the output interface number from the table, and
forwards the packet.
The packet may be forwarded:
■ to another attached network (in unicast routing), or
■ to several attached networks (in multicast routing).
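As a rough illustration of the table lookup described above, the sketch below uses Python's `ipaddress` module and a hypothetical three-entry forwarding table. Real routers perform longest-prefix matching over far larger tables using specialized data structures; this only shows the decision logic.

```python
import ipaddress

# Hypothetical forwarding table: (destination prefix, output interface).
table = [
    (ipaddress.ip_network("192.168.1.0/24"), "eth1"),
    (ipaddress.ip_network("192.168.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),   # default route
]

def forward(dst):
    """Return the output interface whose prefix is the longest match for dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(forward("192.168.1.7"))   # eth1 (most specific matching prefix)
print(forward("8.8.8.8"))       # eth0 (only the default route matches)
```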
3. Other Services
Other services expected from this layer are:
Error Control
Flow Control
Congestion Control
Quality of Service
Security
Error Control: packets in the network layer may be
fragmented at routers, which makes error checking at
this layer inefficient.
A checksum field in the datagram catches corruption in
the header, but not in the whole datagram.
Flow Control
The network layer does not directly provide flow control,
because:
1. It keeps the network layer at the receiver simple.
2. The upper layers can implement buffers to receive
data from the network layer.
3. Flow control is already provided by most upper-layer
protocols, so another level of flow control would make
the network layer more complicated and less efficient.
Congestion Control
■ Congestion: Too many datagrams are present in an area
of the internet.
■ Happens if the number of datagrams sent by source
computers is beyond the capacity of the network or
routers.
■ Hence, some routers may drop some of the datagrams.
■ Due to the error control mechanism at the upper layers,
the sender may send duplicates of the lost packets.
■ If the congestion continues, sometimes the system
collapses and no datagrams are delivered.
Quality of Service (QoS)
■ To keep the network layer simple and untouched,
QoS is implemented in the upper layers.
Security
■ The network layer was created with no security
provision.
■ To make the network layer secure, a connection-oriented
virtual-layer service (called IPSec) was created.
NETWORK LAYER PERFORMANCE
● Can be measured in terms of:
○ Delay
○ Throughput
○ Packet loss
● Congestion control also improves performance.
Delay
A packet, from its source to its destination,
encounters delays.
These can be subdivided into:
i. Transmission delay
ii. Propagation delay
iii. Processing delay
iv. Queuing delay.
v. Total Delay
1.(i) Transmission Delay
● A sender puts the bits of a packet on the line one by one.
● If the first bit of the packet is put on the line at time t1 and the last bit is
put on the line at time t2, the transmission delay of the packet is (t2 − t1).
○ Delaytr = (Packet length) / (Transmission rate).
● The longer the packet, the longer the transmission delay.
● Eg: For a Fast Ethernet LAN (100 million bits/sec) with a packet size of
10,000 bits, the transmission delay is (10,000)/(100,000,000) s, or 100
microseconds.
1.(ii) Propagation Delay
● The time taken for a bit to travel from point A to point B in the
transmission medium.
○ Delaypg = (Distance) / (Propagation speed).
● Eg: If the distance of a cable link in a point-to-point WAN is 2000 meters
and the propagation speed of the bits in the cable is 2 × 10^8
meters/second, then the propagation delay is 10 microseconds.
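The two worked examples above (transmission delay on Fast Ethernet, propagation delay on a 2000 m cable) can be checked directly from the formulas Delaytr = L/R and Delaypg = d/s:

```python
# Transmission delay: packet length divided by transmission rate.
packet_bits = 10_000
rate_bps = 100_000_000             # Fast Ethernet, 100 Mbps
delay_tr = packet_bits / rate_bps  # 1e-4 s

# Propagation delay: link distance divided by propagation speed.
distance_m = 2_000
speed_mps = 2e8                    # propagation speed of bits in the cable
delay_pg = distance_m / speed_mps  # 1e-5 s

print(round(delay_tr * 1e6), "us")  # 100 us
print(round(delay_pg * 1e6), "us")  # 10 us
```

Note that the two delays are independent: transmission delay depends on packet size and link rate, while propagation delay depends only on distance and the medium.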
1.(iii) Processing Delay
The time required for a router or a destination host to receive a packet
from its input port, remove the header, perform an error
detection procedure, and deliver the packet to the output port (in the case
of a router) or to the upper-layer protocol (in the case of
the destination host).
● May differ for each packet, but is normally calculated as an
average.
○ Delaypr = Time required to process a packet in a router or a
destination host.
1.(iv) Queuing Delay
● Happens in a router.
● A router has an input queue connected to each of its input ports to store
packets waiting to be processed.
● It also has an output queue connected to each of its output ports to store
packets waiting to be transmitted.
● Queuing delay is the time a packet waits in the input queue and output
queue of a router.
○ Delayqu = The time a packet waits in input and output queues in a router.
1.(v) Total Delay
● The total (source-to-destination) delay a packet encounters is the
sum of all the above delays in all the devices and routers that the packet
passes through between the source and destination, including both ends.
● If there are n routers in between:
○ Total delay = (n + 1)(Delaytr + Delaypg + Delaypr) + (n)(Delayqu)
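A minimal sketch of the total-delay formula: with n routers on the path there are (n + 1) transmitting devices (the source plus each router), so transmission, propagation, and processing delays occur (n + 1) times, while queuing happens only at the n routers. The per-hop values below are illustrative, not taken from the slides.

```python
# Total delay = (n + 1) * (tr + pg + pr) + n * qu, where n is the
# number of routers between the source and destination.

def total_delay(n, tr, pg, pr, qu):
    """All delay arguments in seconds; returns total delay in seconds."""
    return (n + 1) * (tr + pg + pr) + n * qu

# Illustrative per-hop values: 3 routers, 100 us transmission,
# 10 us propagation, 50 us processing, 200 us queuing per router.
t = total_delay(n=3, tr=100e-6, pg=10e-6, pr=50e-6, qu=200e-6)
print(round(t * 1e3, 3), "ms")  # 1.24 ms
```

Here the queuing term dominates, which matches the usual situation in congested networks: queuing delay is the most variable component and grows sharply as traffic approaches link capacity.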