The document discusses the history and development of the London Underground rail system. It describes how Charles Pearson envisioned a system of trains running through spacious, well-lit tunnels to connect the main railroad terminals in London. While an earlier plan involved gaslit streets underground for horse-drawn carriages, it was rejected due to safety concerns. Pearson's vision eventually led to the successful implementation of the London Underground railway system.
M2M systems layers and designs standardizations (FabMinds)
The document discusses standards and standardization bodies for Internet of Things (IoT) systems. The Internet Engineering Task Force (IETF), International Telecommunication Union (ITU-T), European Telecommunication Standards Institute (ETSI), and Open Geospatial Consortium (OGC) have all proposed standards and reference models for IoT layers, communication, and device/sensor capabilities. Specifically, ETSI defined domains and capabilities for machine-to-machine communication systems, while IETF, ITU-T, and OGC focused on network layers, transport protocols, and sensor discovery/metadata.
This document discusses various web communication protocols used for connectivity of devices over constrained and unconstrained environments. It describes protocols like CoAP, DTLS, JSON and TLV formats that allow small devices with limited resources to communicate securely over the web. CoAP is a specialized web transfer protocol that uses request/response model and supports resource discovery. DTLS provides security services like integrity, authentication and confidentiality for UDP-based applications. JSON and TLV are compact data formats used for message transmission.
PR-278: RAFT: Recurrent All-Pairs Field Transforms for Optical Flow (Hyeongmin Lee)
This paper received the Best Paper award at ECCV 2020. Unlike earlier methods, it predicts optical flow through iterative updates and achieves notably high accuracy.
paper link: https://arxiv.org/pdf/2003.12039.pdf
video link: https://youtu.be/OnZIDatotZ4
MULTIPLE CHOICE QUESTIONS WITH ANSWERS ON WIRELESS SENSOR NETWORKS (vtunotesbysree)
This document discusses wireless sensor networks and localization/tracking problems in sensor networks. It begins with key concepts in sensor networks like limited resources, distributed sensing, and canonical problems like localization and tracking. It then discusses approaches to localization and tracking like Bayesian estimation, Kalman filtering, and particle filtering. It highlights challenges like distributed representation of information and tracking multiple interacting targets. The document provides examples and explanations of fundamental concepts in collaborative signal and information processing for sensor networks.
This document discusses different types of switching fabrics used in computer networks, including time division switching, space division switching, and time-space switching. It provides examples of how each type works and their advantages and disadvantages. It also covers Clos networks and recursive constructions like Benes networks that enable building large non-blocking switches.
This document discusses different types of errors that can occur during data transmission and various error detection and correction techniques. It describes single-bit errors where one bit is changed and burst errors where multiple consecutive bits are changed. It then explains techniques like two-dimensional parity, checksums, and cyclic redundancy checks which add redundant bits to detect errors by checking for discrepancies between transmitted and received data. The document provides examples of how internet checksums and cyclic redundancy checks work to detect errors.
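The internet checksum mentioned above can be sketched in a few lines. This is a minimal illustration in the style of RFC 1071 (16-bit one's-complement sum, then complement); the message `b"networks"` and variable names are illustrative, not from the source.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, then complemented."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold end-around carry back in
    return ~total & 0xFFFF

msg = b"networks"
cs = internet_checksum(msg)
# A receiver recomputes the sum over the data plus the checksum field;
# a result of 0 means no error was detected.
ok = internet_checksum(msg + cs.to_bytes(2, "big")) == 0
```

The redundancy is visible here: the sender transmits two extra bytes, and any single-bit flip in data or checksum changes the recomputed value away from zero.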
This document provides an overview of associative memories and discrete Hopfield networks. It begins with introductions to basic concepts like autoassociative and heteroassociative memory. It then describes linear associative memory, which uses a Hebbian learning rule to form associations between input-output patterns. Next, it covers Hopfield's autoassociative memory, a recurrent neural network for associating patterns to themselves. Finally, it discusses performance analysis of recurrent autoassociative memories. The document presents key concepts in associative memory theory and different models like linear associative memory and Hopfield networks.
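The Hebbian rule and Hopfield recall described above fit in a short sketch. This is a toy pure-Python version with a single stored bipolar pattern and asynchronous sign updates; the pattern values and function names are made up for illustration.

```python
def hebbian_weights(patterns):
    """Hebbian learning: W[i][j] accumulates correlations between components
    of each stored bipolar (+1/-1) pattern; the diagonal stays zero."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, sweeps=5):
    """Asynchronous Hopfield update: each unit takes the sign of its net input."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
noisy = [-1, -1, 1, -1, 1, -1]   # first component flipped
W = hebbian_weights([stored])
```

Running `recall(W, noisy)` recovers `stored`, which is the autoassociative behavior the summary refers to: a corrupted pattern settles back to the pattern it was associated with.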
This document presents an agenda for a talk on Petri nets. It begins with an introduction to Petri nets that defines their structure, including places, transitions, tokens, and firing rules. It then discusses several analysis methods for Petri nets, including reachability trees, incidence matrices, and reduction rules. Next, it covers high-level Petri nets and colored Petri nets. The document concludes by mentioning an application of Petri nets to rumor detection and blocking in online social networks, and introduces orbital Petri nets as a promising approach.
Scheduling refers to allocating computing resources like processor time and memory to processes. In cloud computing, scheduling maps jobs to virtual machines. There are two levels of scheduling - at the host level to distribute VMs, and at the VM level to distribute tasks. Common scheduling algorithms include first-come first-served (FCFS), shortest job first (SJF), round robin, and max-min. FCFS prioritizes older jobs but has high wait times. SJF prioritizes shorter jobs but can starve longer ones. Max-min prioritizes longer jobs to optimize resource use. The choice depends on goals like throughput, latency, and fairness.
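The FCFS/SJF trade-off above can be made concrete with a toy wait-time calculation. The burst times and job names below are invented for illustration; real cloud schedulers also account for arrival times, preemption, and VM placement.

```python
def avg_wait(bursts, order):
    """Average waiting time when jobs run back-to-back in the given order."""
    t, waits = 0, []
    for job in order:
        waits.append(t)          # each job waits for all jobs scheduled before it
        t += bursts[job]
    return sum(waits) / len(waits)

bursts = {"A": 8, "B": 4, "C": 1}                       # arrival order: A, B, C
fcfs = avg_wait(bursts, ["A", "B", "C"])                # run in arrival order
sjf = avg_wait(bursts, sorted(bursts, key=bursts.get))  # shortest burst first
```

SJF yields a lower average wait than FCFS on this workload, but if short jobs kept arriving, job A could be postponed indefinitely, which is the starvation risk the summary mentions.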
Business models for business processes on IoTFabMinds
The document discusses business models for business processes on the Internet of Things. It covers key topics like IoT applications, business models, value creation using IoT, and business model scenarios for IoT. Business models need innovation to adapt to new customer access and interactions enabled by technologies like cloud computing and mobile communications. Value is created on IoT through addressing emergent needs, information convergence, and recurrent revenue from networked products. Example business model scenarios for IoT leverage data from multiple sources like sensors, M2M, and open data.
The document discusses the functions of a gateway in an IoT/M2M system. The gateway performs data enrichment, consolidation, and device management. It has several key functions including transcoding data formats, ensuring privacy and security, gathering and enriching data from devices, aggregating and compacting data, and managing device identities, configurations, and faults.
Chapter 5, Data Mining: Concepts and Techniques, 2nd Ed. slides, Han & Kamber (error007)
The document discusses Chapter 5 from the book "Data Mining: Concepts and Techniques" which covers frequent pattern mining, association rule mining, and correlation analysis. It provides an overview of basic concepts such as frequent patterns and association rules. It also describes efficient algorithms for mining frequent itemsets such as Apriori and FP-growth, and discusses challenges and improvements to frequent pattern mining.
The document discusses congestion control in computer networks. It defines congestion as occurring when the load on a network is greater than the network's capacity. Congestion control aims to control congestion and keep the load below capacity. The document outlines two categories of congestion control: open-loop control, which aims to prevent congestion; and closed-loop control, which detects congestion and takes corrective action using feedback from the network. Specific open-loop techniques discussed include admission control, traffic shaping using leaky bucket and token bucket algorithms, and traffic scheduling.
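The token bucket shaper mentioned above can be sketched in a few lines. This is a simplified tick-based model (invented class and parameter names); a real implementation would use wall-clock time rather than explicit ticks.

```python
class TokenBucket:
    """Traffic shaper: tokens accumulate at `rate` per tick up to `capacity`;
    a packet consuming `size` tokens may be sent only if enough are available."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity                # start with a full bucket

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, size):
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False                          # packet must wait or be dropped

tb = TokenBucket(rate=1, capacity=4)
sent = [tb.send(2) for _ in range(3)]         # burst of three 2-token packets
```

The bucket admits a burst up to its capacity, then throttles traffic to the refill rate, which is exactly the difference from the stricter leaky bucket, where output is smoothed to a constant rate.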
This document discusses complex network analysis and several concepts related to social networks such as the small world phenomenon, friendship paradox, centrality measures, clustering coefficient, and degree distribution. It provides examples of applying complex network analysis to a friendship network of BITS students and a Twitter growth model. Power law distributions are found to describe properties of many real-world networks like degree distributions.
The document contains descriptions and figures about stop-and-wait, sliding window, and selective reject transmission protocols. Stop-and-wait uses acknowledgments to ensure frames are received correctly one at a time, while sliding window protocols allow multiple unacknowledged frames to be sent by keeping a window of outstanding frames. The figures demonstrate examples of how these protocols handle damaged frames, lost frames, and lost acknowledgments to ensure reliable data transmission.
This document discusses superscalar and super pipeline approaches to improving processor performance. Superscalar processors execute multiple independent instructions in parallel using multiple pipelines. Super pipelines break pipeline stages into smaller stages to reduce clock period and increase instruction throughput. While superscalar utilizes multiple parallel pipelines, super pipelines perform multiple stages per clock cycle in each pipeline. Super pipelines benefit from higher parallelism but also increase potential stalls from dependencies. Both approaches aim to maximize parallel instruction execution but face limitations from true data and other dependencies.
TCP uses congestion control to determine how much capacity is available in the network and regulate how many packets can be in transit. It uses additive increase/multiplicative decrease (AIMD) where the congestion window is increased slowly with each ACK but halved upon timeout. Slow start is used initially and after idle periods to grow the window exponentially until congestion is detected. Fast retransmit and fast recovery help detect and recover from packet loss without requiring a timeout.
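The AIMD behavior described above can be traced with a toy model. Note the simplification: real TCP grows the congestion window by about one MSS per RTT, not per ACK, and the event list here is invented for illustration.

```python
def aimd(events, mss=1, initial=1):
    """Toy congestion-window trace: additive increase on each ACK event,
    multiplicative decrease (halving) on each loss event."""
    cwnd, trace = initial, []
    for ev in events:
        if ev == "ack":
            cwnd += mss                   # additive increase
        else:                             # "loss"
            cwnd = max(mss, cwnd // 2)    # multiplicative decrease, floor of 1 MSS
        trace.append(cwnd)
    return trace

trace = aimd(["ack", "ack", "ack", "loss", "ack"])
```

The resulting sawtooth (grow linearly, halve on loss, grow again) is the characteristic AIMD shape; slow start and fast recovery modify how quickly the window regrows after a loss.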
Error control techniques allow for detection and correction of errors during data transmission. Error control is implemented at the data link layer using automatic repeat request (ARQ) protocols like stop-and-wait and sliding window. Stop-and-wait involves transmitting a single frame and waiting for an acknowledgment before sending the next frame. Sliding window protocols allow multiple unacknowledged frames to be transmitted by using frame numbers and acknowledging receipt of frames. These protocols detect errors when frames are received out of sequence and trigger retransmission of lost frames.
This document discusses extracting communities from web archives over time. It begins by defining key terms used, such as the web community chart and notations for time periods and communities. It then describes types of changes that can occur to communities over time, such as emerging, dissolving, growing, shrinking, splitting, and merging. It also defines metrics to measure a community's evolution, such as growth rate, stability, disappearance rate, and merge rate. The document explains how web archives are used to build web graphs and extract community structures over multiple time periods to analyze how the community structure changes dynamically over time.
These are the slides for a tutorial talk about "multilayer networks" that I gave at NetSci 2014.
I walk people through a review article that I wrote with my PLEXMATH collaborators: http://comnet.oxfordjournals.org/content/2/3/203
The buffer cache stores recently accessed disk blocks in memory to reduce disk I/O. When a process requests data from a file, the kernel checks if the data is already cached in memory before accessing the disk. If cached, the data is returned directly from memory. If not cached, the data is read from disk into the cache. The buffer cache is managed as a pool using structures like a free list and buffer headers to track cached blocks. Caching recently used data in memory improves performance by reducing disk access frequency.
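The hit/miss logic above can be sketched with an LRU pool. This is a toy model, not the Unix kernel's actual hash-queue and free-list implementation; `read_disk` stands in for the device driver, and the block numbers are invented.

```python
from collections import OrderedDict

class BufferCache:
    """Toy buffer cache: a fixed-size LRU pool of disk blocks kept in memory."""
    def __init__(self, capacity, read_disk):
        self.capacity, self.read_disk = capacity, read_disk
        self.pool = OrderedDict()              # block number -> cached data
        self.hits = self.misses = 0

    def bread(self, blkno):
        if blkno in self.pool:                 # hit: serve from memory, no disk I/O
            self.pool.move_to_end(blkno)
            self.hits += 1
        else:                                  # miss: read from disk, evict LRU if full
            self.misses += 1
            if len(self.pool) >= self.capacity:
                self.pool.popitem(last=False)
            self.pool[blkno] = self.read_disk(blkno)
        return self.pool[blkno]

cache = BufferCache(2, read_disk=lambda b: f"block-{b}")
for b in [1, 2, 1, 3, 1]:
    cache.bread(b)
```

Re-reading block 1 repeatedly costs only one disk read, which is the performance gain the summary describes: the disk is touched only on misses.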
The document discusses the different layers and modules involved in multicast routing protocols. It describes the MAC, routing and application layers and the modules handled by each. It then discusses different types of multicast routing protocols based on topology (tree vs mesh), initialization (source vs receiver initiated) and maintenance mechanisms (soft vs hard state). Specific protocols discussed include source tree, shared tree and mesh based approaches. It also covers tree construction, rejoining methods and pruning techniques.
The document discusses the Apriori algorithm, which is used for mining frequent itemsets from transactional databases. It begins with an overview and definition of the Apriori algorithm and its key concepts like frequent itemsets, the Apriori property, and join operations. It then outlines the steps of the Apriori algorithm, provides an example using a market basket database, and includes pseudocode. The document also discusses limitations of the algorithm and methods to improve its efficiency, as well as advantages and disadvantages.
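The level-wise search described above can be sketched compactly. This is a simplified version with an invented market-basket example: the join step combines frequent k-itemsets into (k+1)-candidates, though a full implementation would also prune candidates whose subsets are infrequent before counting.

```python
def apriori(transactions, min_support):
    """Level-wise frequent-itemset mining. The Apriori property: every subset
    of a frequent itemset must itself be frequent."""
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items]
    frequent, k = {}, 1
    while level:
        level = [s for s in level if support(s) >= min_support]
        frequent.update({s: support(s) for s in level})
        # join step: candidates of size k+1 built only from frequent k-itemsets
        level = list({a | b for a in level for b in level if len(a | b) == k + 1})
        k += 1
    return frequent

db = [{"bread", "milk"}, {"bread", "beer"},
      {"bread", "milk", "beer"}, {"milk"}]
freq = apriori([frozenset(t) for t in db], min_support=2)
```

With `min_support=2`, `{milk, beer}` appears only once and is dropped, so `{bread, milk, beer}` is never even counted, illustrating how the Apriori property shrinks the candidate space.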
Introduction To Markov Chains | Markov Chains in Python (Edureka!)
YouTube Link: https://youtu.be/Gs2xtNzogSY
** Python Data Science Training: https://www.edureka.co/data-science-python-certification-course **
This Edureka session on Introduction To Markov Chains will help you understand the basic idea behind Markov chains and how they can be modeled as a solution to real-world problems.
Here’s a list of topics that will be covered in this session:
1. What Is A Markov Chain?
2. What Is The Markov Property?
3. Understanding Markov Chains With An Example
4. What Is A Transition Matrix?
5. Markov Chain In Python
6. Markov Chain Applications
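The topics above can be sketched in a few lines of Python. This is a minimal illustration, not the session's own code: the two-state weather model and its transition probabilities are invented for the example.

```python
import random

# Toy weather model: each row of the transition matrix sums to 1, and the
# next state depends only on the current state (the Markov property).
P = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def walk(start, steps, seed=42):
    """Sample a path through the chain; a fixed seed makes the walk repeatable."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        states, probs = zip(*P[state].items())
        state = rng.choices(states, weights=probs)[0]
        path.append(state)
    return path

path = walk("sunny", 5)
```

The nested dict plays the role of the transition matrix from topic 4; each step consults only the current state's row, which is the Markov property from topic 2.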
ATM uses fixed-sized cells to transfer data over both virtual channel connections (VCCs) and virtual path connections (VPCs). It supports multiple service categories including constant bit rate, variable bit rate, available bit rate and unspecified bit rate. The ATM protocol architecture defines layers for the user plane, control plane, and management plane. It uses the ATM adaptation layer to map data from upper layer services into ATM cells for transmission over the ATM layer and physical layer.
Asynchronous Transfer Mode (ATM) is a switching technique that uses fixed-sized cells to encode data for transmission over telecommunication networks. ATM can handle both traditional high-speed data traffic as well as real-time, low-latency content like voice and video. It provides services at the data link layer and has similarities to both circuit switching and packet switching. ATM is commonly used for wide area networks and some DSL implementations also use ATM technology.
ATM (Asynchronous Transfer Mode) is a connection-oriented networking technology that transmits data in fixed-size cells and can support different types of data and applications with quality of service guarantees. ATM uses virtual connections identified by virtual path and channel identifiers to transport cells through a network of ATM switches. The ATM architecture includes physical, ATM, and adaptation layers to encapsulate data for transmission and ensure interoperability between network elements.
ATM is a connection-oriented protocol that transmits all types of data over WANs by dividing it into fixed-size cells. It provides quality of service guarantees through connection setup and traffic management. ATM aims to offer an integrated service for transmitting data, voice, and video simultaneously over high-speed networks using small, fixed-size cells and virtual connections between end systems.
One of the technologies best suited to fiber, it serves applications ranging (to name a few) from triple-play services to Fiber to the Home (FTTH) and the creation of backbone networks for telcos. Because its Optical Distribution Network is entirely passive, it scores points over active networks.
Asynchronous Transfer Mode (ATM) is a protocol developed for broadband ISDN that supports high data transmission rates. It uses fixed-size cells called ATM cells that are 53 bytes long, with 5 bytes for header and 48 bytes for payload. ATM cells allow data to be organized into logical connections identified by Virtual Channel Identifier and Virtual Path Identifier values. These logical connections support quality of service guarantees and efficient transmission of data, making ATM well-suited for real-time multimedia applications.
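The 53-byte cell layout above can be made concrete by packing a UNI header. This sketch uses the standard UNI field widths (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8) but leaves the HEC byte as a zero placeholder rather than computing the CRC-8 over the header; the function name and example values are invented.

```python
def uni_cell(vpi, vci, pt=0, clp=0, gfc=0, payload=b""):
    """Assemble a 53-byte ATM cell: 5-byte UNI header plus 48-byte payload."""
    # Pack GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1) into the first 32 bits.
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    header = word.to_bytes(4, "big") + b"\x00"        # placeholder HEC byte
    return header + payload.ljust(48, b"\x00")[:48]   # pad/trim payload to 48 bytes

cell = uni_cell(vpi=1, vci=32, payload=b"hello")
```

Every cell is exactly 53 bytes regardless of payload size, which is what lets ATM switches forward cells with fixed, predictable latency and route them purely on the VPI/VCI fields.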
The document provides an overview of software development principles including introduction to programming, problem solving and the software development approach, and algorithm representation. It discusses programming languages, problem statements, software development methods, types of errors, documentation, and techniques for representing algorithms like pseudocode and flowcharts. Examples are provided to illustrate these concepts.
The document discusses Asynchronous Transfer Mode (ATM) networking. It describes the five ATM service categories including Constant Bit Rate, Variable Bit Rate, Available Bit Rate, and Unspecified Bit Rate. It also outlines the ATM layer diagram and Adaptation Layer which supports these different service categories with various quality of service levels and priority settings for real-time and non-real-time traffic. References to additional ATM resources are provided at the end.
ATLAS Consulting & Services provides soft skills training and management services. It was founded in 2006 and has experienced steady growth. The company vision is to be committed to excellence and building long-term client relationships. The company offers customized training packages in areas such as leadership, communication, problem solving and customer service. It aims to help clients maximize learning and workplace application.
This document provides a summary of foreign direct investment (FDI) in retail in India. It discusses India's partial opening of the retail sector to FDI, allowing up to 51% FDI in single-brand retail but prohibiting FDI in multi-brand retail. It outlines the government's concerns about fully opening retail to FDI and the limitations of India's current retail setup in infrastructure and the dominance of intermediaries in the supply chain.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise boosts blood flow, releases endorphins, and promotes changes in the brain which help enhance one's emotional well-being and mental clarity.
This document contains information about Mohamed Abd Elhay and his skills and experience with MATLAB. It provides an overview of MATLAB including its main components and applications. It describes the MATLAB development environment and some basic functions for vectors and matrices, plotting, conditional statements, and loops. It also lists some common MATLAB toolboxes for tasks like signal processing, neural networks, optimization, and more. It briefly introduces Simulink and discusses file types and GUIDE for building GUIs.
SP Software is an IT company established in 1995 that provides application development, maintenance, gaming, and mobile application services. It has over 400 employees and has experienced rapid growth, with revenues increasing from $5 million in 2007 to $15 million in 2011. The company aims to exceed customer expectations with quality and aims to provide integrated, cost-effective solutions. It has achieved several process certifications like CMMI Level 3, ISO 9001:2008, and ISO 27001:2005.
The document discusses a presentation given at Bournville College on unlocking funding opportunities for the third sector. It provides an overview of different funding streams such as social investment funds, grants from organizations like Big Lottery Fund, and an introduction to concepts of commissioning. The presentation advises organizations to diversify their services, funding sources, and commissioning relationships to be successful in applications. Support Solutions and Bournville College offer resources to help organizations strengthen their funding bids.
Data Communications,Data Networks,computer communications,multiplexing,spread spectrum,protocol architecture,data link protocols,signal encoding techniques,transmission media,asynchronous transfer mode
Asynchronous Transfer Mode (ATM) is a streamlined packet transfer interface that transfers data in discrete chunks like packet switching. It uses fixed size packets called cells with minimal error and flow control. ATM supports multiple logical connections over a single physical interface and uses virtual channel connections (VCCs) and virtual path connections (VPCS) to bundle connections. ATM provides different service categories like constant bit rate (CBR) for fixed rate applications and real-time variable bit rate (rt-VBR) for time sensitive variable rate applications.
ATM is a connection-oriented transfer mode that uses fixed-length cells. It was originally developed for B-ISDN networks. Key aspects of ATM include:
- Cells are 53 bytes, with a 5-byte header and 48-byte payload
- Connections can be permanent (PVC) or switched (SVC)
- Four service categories provide different quality of service guarantees
- Segmentation and reassembly is performed by the ATM adaptation layer to map various data types to cells
Asynchronous Transfer Mode (ATM) is a cell-based switching and multiplexing technology that was designed in the early 1990s to expedite the transmission of voice, video, and data over digital networks. ATM uses fixed-length cells of 53 bytes to carry traffic. It establishes virtual connections between endpoints to guarantee quality of service. ATM works by segmenting data into fixed-size cells at the source, transporting cells through a switch network via virtual circuits, and reassembling them at the destination. It provides benefits like high performance, integration of multiple data types, and adaptability to different network speeds.
Asynchronous Transfer Mode (ATM) is a cell-switching and multiplexing technology that uses fixed-length packets called cells to carry different types of traffic like voice, video, and data. ATM works by segmenting data into these fixed-size cells which are then transmitted through virtual connections set up across an ATM network and reassembled at their destination. It provides benefits like high performance, Quality of Service guarantees, and the ability to handle different traffic types.
ATM is a connection-oriented multi-service network architecture that can carry voice, data and video simultaneously over the same network. It uses fixed-length cells consisting of a 5-byte header and 48-byte payload. Virtual connections called virtual channel connections (VCC) and virtual path connections (VPC) establish logical connections between end users for transmitting data through the network. ATM provides quality of service guarantees and efficient traffic management through these virtual connections and different service categories like constant bit rate, variable bit rate, available bit rate and unspecified bit rate.
Traffic management provides optimal utilization of network resources by managing network traffic and providing service guarantees to user connections. It includes functions such as traffic contract management, traffic shaping, traffic policing, priority control, flow control, and congestion control. Connection admission control is used to determine whether new connection requests can be accepted while ensuring sufficient resources and quality of service for existing connections. Traffic shaping techniques such as leaky bucket algorithm alter traffic characteristics to make them more predictable and conforming to network requirements.
- Asynchronous transfer mode (ATM) is a switching technique that uses fixed-sized cells to encode data and is used in telecommunication networks. It is different from variable packet size techniques like Ethernet.
- ATM uses synchronous optical network as a backbone and forms the core protocol of integrated digital services networks. It establishes connections using virtual circuits before transmitting data between endpoints like routers and switches.
- ATM cells have a header containing a virtual path/channel identifier pair to identify the destination as cells pass through switches on their way to the final destination. Quality of service is ensured through traffic contracts specifying parameters like constant or variable bit rates.
ATM is a cell relay protocol designed to optimize fiber optic networks. It breaks data into fixed-size cells for uniform transmission. ATM aims to maximize bandwidth, interface existing systems, be inexpensive, support telecom hierarchies, ensure reliable delivery, and minimize software functions. Connections between endpoints are established through virtual paths and circuits identified by header fields. Cells contain a 5-byte header and 48-byte payload. Connections can be permanent or switched. ATM defines layers for applications, cell processing, and physical transmission. It supports various quality of service levels through parameters like cell error and loss rates.
Asynchronous Transfer ModeATM is originally the transfer mode for implementin...JebaRaj26
ATM is a connection-oriented, high-speed, low-delay switching and transmission technology that uses short and fixed-size packets, called cells, to transport information.
Q1: What is the use of Asynchronous Transfer mode switching(ATM)?
ATM as a Backbone technology:
ATM Devices:
ATM network interface:
User to Network Interface (UNI):
Network to Node Interface (NNI):
ATM reference model:
ATM services:
ATM Virtual Connections:
ATM CLASS OF SERVICES:
ATM CONCEPTS SERIVES CATEGORIES:
Asynchronous transfer mode (atm) in computer networkMH Shihab
Asynchronous Transfer Mode (ATM) is a telecommunications standard that allows multiple data types like voice, video, and data to be transmitted over the same network. ATM breaks information into small, fixed-size cells and transmits them asynchronously. It is connection-oriented and supports services with different quality of service requirements. ATM cells are 53 bytes long, with a 5 byte header containing information like virtual path/channel identifiers and an 8-bit checksum, and 48 bytes of payload. ATM supports both constant and variable rate traffic through its connection-oriented virtual circuits.
This document provides an overview of Asynchronous Transfer Mode (ATM) technology. It discusses:
- What ATM is and why it was developed to provide high-speed, low delay networking for various traffic types like voice, video, and data.
- Key aspects of ATM including fixed-length 53-byte cells, virtual connections, connection-oriented and connectionless modes, and quality of service guarantees.
- Components of the ATM protocol stack including the physical layer, ATM layer, and ATM adaptation layer (AAL). It describes the different AAL types.
- ATM network architecture including interfaces like UNI and NNI and the use of virtual paths and channels for
This document discusses Asynchronous Transfer Mode (ATM) as a connection-oriented, high-speed switching and transmission technology that uses fixed-size cells. It describes ATM's architecture including its layers, cell format, connection types, and quality of service categories. ATM evolved from B-ISDN standards and uses cells to transport information across networks while avoiding issues of mixed frame sizes.
Data Communications,Data Networks,computer communications,multiplexing,spread spectrum,protocol architecture,data link protocols,signal encoding techniques,transmission media
This document provides an overview of Asynchronous Transfer Mode (ATM) networking. It begins by stating the objectives of explaining ATM and then provides background on ATM, noting that it transfers information in small, fixed-size cells. It describes the benefits of ATM, including dynamic bandwidth allocation and support for multimedia traffic. It then explains the basic components of an ATM network, including switches and endpoints, and describes the ATM cell format and header fields. Finally, it introduces the concept of virtual connections in ATM networks.
This document discusses traffic and congestion control in ATM networks. It covers key issues like congestion problems, frameworks adopted, requirements for ATM traffic and congestion control, problems with ATM congestion control, key performance issues related to latency and speed effects, and cell delay variation. It also summarizes traffic management frameworks, traffic control and congestion functions, algorithms like explicit rate feedback schemes, and enhanced proportional rate control algorithm.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
2. Asynchronous Transfer Mode
One man had a vision of railways that would link all the mainline railroad termini. His name was Charles Pearson and, though born the son of an upholsterer, he became Solicitor to the city of London. There had previously been a plan for gaslit subway streets through which horse-drawn traffic could pass. This was rejected on the grounds that such sinister tunnels would become lurking places for thieves. Twenty years before his system was built, Pearson envisaged a line running through "a spacious archway," well-lit and well-ventilated.
His was a scheme for trains in a drain.
—King Solomon's Carpet, Barbara Vine (Ruth Rendell)
3. ATM
a streamlined packet transfer interface
similarities to packet switching
transfers data in discrete chunks
supports multiple logical connections over a single physical interface
ATM uses fixed-size packets called cells with minimal error and flow control
data rates of 25.6Mbps to 622.08Mbps
5. Reference Model Planes
user plane
provides for user information transfer
control plane
call and connection control
management plane
plane management
• whole system functions
layer management
• resources and parameters in protocol entities
6. ATM Logical Connections
virtual channel connections (VCC)
analogous to virtual circuit in X.25
basic unit of switching between two end users
full duplex
fixed size cells
also for
user-network exchange (control)
network-network exchange (network mgmt & routing)
7. ATM Virtual Path Connection
virtual path connection (VPC)
bundle of VCCs with same end points
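To make the VPC/VCC relationship concrete, here is a minimal Python sketch of how a switch might forward cells by label lookup: a pure VP switch rewrites only the VPI and carries every VCI inside the path through unchanged, while a VP/VC switch rewrites both identifiers. All ports and table entries below are hypothetical; a real switch uses per-port hardware tables, not dictionaries.

```python
# Illustrative label switching on VPI/VCI (not a real switch implementation).
# A VP switch looks up only the VPI; a VP/VC switch looks up the (VPI, VCI) pair.

vp_table = {
    # (in_port, in_vpi) -> (out_port, out_vpi); entries are hypothetical
    (1, 5): (2, 9),
}

vc_table = {
    # (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci); hypothetical
    (1, 7, 42): (3, 8, 17),
}

def switch_cell(in_port, vpi, vci):
    """Return (out_port, out_vpi, out_vci) for an incoming cell."""
    if (in_port, vpi) in vp_table:          # VP switching: VCI passes through unchanged
        out_port, out_vpi = vp_table[(in_port, vpi)]
        return out_port, out_vpi, vci
    if (in_port, vpi, vci) in vc_table:     # VP/VC switching: both labels rewritten
        return vc_table[(in_port, vpi, vci)]
    raise KeyError("no connection provisioned for this cell")
```

Note how the VP-switch path never inspects the VCI: that is exactly why bundling VCCs into a VPC reduces per-node processing.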
8. Advantages of Virtual Paths
simplified network architecture
increased network performance and reliability
reduced processing
short connection setup time
enhanced network services
10. Virtual Channel Connection Uses
between end users
end to end user data
control signals
VPC provides overall capacity
• VCC organization done by users
between end user and network
control signaling
between network entities
network traffic management
routing
11. VP/VC Characteristics
quality of service
switched and semi-permanent channel connections
call sequence integrity
traffic parameter negotiation and usage monitoring
VPC only
virtual channel identifier restriction within VPC
12. Control Signaling - VCC
to establish or release VCCs & VPCs
uses a separate connection
methods are:
1. semi-permanent VCC
2. meta-signaling channel
3. user to network signaling virtual channel
4. user to user signaling virtual channel
13. Control Signaling - VPC
methods for control signalling for VPCs:
1. semi-permanent
2. customer controlled
3. network controlled
15. ATM Header Fields
generic flow control
virtual path identifier
virtual channel identifier
payload type
cell loss priority
header error control
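At the UNI these six fields occupy the 5-octet header as GFC (4 bits), VPI (8), VCI (16), payload type (3), CLP (1), and HEC (8); at the NNI the four GFC bits are given over to a wider VPI. A small Python sketch of packing and unpacking the UNI layout, just to make the field widths concrete:

```python
def pack_uni_header(gfc, vpi, vci, pt, clp, hec):
    """Pack the six UNI header fields into the 5-octet ATM cell header.
    Field widths: GFC=4, VPI=8, VCI=16, PT=3, CLP=1 bits (32 bits total),
    followed by the 8-bit HEC octet."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big") + bytes([hec])

def unpack_uni_header(header):
    """Reverse of pack_uni_header; expects a 5-byte header."""
    word = int.from_bytes(header[:4], "big")
    return {
        "gfc": (word >> 28) & 0xF,
        "vpi": (word >> 20) & 0xFF,
        "vci": (word >> 4) & 0xFFFF,
        "pt":  (word >> 1) & 0x7,
        "clp": word & 0x1,
        "hec": header[4],
    }
```

The remaining 48 octets of the 53-octet cell are payload, which is why the header overhead is fixed at 5/53 regardless of traffic type.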
16. Generic Flow Control (GFC)
control traffic flow at user to network interface (UNI) to alleviate short term overload
two sets of procedures
uncontrolled transmission
controlled transmission
every connection subject to flow control or not
if subject to flow control
may be one group (A) default
may be two groups (A and B)
flow control is from subscriber to network
17. GFC - Single Group of Connections
1. If TRANSMIT=1 send uncontrolled cells any time. If TRANSMIT=0 no cells may be sent
2. If HALT received, TRANSMIT=0 until NO_HALT
3. If TRANSMIT=1 & no uncontrolled cell to send:
   1. If GO_CNTR>0, TE may send controlled cell and decrement GO_CNTR
   2. If GO_CNTR=0, TE may not send controlled cells
4. TE sets GO_CNTR to GO_VALUE upon receiving SET signal
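The procedure above can be sketched as a small Python state holder for the terminal (TE) side. Only the TRANSMIT/GO_CNTR bookkeeping from the slide is modelled; cell queueing and the actual signal encoding are omitted.

```python
class GfcTerminal:
    """Sketch of the single-group GFC procedure (transmitter side only)."""

    def __init__(self, go_value):
        self.transmit = 1            # TRANSMIT flag: 1 = may send
        self.go_value = go_value     # GO_VALUE, restored on SET
        self.go_cntr = go_value      # GO_CNTR, credits for controlled cells

    def on_halt(self):               # HALT received: stop all sending
        self.transmit = 0

    def on_no_halt(self):            # NO_HALT received: resume
        self.transmit = 1

    def on_set(self):                # SET received: refill credits
        self.go_cntr = self.go_value

    def can_send_uncontrolled(self):
        return self.transmit == 1

    def try_send_controlled(self):
        """True if a controlled cell may be sent now; consumes one credit."""
        if self.transmit == 1 and self.go_cntr > 0:
            self.go_cntr -= 1
            return True
        return False
```

A HALT therefore silences both controlled and uncontrolled traffic, while exhausting GO_CNTR throttles only the controlled connections until the next SET.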
18. Use of HALT
to limit effective data rate on ATM
should be cyclic
to reduce data rate by half, HALT issued to be in effect 50% of time
done on regular pattern over lifetime of connection
19. Two Queue Model
uses two counters each with current & initial values:
GO_CNTR_A
GO_VALUE_A
GO_CNTR_B
GO_VALUE_B
22. Impact of Random Bit Errors on HEC Performance
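The HEC being evaluated here is a CRC-8 computed over the first four header octets. As a minimal sketch, assuming the I.432 generator polynomial x^8 + x^2 + x + 1 and the 0x55 coset added to aid cell delineation (verify against I.432 before relying on the exact constants):

```python
def atm_hec(header4):
    """Compute the HEC octet for the first four header octets.
    Bitwise CRC-8, generator x^8 + x^2 + x + 1 (0x07 below the top bit),
    with the result XORed with 01010101 (0x55) as I.432 specifies."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):                      # shift out each bit, MSB first
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x07) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc ^ 0x55
```

A CRC-8 corrects any single-bit header error and detects most multi-bit errors, which is the trade-off the performance curves on this slide illustrate.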
23. Transmission of ATM Cells
I.432 specifies several data rates:
622.08Mbps
155.52Mbps
51.84Mbps
25.6Mbps
two choices of transmission structure:
cell based physical layer
SDH based physical layer
24. Cell Based Physical Layer
no framing imposed
continuous stream of 53 octet cells
cell delineation based on header error control field
28. SDH Based Physical Layer
imposes structure on ATM stream
eg. for 155.52Mbps use STM-1 (STS-3) frame
can carry ATM and STM payloads
specific connections can be circuit switched using SDH channel
SDH multiplexing techniques can combine several ATM streams
29. STM-1 Payload for SDH-Based ATM Cell Transmission
30. ATM Service Categories
Real time - limit amount/variation of delay
Constant bit rate (CBR)
Real time variable bit rate (rt-VBR)
Non-real time - for bursty traffic
Non-real time variable bit rate (nrt-VBR)
Available bit rate (ABR)
Unspecified bit rate (UBR)
Guaranteed frame rate (GFR)
31. Constant Bit Rate (CBR)
fixed data rate continuously available
tight upper bound on delay
uncompressed audio and video
video conferencing
interactive audio
A/V distribution and retrieval
32. Real-Time Variable Bit Rate (rt-VBR)
for time sensitive applications
tightly constrained delay and delay variation
rt-VBR applications transmit data at a rate that varies with time
eg. compressed video
produces varying sized image frames
original (uncompressed) frame rate constant
so compressed data rate varies
hence can statistically multiplex connections
33. Non-Real-Time Variable Bit Rate (nrt-VBR)
if can characterize expected bursty traffic flow
eg. airline reservations, banking transactions
ATM net allocates resources based on this
to meet critical response-time requirements
giving improved QoS in loss and delay
end system specifies:
peak cell rate
sustainable or average rate
measure of how bursty traffic is
34. Unspecified Bit Rate (UBR)
may be additional capacity over and above that used by CBR and VBR traffic
not all resources dedicated to CBR/VBR traffic
unused cells due to bursty nature of VBR
for applications that can tolerate some cell loss or variable delays
eg. TCP based traffic
cells forwarded on FIFO basis
best effort service
35. Available Bit Rate (ABR)
application specifies peak cell rate (PCR)
and minimum cell rate (MCR)
resources allocated to give at least MCR
spare capacity shared among all ABR sources
eg. LAN interconnection
37. Summary
Asynchronous Transfer Mode (ATM)
architecture & logical connections
ATM cell format
transmission of ATM cells
ATM services
Editor's Notes
Lecture slides prepared by Dr Lawrie Brown (UNSW@ADFA) for "Data and Computer Communications", 8/e, by William Stallings, Chapter 11, "Asynchronous Transfer Mode".
This quote, from the start of Stallings DCC8e Chapter 11, hints at the concept of a universal transport mechanism, a role that ATM attempted to fill in data communications.
ATM, also known as cell relay, is a streamlined packet transfer interface which takes advantage of the reliability and fidelity of modern digital facilities to provide faster packet switching than X.25. Like packet switching and frame relay, ATM involves the transfer of data in discrete chunks, and allows multiple logical connections to be multiplexed over a single physical interface. In the case of ATM, the information flow on each logical connection is organized into fixed-size packets, called cells. ATM is a streamlined protocol with minimal error and flow control capabilities. This reduces the overhead of processing ATM cells and reduces the number of overhead bits required with each cell, thus enabling ATM to operate at high data rates. Further, the use of fixed-size cells simplifies the processing required at each ATM node, again supporting the use of ATM at high data rates.
The standards issued for ATM by ITU-T are based on the protocol architecture shown in Stallings DCC8e Figure 11.1, which illustrates the basic architecture for an interface between user and network. The physical layer involves the specification of a transmission medium and a signal encoding scheme. The data rates specified at the physical layer range from 25.6 Mbps to 622.08 Mbps. Other data rates, both higher and lower, are possible. Two layers of the protocol architecture relate to ATM functions. There is an ATM layer common to all services that provides packet transfer capabilities, and an ATM adaptation layer (AAL) that is service dependent. The ATM layer defines the transmission of data in fixed-size cells and defines the use of logical connections. The use of ATM creates the need for an adaptation layer to support information transfer protocols not based on ATM. The AAL maps higher-layer information into ATM cells to be transported over an ATM network, then collects information from ATM cells for delivery to higher layers.
The protocol reference model involves three separate planes: • User plane: Provides for user information transfer, along with associated controls (e.g., flow control, error control) • Control plane: Performs call control and connection control functions • Management plane: Includes plane management, which performs management functions related to a system as a whole and provides coordination between all the planes, and layer management, which performs management functions relating to resources and parameters residing in its protocol entities
Logical connections in ATM are referred to as virtual channel connections (VCCs). A VCC is analogous to a virtual circuit in X.25; it is the basic unit of switching in an ATM network. A VCC is set up between two end users through the network and a variable-rate, full-duplex flow of fixed-size cells is exchanged over the connection. VCCs are also used for user-network exchange (control signaling) and network-network exchange (network management and routing).
For ATM, a second sublayer of processing has been introduced that deals with the concept of virtual path (Figure 11.2). A virtual path connection (VPC) is a bundle of VCCs that have the same endpoints. Thus, all of the cells flowing over all of the VCCs in a single VPC are switched together. The virtual path concept was developed in response to a trend in high-speed networking in which the control cost of the network is becoming an increasingly higher proportion of the overall network cost. The virtual path technique helps contain the control cost by grouping connections sharing common paths through the network into a single unit. Network management actions can then be applied to a small number of groups of connections instead of a large number of individual connections. The terminology of virtual paths and virtual channels used in the standard is a bit confusing and is summarized in Stallings DCC8e Table 11.1. Whereas most of the network-layer protocols that we deal with in this book relate only to the user-network interface, the concepts of virtual path and virtual channel are defined in the ITU-T Recommendations with reference to both the user-network interface and the internal network operation.
Several advantages can be listed for the use of virtual paths: • Simplified network architecture: Network transport functions can be separated into those related to an individual logical connection (virtual channel) and those related to a group of logical connections (virtual path). • Increased network performance and reliability: The network deals with fewer, aggregated entities. • Reduced processing and short connection setup time: Much of the work is done when the virtual path is set up. By reserving capacity on a virtual path connection in anticipation of later call arrivals, new virtual channel connections can be established by executing simple control functions at the endpoints of the virtual path connection; no call processing is required at transit nodes. Thus, the addition of new virtual channels to an existing virtual path involves minimal processing. • Enhanced network services: The virtual path is used internal to the network but is also visible to the end user. Thus, the user may define closed user groups or closed networks of virtual channel bundles.
Stallings DCC8e Figure 11.3 suggests in a general way the call establishment process using virtual channels and virtual paths. The process of setting up a virtual path connection is decoupled from the process of setting up an individual virtual channel connection: • The virtual path control mechanisms include calculating routes, allocating capacity, and storing connection state information. • To set up a virtual channel, there must first be a virtual path connection to the required destination node with sufficient available capacity to support the virtual channel, with the appropriate quality of service. A virtual channel is set up by storing the required state information (virtual channel/virtual path mapping).
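The payoff described above — that adding a virtual channel to an existing virtual path needs only endpoint processing — can be sketched in a few lines. This is an illustrative model, not part of the standard; the class and its capacity check are assumptions made for the example:

```python
# Illustrative sketch: once a VPC is established with reserved capacity,
# admitting a new VCC is just a capacity check and a table update at the
# endpoints -- no call processing at transit nodes.

class VirtualPath:
    def __init__(self, vpi, destination, capacity_mbps):
        self.vpi = vpi
        self.destination = destination
        self.capacity_mbps = capacity_mbps   # reserved when the VPC was set up
        self.vccs = {}                       # vci -> allocated rate (state info)

    def add_vcc(self, vci, rate_mbps):
        """Admit a new virtual channel if the VPC has spare capacity."""
        used = sum(self.vccs.values())
        if used + rate_mbps > self.capacity_mbps:
            return False                     # would exceed the VPC's reservation
        self.vccs[vci] = rate_mbps           # store the VC/VP mapping state
        return True

vpc = VirtualPath(vpi=5, destination="node-B", capacity_mbps=100)
assert vpc.add_vcc(vci=32, rate_mbps=60)
assert vpc.add_vcc(vci=33, rate_mbps=30)
assert not vpc.add_vcc(vci=34, rate_mbps=20)  # only 10 Mbps remain
```

A new VCC request that fits within the VPC's reserved capacity succeeds immediately; one that does not would instead trigger the full virtual path control mechanisms (route calculation, capacity allocation).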
The endpoints of a VCC may be end users, network entities, or an end user and a network entity. In all cases, cell sequence integrity is preserved within a VCC: that is, cells are delivered in the same order in which they are sent. Let us consider examples of the three uses of a VCC: • Between end users: Can be used to carry end-to-end user data; can also be used to carry control signaling between end users, as explained later. A VPC between end users provides them with an overall capacity; the VCC organization of the VPC is up to the two end users, provided the set of VCCs does not exceed the VPC capacity. • Between an end user and a network entity: Used for user-to-network control signaling, as discussed subsequently. A user-to-network VPC can be used to aggregate traffic from an end user to a network exchange or network server. • Between two network entities: Used for network traffic management and routing functions. A network-to-network VPC can be used to define a common route for the exchange of network management information.
ITU-T Recommendation I.150 lists the following as characteristics of virtual channel connections: • Quality of service (QoS): A user of a VCC is provided with a QoS specified by parameters such as cell loss ratio and cell delay variation. • Switched and semipermanent virtual channel connections: A switched VCC is an on-demand connection, which requires a call control signaling for setup and tearing down. A semipermanent VCC is one that is of long duration and is set up by configuration or network management action. • Cell sequence integrity: sequence of cells sent within a VCC is preserved. • Traffic parameter negotiation and usage monitoring: Traffic parameters can be negotiated between a user and the network for each VCC, including average rate, peak rate, burstiness, and peak duration. The network monitors the input of cells to the VCC, ensuring negotiated parameters are not violated. I.150 also lists characteristics of VPCs. The first four characteristics listed are identical to those for VCCs. The fifth characteristic listed for VPCs is: Virtual channel identifier restriction within a VPC: One or more virtual channel identifiers, or numbers, may not be available to the user of the VPC but may be reserved for network use. Examples include VCCs used for network management.
In ATM, a mechanism is needed for the establishment and release of VPCs and VCCs. The exchange of information involved in this process is referred to as control signaling and takes place on separate connections from those that are being managed. For VCCs, I.150 specifies four methods for providing an establishment/release facility. One or a combination of these methods will be used in any particular network: 1. Semipermanent VCCs may be used for user-to-user exchange. In this case, no control signaling is required. 2. If there is no preestablished call control signaling channel, then one must be set up. For that purpose, a control signaling exchange must take place between the user and the network on some channel. Hence we need a permanent channel, probably of low data rate, that can be used to set up VCCs that can be used for call control. Such a channel is called a meta-signaling channel , as the channel is used to set up signaling channels. 3. The meta-signaling channel can be used to set up a VCC between the user and the network for call control signaling. This user-to-network signaling virtual channel can then be used to set up VCCs to carry user data. 4. The meta-signaling channel can also be used to set up a user-to-user signaling virtual channel . Such a channel must be set up within a preestablished VPC. It can be used to allow the two end users, without network intervention, to establish and release user-to-user VCCs to carry user data.
For VPCs, three methods are defined in I.150: 1. A VPC can be established on a semipermanent basis by prior agreement. In this case, no control signaling is required. 2. VPC establishment/release may be customer controlled . In this case, the customer uses a signaling VCC to request the VPC from the network. 3. VPC establishment/release may be network controlled . In this case, the network establishes a VPC for its own convenience. The path may be network-to-network, user-to-network, or user-to-user.
The asynchronous transfer mode makes use of fixed-size cells, consisting of a 5-octet header and a 48-octet information field. There are several advantages to the use of small, fixed-size cells. First, the use of small cells may reduce queuing delay for a high-priority cell, because it waits less if it arrives slightly behind a lower-priority cell that has gained access to a resource (e.g., the transmitter). Second, it appears that fixed-size cells can be switched more efficiently, which is important for the very high data rates of ATM. With fixed-size cells, it is easier to implement the switching mechanism in hardware. Stallings DCC8e Figure 11.4a shows the cell header format at the user-network interface, and Figure 11.4b shows the cell header format internal to the network.
The Generic Flow Control (GFC) field does not appear in the cell header internal to the network, but only at the user-network interface, and is used for control of cell flow only at the local user-network interface, to alleviate short-term overload conditions in the network. The Virtual Path Identifier (VPI) constitutes a routing field for the network. It is 8 bits at the user-network interface and 12 bits at the network-network interface. The Virtual Channel Identifier (VCI) is used for routing to and from the end user. The Payload Type (PT) field indicates the type of information in the information field. A value of 0 in the first bit indicates user information. In this case, the second bit indicates whether congestion has been experienced; the third bit, known as the Service Data Unit (SDU) type bit, is a one-bit field that can be used to discriminate two types of ATM SDUs associated with a connection. Thus, the PT field can provide inband control information. The cell loss priority (CLP) bit is used to provide guidance to the network in the event of congestion. A value of 0 indicates a cell of relatively higher priority, which should not be discarded unless no other alternative is available. A value of 1 indicates that this cell is subject to discard within the network. The Header Error Control (HEC) field is used for both error control and synchronization, as explained subsequently.
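The UNI field widths described above (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1) fill exactly the first 4 header octets, with the HEC as the fifth. A minimal sketch of packing and parsing that layout — the function names are invented for illustration:

```python
def pack_uni_header(gfc, vpi, vci, pt, clp):
    """Pack the first 4 octets of a UNI cell header (HEC excluded).

    Field widths at the UNI: GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1.
    """
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big")

def parse_uni_header(octets):
    """Recover the header fields from the first 4 octets."""
    word = int.from_bytes(octets[:4], "big")
    return {
        "gfc": (word >> 28) & 0xF,
        "vpi": (word >> 20) & 0xFF,
        "vci": (word >> 4) & 0xFFFF,
        "pt":  (word >> 1) & 0x7,
        "clp": word & 0x1,
    }

hdr = pack_uni_header(gfc=0, vpi=5, vci=32, pt=0, clp=1)
assert parse_uni_header(hdr) == {"gfc": 0, "vpi": 5, "vci": 32, "pt": 0, "clp": 1}
```

At the network-network interface the GFC field is absent and the VPI grows to 12 bits, so the shifts would differ accordingly.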
I.150 specifies the use of the GFC field to control traffic flow at the user-network interface (UNI) in order to alleviate short-term overload conditions. The actual flow control mechanism is defined in I.361. When the equipment at the UNI is configured to support the GFC mechanism, two sets of procedures are used: uncontrolled transmission and controlled transmission. In essence, every connection is identified as either subject to flow control or not. Of those subject to flow control, there may be one group of controlled connections (Group A) that is the default, or controlled traffic may be classified into two groups of controlled connections (Group A and Group B); these are known, respectively, as the one-queue and two-queue models. Flow control is exercised in the direction from the subscriber to the network by the network side.
First consider the operation of the GFC mechanism when there is only one group of controlled connections. The controlled equipment (terminal equipment - TE), initializes two variables: TRANSMIT is a flag initialized to SET (1), and GO_CNTR, which is a credit counter, is initialized to 0. A third variable, GO_VALUE, is either initialized to 1 or set to some larger value at configuration time. The rules for transmission are as follows: 1. If TRANSMIT = 1, cells on uncontrolled connections may be sent at any time. If TRANSMIT = 0, no cells may be sent on either controlled or uncontrolled connections. 2. If a HALT signal is received from the controlling equipment, TRANSMIT is set to 0 and remains at zero until a NO_HALT signal is received, at which time TRANSMIT is set to 1. 3. If TRANSMIT = 1 and there is no cell to transmit on any uncontrolled connections, then — If GO_CNTR > 0, then the TE may send a cell on a controlled connection. The TE marks that cell as a cell on a controlled connection and decrements GO_CNTR. — If GO_CNTR = 0, then the TE may not send a cell on a controlled connection. 4. The TE sets GO_CNTR to GO_VALUE upon receiving a SET signal; a null signal has no effect on GO_CNTR.
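The four one-queue transmission rules above amount to a small credit-based state machine at the terminal. A sketch, with method names invented for the example:

```python
class GfcTerminal:
    """Sketch of the one-queue GFC rules at the controlled terminal (TE)."""

    def __init__(self, go_value=1):
        self.transmit = 1        # TRANSMIT flag initialized to SET (1)
        self.go_cntr = 0         # credit counter GO_CNTR starts at zero
        self.go_value = go_value # GO_VALUE fixed at configuration time

    # Signals from the controlling (network) equipment
    def on_halt(self):    self.transmit = 0      # rule 2: stop all sending
    def on_no_halt(self): self.transmit = 1
    def on_set(self):     self.go_cntr = self.go_value  # rule 4; null signal: no-op

    def may_send_uncontrolled(self):
        return self.transmit == 1                # rule 1

    def try_send_controlled(self):
        # Rule 3: only when TRANSMIT = 1 and a credit is available
        if self.transmit == 1 and self.go_cntr > 0:
            self.go_cntr -= 1                    # one credit per controlled cell
            return True
        return False

te = GfcTerminal(go_value=2)
assert not te.try_send_controlled()   # no credits granted yet
te.on_set()                           # network grants GO_VALUE credits
assert te.try_send_controlled() and te.try_send_controlled()
assert not te.try_send_controlled()   # credits exhausted until next SET
te.on_halt()
assert not te.may_send_uncontrolled() # HALT stops all transmission
```

(The sketch omits rule 3's precondition that no uncontrolled cell is waiting, which a real TE would check before dequeuing a controlled cell.)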
The HALT signal is used logically to limit the effective ATM data rate and should be cyclic. For example, to reduce the data rate over a link by half, the HALT command is issued by the controlling equipment so as to be in effect 50% of the time. This is done in a predictable, regular pattern over the lifetime of the physical connection.
For the two-queue model, there are two counters, each with a current counter value and an initialization value: GO_CNTR_A, GO_VALUE_A, GO_CNTR_B, and GO_VALUE_B. This enables the network to control two separate groups of connections.
Each ATM cell includes an 8-bit HEC field that is calculated based on the remaining 32 bits of the header. The polynomial used to generate the code is x^8 + x^2 + x + 1. In most existing protocols that include an error control field, such as HDLC, the data that serve as input to the error code calculation are in general much longer than the size of the resulting error code. This allows for error detection. In the case of ATM, the input to the calculation is only 32 bits, compared to 8 bits for the code. The fact that the input is relatively short allows the code to be used not only for error detection but also, in some cases, for actual error correction. This is because there is sufficient redundancy in the code to recover from certain error patterns. Stallings DCC8e Figure 11.5 depicts the operation of the HEC algorithm at the receiver. At initialization, the receiver's error correction algorithm is in the default mode for single-bit error correction. As each cell is received, the HEC calculation and comparison is performed. As long as no errors are detected, the receiver remains in error correction mode. When an error is detected, the receiver will correct the error if it is a single-bit error or will detect that a multibit error has occurred. In either case, the receiver now moves to detection mode.
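The HEC computation itself is an ordinary CRC-8 over the first four header octets with generator x^8 + x^2 + x + 1. A bit-at-a-time sketch (I.432 additionally XORs the remainder with 0x55 before transmission — a detail assumed here, not covered in the notes above):

```python
def atm_hec(header4):
    """CRC-8 of 4 header octets, generator x^8 + x^2 + x + 1 (0x07),
    MSB first, then XORed with 0x55 as I.432 specifies."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            # shift left; on overflow of x^8, reduce by the generator
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

header = bytes([0x00, 0x50, 0x02, 0x00])          # example first 4 octets
cell_header = header + bytes([atm_hec(header)])   # 5-octet header with HEC
# Receiver recomputes the HEC over the first 4 octets and compares:
assert atm_hec(cell_header[:4]) == cell_header[4]
```

Because the code has 8 check bits protecting only 32 data bits, a receiver can also map each nonzero syndrome back to a single-bit position, which is what makes the correction mode in Figure 11.5 possible.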
In detection mode, no attempt is made to correct errors. The reason for this change is a recognition that a noise burst or other event might cause a sequence of errors, a condition for which the HEC is insufficient for error correction. The receiver remains in detection mode as long as errored cells are received. When a header is examined and found not to be in error, the receiver switches back to correction mode. The flowchart of Stallings DCC8e Figure 11.6 shows the consequence of errors in the cell header.
The error-protection function provides both recovery from single-bit header errors and a low probability of the delivery of cells with errored headers under bursty error conditions. The error characteristics of fiber-based transmission systems appear to be a mix of single-bit errors and relatively large burst errors. For some transmission systems, the error correction capability, which is more time-consuming, might not be invoked. Stallings DCC8e Figure 11.7, based on one in ITU-T I.432, indicates how random bit errors impact the probability of occurrence of discarded cells and valid cells with errored headers when HEC is employed.
I.432 specifies that ATM cells may be transmitted at one of several data rates: 622.08 Mbps, 155.52 Mbps, 51.84 Mbps, or 25.6 Mbps. We need to specify the transmission structure that will be used to carry this payload. Two approaches are defined in I.432: a cell-based physical layer and an SDH-based physical layer. We examine each of these approaches in turn.
For the cell-based physical layer, no framing is imposed. The interface structure consists of a continuous stream of 53-octet cells. Because there is no external frame imposed in the cell-based approach, some form of synchronization is needed. Synchronization is achieved on the basis of the HEC field in the cell header.
Figure 11.8 shows the procedure used as follows: 1. In the HUNT state, a cell delineation algorithm is performed bit by bit to determine if the HEC coding law is observed (i.e., match between received HEC and calculated HEC). Once a match is achieved, it is assumed that one header has been found, and the method enters the PRESYNC state. 2. In the PRESYNC state, a cell structure is now assumed. The cell delineation algorithm is performed cell by cell until the encoding law has been confirmed δ times consecutively. 3. In the SYNC state, the HEC is used for error detection and correction (see Figure 11.5). Cell delineation is assumed to be lost if the HEC coding law is recognized as incorrect α times consecutively. The values of α and δ are design parameters. Greater values of δ result in longer delays in establishing synchronization but in greater robustness against false delineation. Greater values of α result in longer delays in recognizing a misalignment but in greater robustness against false misalignment. The advantage of using a cell-based transmission scheme is the simplified interface that results when both transmission and transfer mode functions are based on a common structure.
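The three-state procedure can be sketched as a per-cell transition function. The defaults α = 7 and δ = 6 are the values suggested in I.432; the function and counter layout are assumptions made for this example:

```python
def delineation_step(state, hec_ok, counters, alpha=7, delta=6):
    """One cell's worth of the HUNT/PRESYNC/SYNC delineation machine.

    counters = [consecutive_good, consecutive_bad]; alpha = 7 and
    delta = 6 are the example values suggested in I.432.
    """
    if state == "HUNT":
        if hec_ok:                       # candidate header found bit by bit
            counters[0] = 1
            return "PRESYNC"
        return "HUNT"
    if state == "PRESYNC":
        if not hec_ok:
            return "HUNT"                # coding law violated: back to hunting
        counters[0] += 1
        return "SYNC" if counters[0] >= delta else "PRESYNC"
    # SYNC: count consecutive bad HECs; alpha in a row => delineation lost
    counters[1] = counters[1] + 1 if not hec_ok else 0
    return "HUNT" if counters[1] >= alpha else "SYNC"

state, counters = "HUNT", [0, 0]
for ok in [True] * 6:                    # one match plus delta-1 confirmations
    state = delineation_step(state, ok, counters)
assert state == "SYNC"
for ok in [False] * 7:                   # alpha consecutive violations
    state = delineation_step(state, ok, counters)
assert state == "HUNT"
```

Raising `delta` makes the climb from PRESYNC to SYNC slower but harder to trigger falsely; raising `alpha` makes a genuine misalignment take longer to notice but a burst of errors less likely to drop a good alignment.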
Stallings DCC8e Figures 11.9 and 11.10, based on I.432, show the impact of random bit errors on cell delineation performance for various values of α and δ. The first figure shows the average amount of time that the receiver will maintain synchronization in the face of errors, with α as a parameter.
The second figure shows the average amount of time to acquire synchronization as a function of error rate, with δ as a parameter.
The SDH-based physical layer imposes a structure on the ATM cell stream. In this section, we look at the I.432 specification for 155.52 Mbps; similar structures are used at other data rates. The advantages of the SDH-based approach include: • It can be used to carry either ATM-based or STM-based (synchronous transfer mode) payloads, making it possible to initially deploy a high-capacity fiber-based transmission infrastructure for a variety of circuit-switched and dedicated applications and then readily migrate to the support of ATM. • Some specific connections can be circuit switched using an SDH channel. For example, a connection carrying constant-bit-rate video traffic can be mapped into its own exclusive payload envelope of the STM-1 signal, which can be circuit switched. This may be more efficient than ATM switching. • Using SDH synchronous multiplexing techniques, several ATM streams can be combined to build interfaces with higher bit rates than those supported by the ATM layer at a particular site. For example, four separate ATM streams, each with a bit rate of 155 Mbps (STM-1), can be combined to build a 622-Mbps (STM-4) interface. This arrangement may be more cost effective than one using a single 622-Mbps ATM stream.
For the SDH-based physical layer, framing is imposed using the STM-1 (STS-3) frame. Stallings DCC8e Figure 11.11 shows the payload portion of an STM-1 frame (for comparison, see Stallings DCC8e Figure 8.11). This payload may be offset from the beginning of the frame, as indicated by the pointer in the section overhead of the frame. As can be seen, the payload consists of a 9-octet path overhead portion and the remainder, which contains ATM cells. Because the payload capacity (2340 octets) is not an integer multiple of the cell length (53 octets), a cell may cross a payload boundary. The H4 octet in the path overhead is set at the sending side to indicate the next occurrence of a cell boundary. That is, the value in the H4 field indicates the number of octets to the first cell boundary following the H4 octet. The permissible range of values is 0 to 52.
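The arithmetic behind the H4 pointer is worth making concrete: 2340 is not a multiple of 53, so the first cell boundary drifts from frame to frame by a fixed modular step. The `next_h4` helper below is an illustrative simplification of the pointer update, not the exact I.432 procedure:

```python
# The STM-1 payload carries 2340 octets but cells are 53 octets, so cell
# boundaries drift from frame to frame; H4 points to the next boundary.
payload_octets = 2340
cell_octets = 53

full_cells, leftover = divmod(payload_octets, cell_octets)
assert (full_cells, leftover) == (44, 8)   # 44 whole cells, 8 octets spill over

# If a frame's first cell boundary sits h4 octets in, the cell that spills
# into the next frame still needs (53 - leftover) octets there, so:
def next_h4(h4):
    return (h4 + cell_octets - leftover) % cell_octets

assert next_h4(0) == 45    # 44 cells end at octet 2332; 8 octets of cell 45
                           # spill over, so its boundary falls 45 octets in
```

This also shows why the permissible H4 range is 0 to 52: the offset is always a residue modulo the 53-octet cell length.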
An ATM network is designed to be able to transfer many different types of traffic simultaneously, including real-time flows such as voice and video, and bursty TCP flows. Although each such traffic flow is handled as a stream of 53-octet cells traveling through a virtual channel, the way in which each data flow is handled within the network depends on the characteristics of the traffic flow and the requirements of the application. For example, real-time video traffic must be delivered with minimum variation in delay. In this section, we summarize ATM service categories, which are used by an end system to identify the type of service required. The following service categories have been defined by the ATM Forum. Real-time services: the most important distinction among applications concerns the amount of delay and the variability of delay, referred to as jitter, that the application can tolerate. Real-time applications typically involve a flow of information to a user that is intended to reproduce that flow at a source. Variants include constant bit rate (CBR) and real-time variable bit rate (rt-VBR). Non-real-time services: intended for applications that have bursty traffic characteristics and do not have tight constraints on delay and delay variation. Accordingly, the network has greater flexibility in handling such traffic flows and can make greater use of statistical multiplexing to increase network efficiency. Variants include: non-real-time variable bit rate (nrt-VBR), available bit rate (ABR), unspecified bit rate (UBR), and guaranteed frame rate (GFR).
Real-time applications typically involve a flow of information to a user that is intended to reproduce that flow at a source. For example, a user expects a flow of audio or video information to be presented in a continuous, smooth fashion. A lack of continuity or excessive loss results in significant loss of quality. Applications that involve interaction between people have tight constraints on delay. Typically, any delay above a few hundred milliseconds becomes noticeable and annoying. The CBR service is perhaps the simplest real-time service to define. It is used by applications that require a fixed data rate that is continuously available during the connection lifetime and a relatively tight upper bound on transfer delay. CBR is commonly used for uncompressed audio and video information. Examples of CBR applications include: • Videoconferencing • Interactive audio (e.g., telephony) • Audio/video distribution (e.g., television, distance learning, pay-per-view) • Audio/video retrieval (e.g., video-on-demand, audio library)
The rt-VBR category is intended for time-sensitive applications; that is, those requiring tightly constrained delay and delay variation. The principal difference between applications appropriate for rt-VBR and those appropriate for CBR is that rt-VBR applications transmit at a rate that varies with time. Equivalently, an rt-VBR source can be characterized as somewhat bursty. For example, the standard approach to video compression results in a sequence of image frames of varying sizes. Because real-time video requires a uniform frame transmission rate, the actual data rate varies. The rt-VBR service allows the network more flexibility than CBR. The network is able to statistically multiplex a number of connections over the same dedicated capacity and still provide the required service to each connection.
Non-real-time services are intended for applications that have bursty traffic characteristics and do not have tight constraints on delay and delay variation. Accordingly, the network has greater flexibility in handling such traffic flows and can make greater use of statistical multiplexing to increase network efficiency. For some non-real-time applications, it is possible to characterize the expected traffic flow so that the network can provide substantially improved QoS in the areas of loss and delay. Such applications can use the nrt-VBR service. With this service, the end system specifies a peak cell rate, a sustainable or average cell rate, and a measure of how bursty or clumped the cells may be. With this information, the network can allocate resources to provide relatively low delay and minimal cell loss. The nrt-VBR service can be used for data transfers that have critical response-time requirements. Examples include airline reservations, banking transactions, and process monitoring.
At any given time, a certain amount of the capacity of an ATM network is consumed in carrying CBR and the two types of VBR traffic. Additional capacity is available for one or both of the following reasons: (1) Not all of the total resources have been committed to CBR and VBR traffic, and (2) the bursty nature of VBR traffic means that at some times less than the committed capacity is being used. All of this unused capacity could be made available for the UBR service. This service is suitable for applications that can tolerate variable delays and some cell losses, which is typically true of TCP-based traffic. With UBR, cells are forwarded on a first-in-first-out (FIFO) basis using the capacity not consumed by other services; both delays and variable losses are possible. No initial commitment is made to a UBR source and no feedback concerning congestion is provided; this is referred to as a best-effort service . Examples of UBR applications include: Text/data/image transfer, messaging, distribution, retrieval; or Remote terminal (e.g., telecommuting).
Bursty applications that use a reliable end-to-end protocol such as TCP can detect congestion in a network by means of increased round-trip delays and packet discarding. This is discussed in Stallings DCC8e Chapter 20. However, TCP has no mechanism for causing the resources within the network to be shared fairly among many TCP connections. Further, TCP does not minimize congestion as efficiently as is possible using explicit information from congested nodes within the network. To improve the service provided to bursty sources that would otherwise use UBR, the ABR service has been defined. An application using ABR specifies a peak cell rate (PCR) that it will use and a minimum cell rate (MCR) that it requires. The network allocates resources so that all ABR applications receive at least their MCR capacity. Any unused capacity is then shared in a fair and controlled fashion among all ABR sources. The ABR mechanism uses explicit feedback to sources to assure that capacity is fairly allocated. Any capacity not used by ABR sources remains available for UBR traffic. An example of an application using ABR is LAN interconnection. In this case, the end systems attached to the ATM network are routers.
Stallings DCC8e Figure 11.12 suggests how a network allocates resources during a steady-state period of time (no additions or deletions of virtual channels).