An apparatus and method for dynamic assignment of classes of traffic to a priority queue. Bandwidth consumption by one or more types of packet traffic received in the packet forwarding device is monitored to determine whether the bandwidth consumption exceeds a threshold. If the bandwidth consumption exceeds the threshold, assignment of at least one type of packet traffic of the one or more types of packet traffic is changed from a queue having a first priority to a queue having a second priority.
https://www.google.com/patents/EP1142213B1?cl=en&dq=EP+1142213&hl=en&sa=X&ei=4MNTVPL3GoGlmQWhxICACw&ved=0CB8Q6AEwAA
Dynamic assignment of traffic classes to a priority queue in a packet forward... - Tal Lavian, Ph.D.
An apparatus and method for dynamic assignment of classes of traffic to a priority queue. Bandwidth consumption by one or more types of packet traffic received in the packet forwarding device is monitored to determine whether the bandwidth consumption exceeds a threshold. If the bandwidth consumption exceeds the threshold, assignment of at least one type of packet traffic of the one or more types of packet traffic is changed from a queue having a first priority to a queue having a second priority.
https://www.google.com/patents/US7710871?dq=US+7710871&hl=en&sa=X&ei=O3RSVL_0L4LMmAXF8ILQBQ&ved=0CB8Q6AEwAA
This document describes a universal baseband processing unit (UBBP) that can be installed in BBU3900 or BBU3910 base stations. It discusses the types of UBBP boards, their specifications for supporting different modes like GSM, UMTS, LTE, and the number of carriers, cells and throughput they can handle. Tables provide details on the board types, applicable modes, carrier and cell specifications, maximum throughput and UEs supported under different configurations.
Qualcomm developed an LTE essential patent portfolio through patent engineering of its existing patents related to channel coding and modulation schemes. The document describes Qualcomm's strategy to design claim terms in continuation applications of issued patents to be essential to 3GPP LTE specifications for channel coding and modulation. This allowed Qualcomm to strengthen its patent portfolio in an area of LTE that will be widely implemented in LTE devices.
A study of throughput for Iu-CS and Iu-PS interfaces in the UMTS core network - Pfedya
This document proposes algorithms to calculate throughput for interfaces in the UMTS core network, specifically Iu-CS and Iu-PS interfaces. It describes the protocol stacks and overhead for each interface. Formulas are provided to calculate bandwidth for a voice channel and video call on Iu-CS based on codec type and efficiency of protocols like AAL2. The document also provides a formula to calculate total bandwidth of the Iu-CS interface based on number of subscribers and traffic proportions. A similar approach is taken for the Iu-PS interface. Finally, a case study applies the algorithms to dimension a UMTS core network topology with specific traffic parameters.
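The abstract does not reproduce the paper's actual formulas, but the general shape of such a dimensioning calculation can be sketched as follows; the function name and every default value (Erlangs per subscriber, codec rate, AAL2 efficiency) are illustrative assumptions, not figures from the document.

```python
# Hypothetical sketch of an Iu-CS voice-bandwidth calculation of the kind the
# paper describes. All names and default values are illustrative assumptions.
def iu_cs_bandwidth_bps(subscribers, erlang_per_sub=0.025,
                        codec_rate_bps=12_200, aal2_efficiency=0.85):
    voice_channels = subscribers * erlang_per_sub  # busy-hour traffic (Erlangs)
    raw_bps = voice_channels * codec_rate_bps      # payload bit rate
    return raw_bps / aal2_efficiency               # inflate for AAL2/transport overhead

# e.g. 10,000 subscribers at the assumed defaults:
print(round(iu_cs_bandwidth_bps(10_000) / 1e6, 2), "Mbit/s")
```

A real dimensioning exercise would, as the paper's case study does, weight this by traffic proportions (voice vs. video) and per-service codec rates.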
The document provides an overview of LTE (Long Term Evolution) Release 8. It discusses key requirements for LTE such as supporting high data rates, low latency, and an all-IP network. It describes the network architecture including components like eNodeB, MME, S-GW, and P-GW. It also covers functionality of these components and the protocol stack consisting of PDCP, RLC, MAC, and RRC layers. Mobility management, QoS, and comparisons to other technologies like HSPA+ and WiMAX are also summarized.
This document discusses advanced topics in LTE including MIMO modes, codebook-based precoding, closed loop operation, CQI reporting modes, and using antenna port 5 techniques. It provides details on codebook-based spatial multiplexing, CQI reporting tables, adaptive coding and modulation, MIMO channel estimation, and MIMO transmission modes in LTE. It aims to outline these advanced LTE techniques and their operation.
This document provides an overview of Plesiochronous Digital Hierarchy (PDH) and Synchronous Digital Hierarchy (SDH) networks. It discusses the limitations of PDH networks and how SDH was developed to address these. The key aspects of SDH covered include the frame structure, overhead analysis, multiplexing structure, tributary units, and network protection mechanisms such as linear and ring-based protection.
This document discusses essential patents for LTE standards. It identifies 71 patents as strong candidates for LTE PHY essential patents held by companies like Qualcomm, Samsung, LG, InterDigital and others. It analyzes a potential essential patent for downlink reference signals and discusses strategies for drafting essential patent claims and developing patent portfolios, using Qualcomm's strategy as a case study.
1) The document describes the downlink physical channels of LTE including the DL-SCH, PBCH, PDSCH, PDCCH, PCFICH, and PHICH.
2) It discusses design constraints for LTE including keeping the cyclic prefix smaller than the symbol length and larger than the delay spread to avoid overhead and interference. The subcarrier spacing must also be large enough to overcome Doppler shifts from UE motion.
3) The placement of reference signals is described: they must be spaced at least every 0.5 ms in time to track fast-changing channels, and every six subcarriers in frequency (an effective 45 kHz spacing once the frequency-staggered reference symbols are combined) to resolve channel variations.
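The cyclic-prefix constraint in item 2 can be sanity-checked numerically; the numerology below uses the well-known LTE normal-CP values (roughly 4.7 µs CP, 66.7 µs symbol), while the 2 µs delay spread is an illustrative channel assumption.

```python
# Check the ordering delay_spread < CP < symbol length from item 2.
symbol_us = 66.7        # LTE OFDM symbol duration (15 kHz subcarrier spacing)
cp_us = 4.7             # normal cyclic prefix
delay_spread_us = 2.0   # illustrative multipath delay spread
assert delay_spread_us < cp_us < symbol_us
overhead = cp_us / (symbol_us + cp_us)  # fraction of airtime spent on the CP
print(f"CP overhead: {overhead:.1%}")   # prints "CP overhead: 6.6%"
```

The asymmetry is the point: the CP must be long enough to absorb the channel's echoes but short enough that the overhead fraction stays small.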
SDH is a standard protocol for transmitting data over optical fiber at high speeds, up to 40 Gb/s. It uses multiplexing to carry multiple tributary streams over a single fiber, allowing reliable transmission of many PDH signals. The basic unit of SDH is STM-1, which transports data at 155.52 Mb/s; higher levels run at multiples of this rate. SDH provides bandwidth flexibility and allows interconnection between networks from multiple vendors.
The document discusses the differences between SDH and PDH, as well as key aspects of SDH. SDH provides higher transmission rates up to 40 Gbit/s, simplified add and drop functions, high availability and capacity matching, reliability, and is a future-proof platform for new services compared to PDH. SDH uses synchronous multiplexing where data from multiple sources is byte interleaved at fixed locations in the frame. This allows single channels to be dropped from the data stream without demultiplexing intermediate rates as required in PDH.
The document outlines various digital communication standards including their nominal bit rates, actual line rates, equivalent voice channels supported, and corresponding PDH, SONET, and SDH terminology. It provides a mapping between common digital communication standards like T1, E1, E3, OC-1, and others in both North America and Europe/Japan.
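As a quick reference for the mapping the document describes, the nominal rates of a few of those standards are collected below; these are the standard textbook figures, not values taken from the document itself.

```python
# Nominal line rates in Mbit/s for common PDH/SONET/SDH levels.
rates_mbps = {
    "T1 (DS1)": 1.544,       # North American PDH, 24 voice channels
    "E1": 2.048,             # European PDH, 30 voice channels
    "E3": 34.368,            # European PDH
    "OC-1 / STS-1": 51.84,   # SONET base level
    "OC-3 / STM-1": 155.52,  # first common SONET/SDH level
}
for name, rate in rates_mbps.items():
    print(f"{name}: {rate} Mbit/s")
```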
US4146749: Telecommunications network having multi-function spare network blocks - satyanpitroda
This patent describes a telecommunications switching system with an "all time" digital network. The network is divided into multiple primary blocks that are assigned groups of access ports. A single spare network block is provided that has a programmable identity and can function in place of any faulty primary block. If a primary block malfunctions, the spare block is assigned its identity, enabled to take over call processing, and the faulty block is disabled, allowing call processing to continue seamlessly.
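The takeover sequence in the summary can be sketched in a few lines; the class and field names here are illustrative, not drawn from the patent text.

```python
# Illustrative model of the spare-block takeover described above.
class NetworkBlock:
    def __init__(self, identity, enabled=True):
        self.identity = identity
        self.enabled = enabled

def take_over(faulty, spare):
    spare.identity = faulty.identity  # spare assumes the faulty block's identity
    spare.enabled = True              # spare takes over call processing
    faulty.enabled = False            # faulty block is disabled

primary = NetworkBlock("block-3")
spare = NetworkBlock("spare", enabled=False)
take_over(primary, spare)
print(spare.identity, spare.enabled, primary.enabled)  # block-3 True False
```

The programmable identity is what lets one spare cover any primary block: it carries no fixed role until a failure assigns it one.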
The document provides an overview of Packet over SONET/SDH (PoS) and related technologies. It discusses the OSI model and various internet protocols like TCP, UDP, IP, and how they relate to PoS. PoS allows efficient transport of IP traffic over SONET/SDH networks. It offers benefits like utilizing existing infrastructure while efficiently transporting various data, voice, and video traffic with less overhead than alternative protocols. The document also covers applications and measurements of PoS performance and connectivity.
This document discusses WCDMA channels at different levels including logical channels, transport channels, and physical channels. It provides details on:
- Logical channels describe the type of information transferred and include control and traffic channels.
- Transport channels describe how logical channels are transferred over the interface and include dedicated and common channels.
- Physical channels provide the transmission medium and are defined by specific codes. They include channels like DPDCH, DPCCH, PDSCH, PRACH, and CPICH.
- The document also discusses the radio frame structure in WCDMA and details on different physical channel types and their characteristics.
- LTE technology and testing requirements were discussed. Key aspects of LTE include OFDMA in the downlink, SC-FDMA in the uplink, and support for MIMO.
- The document outlined the motivation for LTE to increase data rates and improve latency compared to previous standards. It also described the basic LTE frame structure and multiple access schemes.
- Testing requirements for LTE included conformance testing of devices as well as field testing during network deployment and optimization.
This document outlines an agenda for a presentation on LTE basics and advanced topics. The presentation will cover LTE fundamentals including frame structures, reference signals, physical channels, signal processing architecture, and UE categories. It will then discuss advanced LTE topics such as MIMO modes, precoding techniques, CQI reporting, and LTE-Advanced developments. Diagrams and explanations are provided on key aspects of the LTE physical layer such as OFDMA transmission schemes, frame formats, reference signal patterns, and the transmitter and receiver processing chains.
This document provides an overview of the LTE physical channel structure and procedures between the eNB and UE. It describes the LTE architecture and introduces the main physical channels including downlink channels like PBCH, PDCCH, PDSCH and uplink channels like PUSCH, PUCCH, PRACH. It explains the channel mapping and provides examples of the initial access procedure and synchronization signal transmission. Key concepts covered are radio interface protocol stacks, channel coding, multiple access, and reference signals.
The document is a tutorial on Long Term Evolution (LTE) technology. It provides an overview of LTE architecture, which includes the Evolved Packet Core and E-UTRAN access network. It also describes the LTE radio interface protocol layers and channels. The document discusses topics like LTE scheduling, hybrid ARQ, and the Multimedia Broadcast Multicast Service in LTE.
SDH (Synchronous Digital Hierarchy) & Its Architecture - ijsrd.com
SDH (Synchronous Digital Hierarchy) concerns transferring large amounts of data over the same optical fiber; this document describes the structure and architecture of SDH.
The document discusses PCI (Physical Cell Identity) planning in LTE networks. It describes the cell search process where the UE detects the PCI from the PSS and SSS. The PCI is used to determine the location of reference signals and avoid interference. The document recommends strategies for PCI planning such as assigning color groups to sectors and code groups to sites to avoid conflicting PCI combinations in adjacent cells. It also discusses tools to analyze potential PCI interference and make changes to mitigate issues.
The document discusses the installation and configuration of an STM-16 synchronous digital hierarchy (SDH) transmission link between Ramna and SBN sites. Key steps included installing single-mode fiber, optical multiplexer equipment, electrical interfaces and cross-connects. Testing validated the optical power levels, fiber continuity and service commissioning over the 2.5 Gbps link. Minor issues were addressed during installation and the STM-16 SDH connection was completed successfully.
This document provides an introduction to LTE/E-UTRA technology, including both FDD and TDD modes of operation. It describes the key requirements for UMTS Long Term Evolution such as high data rates, low latency, and improved spectrum efficiency compared to previous standards. The document then covers various aspects of the LTE standard, including the OFDMA downlink and SC-FDMA uplink transmission schemes, MIMO concepts, protocol architecture, UE capabilities, and testing considerations. Abbreviations used and additional references are also provided.
The document discusses the development of 40 Gigabit Ethernet and 100 Gigabit Ethernet standards. It notes that in 2006, the IEEE determined these faster speeds were needed - 40 Gbps for computing and 100 Gbps for network aggregation. The IEEE formed a task force in 2008 to develop these standards. Key aspects included preserving the Ethernet frame format while supporting faster speeds over fiber and copper cable. The physical coding sublayer implements a multilane distribution scheme to help meet engineering challenges, distributing data across multiple "lanes" to support various interface widths.
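The multilane distribution scheme is essentially a round-robin deal of encoded blocks across lanes. This toy version (lane count and block values are illustrative) shows the principle, omitting the alignment markers the real PCS inserts.

```python
# Round-robin distribution of encoded blocks across PCS lanes.
def distribute(blocks, n_lanes):
    lanes = [[] for _ in range(n_lanes)]
    for i, block in enumerate(blocks):
        lanes[i % n_lanes].append(block)  # deal blocks to lanes in turn
    return lanes

print(distribute(list(range(8)), 4))  # [[0, 4], [1, 5], [2, 6], [3, 7]]
```

Because each lane is just a fixed stride through the block stream, the receiver can reassemble the original order regardless of how the lanes are mapped onto physical interface widths.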
TTI bundling is a technique used in LTE to improve uplink coverage for voice calls by transmitting the same transport block containing voice data over multiple consecutive subframes without waiting for HARQ feedback. This provides a coding gain of up to 4dB compared to single subframe transmission, allowing power-limited UEs at the cell edge to be received with sufficient quality. TTI bundling can be implemented in both FDD and TDD LTE networks but with some differences due to limitations on consecutive uplink subframes in TDD configurations. It provides lower latency voice transmission compared to alternatives like RLC segmentation while reducing overhead.
Synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) are standardized protocols that transfer multiple digital bit streams synchronously over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At low transmission rates data can also be transferred via an electrical interface. The method was developed to replace the plesiochronous digital hierarchy (PDH) system for transporting large amounts of telephone calls and data traffic over the same fiber without synchronization problems.
The document discusses the frame structure of Synchronous Digital Hierarchy (SDH). It explains that an SDH frame is transmitted every 125 microseconds and contains 9 rows and 270 columns of bytes for a total of 19,440 bits. This equates to a basic data rate of 155.52 megabits per second. The frame contains sections for regenerator and multiplexer section overhead as well as a payload area. Lower level signals can be mapped and multiplexed into the payload area through a process that includes mapping, aligning, pointer processing and multiplexing.
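The arithmetic in that description checks out: 9 rows by 270 columns of bytes, sent every 125 µs, gives exactly the STM-1 rate.

```python
# Verify the STM-1 frame figures quoted above.
rows, cols = 9, 270
bits_per_frame = rows * cols * 8   # bytes -> bits
frames_per_second = 8000           # one frame every 125 microseconds
rate_mbps = bits_per_frame * frames_per_second / 1e6
print(bits_per_frame)  # 19440
print(rate_mbps)       # 155.52
```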
This short document welcomes the reader to a villa and hints that it may be their last visit, encouraging them to remember it. It concludes by saying see you next time, implying the reader is welcome to return to the villa in the future.
SHAHID INTERNATIONAL TRADERS is a worldwide exporter of all kinds of surgical and dental instruments, beauty care instruments, and manicure and pedicure instruments. Our quality is better than similar products/brands available. Please furnish us with your demand, giving the instrument names and numbers with complete specifications.
INSPECTION OF INSTRUMENTS: - Surgical Instruments Manufacturers Association of Pakistan
Certificate from Sialkot Material Testing Laboratory
QUANTITY: - Minimum and Maximum orders are accepted
PAYMENT TERMS: - 50% Amount T/T, Balance Amount L/C on sight Irrevocable
SHIPMENTS: - We can deliver within 30 days after receiving the T/T & L/C
SAMPLES AVAILABILITY: - Yes, we can send you samples; please send us 100 pounds first and we will send samples for quality approval. When you place the order, we will refund the sample payment; we hope you understand.
If you need any further information, please write to us.
Thankful & Best Regards
Chairman: - Muhammad Shahid
SHAHID INTERNATIONAL TRADERS
Jeff created a PowerPoint presentation, set to music by Barbra Streisand, titled "Memories". The presentation was about the art of photography. An additional link was provided to a blog about photopoetry.
The International Consortium on Governmental Financial Management (ICGFM) is a non-profit founded in 1978 that brings together leaders and practitioners in public financial management (PFM) to facilitate sharing best practices. It works to improve PFM at local, state/provincial, and national government levels through exchanging information via conferences, training programs, and its journal. The ICGFM also develops guidance on PFM topics like accrual-based IPSAS implementation. It has members from both organizations and individuals working in the PFM field.
This document contains captions for various astronomical images taken by the Hubble Space Telescope, including images of Andromeda Galaxy, stellar dust, gas clouds, black holes, nebulae like the Boomerang Nebula and Carina Nebula, comets, galaxies, auroras, stars, Jupiter, planets, and pulsars.
The document discusses recommendations for improving the ICGFM website and increasing collaboration and knowledge sharing among members. It proposes a staged approach:
1) Updating the existing website to improve navigation, search, and break content into smaller pieces.
2) Publishing content to social media sites like SlideShare and Scribd.
3) Creating a wiki for members to capture expertise and engage in discussion.
4) Developing an ICGFM social network on a site like Ning or Facebook.
This staged approach would modernize how ICGFM shares information and allow it to see engagement with each new technology.
The document discusses various angle properties in polygons. It defines different types of polygons and explains properties such as the sum of the interior angles of a polygon being 180(n-2) degrees, where n is the number of sides. The document also covers exterior angles and their relationship to interior angles, as well as properties of regular polygons where all interior angles are equal.
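The angle properties summarized above can be checked with a short sketch (the function names are illustrative, not from the document):

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-sided polygon, in degrees: 180(n-2)."""
    if n < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return 180 * (n - 2)

def regular_interior_angle(n):
    """Each interior angle of a regular n-gon (all interior angles equal)."""
    return interior_angle_sum(n) / n

def regular_exterior_angle(n):
    """Exterior angle; interior + exterior = 180 degrees at each vertex."""
    return 180 - regular_interior_angle(n)

print(interior_angle_sum(5))        # pentagon: 540
print(regular_interior_angle(6))    # regular hexagon: 120.0
print(regular_exterior_angle(6))    # 60.0
```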
This document provides instructions for setting your email signature in Outlook to the Six Senses standard format. The steps are:
1. Click Tools > Options > Mail Format > Fonts and set the font to Comic Sans MS, size 10, and navy blue color.
2. Click Signatures to edit your signature.
3. Open the signature template file from the provided location, copy it, paste it into your signature, and save it as a web page.
4. Click OK to apply the new signature.
This poem reflects on quietly looking back at one's life. It describes remembering the past with nostalgia, from enjoying simple pleasures in their youth to gaining experience and wisdom as they grew older. The poem ends by expressing that life is a journey that should be cherished at each stage.
Final May 11 2011 ultimate dream destination Evason Six Senses Thailand - Danai Thongsin
This document provides room rates and package deals for three resort properties in Thailand: Six Senses Samui, Six Senses Yao Noi, and Evason Hua Hin. At Six Senses Samui, nightly room rates in May-July start at 9,500 Baht for a Hideaway Villa and 13,000 Baht for a Pool Villa. The two-night Ultimate Package includes daily breakfast, transfers, massages, and a dinner. At Six Senses Yao Noi, nightly Garden and Ocean Pool Villa rates are 9,600-13,200 Baht. The two-night package includes breakfast, transfers and a dinner. Rates at Evason Hua Hin start at 9,
Financial Sector Strategies for Public Financial Managers - icgfmconference
Debra C. von Koch, Associate Director for Government Debt Issuance and Management,
Office of Technical Assistance, United States Department of the Treasury
“Financial Sector Strategies for Public Financial Managers”
The presenters will discuss strategies and tactics that can be utilized in responding to current financial sector and debt market challenges.
Culcha Candela is a German hip hop group formed in 2001. The group blends dancehall, reggae, and hip hop styles and raps in English, German, Spanish and Patois. Original members included Johnny Strange, Itchyban, and Lafrotino. Current members are Johnny Strange, Itchyban, Laristo, Mr. Reedoo, and Don Cali. They have released several successful albums between 2004-2009 and have singles that range from serious topics to funny songs.
Recent public financial management publications and other resources - icgfmconference
As initiated in our last issue, we end this issue with a section introducing recent public financial management publications and other resources which we hope will be of interest to readers of the Journal. We would be pleased to receive reviews and suggestions of other resources which we should refer to in future issues.
Dynamic assignment of traffic classes to a priority queue in a packet forward... - Tal Lavian Ph.D.
An apparatus and method for dynamic assignment of classes of traffic to a priority queue. Bandwidth consumption by one or more types of packet traffic received in the packet forwarding device is monitored to determine whether the bandwidth consumption exceeds a threshold. If the bandwidth consumption exceeds the threshold, assignment of at least one type of packet traffic of the one or more types of packet traffic is changed from a queue having a first priority to a queue having a second priority.
https://www.google.com/patents/US20040076161?dq=20040076161&hl=en&sa=X&ei=yONVVPbCJ8OXuQS_7oGAAg&ved=0CB8Q6AEwAA
Dynamic assignment of traffic classes to a priority queue in a packet forward... - Tal Lavian Ph.D.
This document describes a method for dynamically assigning traffic classes to priority queues in a packet forwarding device. The device monitors bandwidth consumption of different types of packet traffic and compares it to thresholds. If bandwidth consumption of a type exceeds a threshold, the traffic class is automatically moved from a lower priority queue to a higher priority queue. This helps ensure important traffic like data from servers is not impeded by less important traffic when network usage is high.
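The monitoring-and-reassignment loop described above can be sketched as follows; the queue names, the threshold value, and the promote-on-exceed policy are illustrative assumptions, not details taken from the patent:

```python
class DynamicQueueAssigner:
    """Toy model: move a traffic class between priority queues by bandwidth."""

    def __init__(self, threshold_bps):
        self.threshold_bps = threshold_bps
        self.assignment = {}   # traffic class -> "high" or "low" priority queue
        self.consumed = {}     # traffic class -> bits seen in the current interval

    def observe(self, traffic_class, bits):
        # Called per packet (or per sample) to accumulate bandwidth usage.
        self.consumed[traffic_class] = self.consumed.get(traffic_class, 0) + bits

    def reassess(self, interval_s):
        # Compare each class's measured rate to the threshold and reassign.
        for cls, bits in self.consumed.items():
            rate = bits / interval_s
            if rate > self.threshold_bps:
                self.assignment[cls] = "high"   # move class to the other queue
            else:
                self.assignment.setdefault(cls, "low")
            self.consumed[cls] = 0              # start the next interval fresh
        return self.assignment

a = DynamicQueueAssigner(threshold_bps=1_000_000)
a.observe("server-data", 5_000_000)   # 5 Mb during the interval
a.observe("bulk", 100_000)
print(a.reassess(interval_s=1.0))     # "server-data" crossed 1 Mb/s: reassigned
```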
US3997874 time divided switching and concentration apparatus - satyanpitroda
This patent describes a time divided switching apparatus for switching time division multiplexed channels carrying PCM encoded signals. The apparatus includes two data memories - a buffer memory connected to a subscriber channel bus, and an information memory connected to a central office channel bus. A clock and control unit selectively transfers addressing codes between the data memories under control of a local processing unit. This allows the transfer of PCM codes between the memories to effect time divided connections without the need for conventional crosspoints or time switched points.
Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forward... - Tal Lavian Ph.D.
Responsive to detecting that bandwidth consumption of a packet flow has exceeded a threshold, packet forwarding treatment is changed in accordance with at least one class of packet flow from a first packet forwarding treatment to a second packet forwarding treatment.
https://www.google.com/patents/US20140105012?dq=US+20140105012&hl=en&sa=X&ei=FrZXVJr-G4ymuQT_74GgAg&ved=0CB8Q6AEwAA
On the Performance of Carrier Interferometry OFDM by Wavelet Transform - IRJET Journal
This document presents a novel carrier interferometry (CI) spreading code design for orthogonal frequency division multiplexing (OFDM) systems using a wavelet transform instead of a fast Fourier transform (FFT). The proposed wavelet-based CI/OFDM system is simulated over Rayleigh and Rician fading channels and its performance is compared to conventional OFDM and FFT-based CI/OFDM. Simulation results show that the wavelet-based CI codes improve bit error rate performance, being more robust at signal-to-noise ratios above approximately 10 dB for Rician fading and 7 dB for Rayleigh fading compared to FFT-based CI codes. The wavelet transform provides time-frequency localization advantages over the FFT for realizing CI in OF
A efficacy of different buffer size on latency of network on chip (NoC) - journalBEEI
Moore's prediction has been used to set targets for research and development in the semiconductor industry for years. A burgeoning number of processing cores on a chip demands a competent and scalable communication architecture such as network-on-chip (NoC). NoC technology applies networking theory and methods to on-chip communication and brings noteworthy improvements over conventional bus and crossbar interconnections. Performance figures such as latency, throughput, and bandwidth are characterized at design time to assure NoC performance. However, if the communication pattern or parameters such as buffer size need to be altered, the result may be large area and power consumption or increased latency. Routers with large input buffers improve the efficiency of NoC communication, while routers with small buffers reduce power consumption but result in high latency. This paper's intention is to validate that buffer size influences NoC performance across several different network topologies. It concludes that the way routers are interconnected or arranged affects NoC latency when different buffer sizes are adopted, which is why buffering requirements for different routers may vary based on their location in the network and the tasks assigned to them.
1) ATM uses fixed-sized packets called cells to transfer data at high speeds over both logical connections and physical interfaces in a streamlined manner.
2) ATM cells have a standard size and format including a header for routing and a payload for user data. Multiple logical connections can be multiplexed over a single physical interface.
3) ATM connections can be either virtual channel connections between two end users or virtual path connections that bundle multiple VCCs with the same endpoints to simplify networking and improve performance.
A Carrierless Amplitude Phase (CAP) Modulation Format: Perspective and Prospe... - IJECEIAES
The explosive demand of broadband services nowadays requires data communication systems to have intensive capacity which subsequently increases the need for higher data rate as well. Although implementation of multiple wavelengths channels can be used (e.g. 4 × 25.8 Gb/s for 100 Gb/s connection) for such desired system, it usually leads to cost increment issue which is caused by employment of multiple optical components. Therefore, implementation of advanced modulation format using a single wavelength channel has become a preference to increase spectral efficiency by increasing the data rate for a given transmission system bandwidth. Conventional advanced modulation format however, involves a degree of complexity and costly transmission system. Hence, carrierless amplitude phase (CAP) modulation format has emerged as a promising advanced modulation format candidate due to spectral efficiency improvement ability with reduction of optical transceiver complexity and cost. The intriguing properties of CAP modulation format are reviewed as an attractive prospect in optical transmission system applications.
Improved data efficiency of programmable arbiter based on chip permutation ne... - EditorIJAERD
The document describes a proposed improved on-chip permutation network (OCP network) for multiprocessor system-on-chips (MPSoCs) that supports guaranteed traffic permutation. The network employs a pipelined circuit-switching approach combined with a dynamic path-setup scheme under a three-stage Clos network topology. A programmable arbiter priority logic is used to improve data transfer efficiency by providing three priority logics - fixed, round robin, and dynamic - that can be selected according to priority requirements. The design is implemented on an FPGA and simulation results and synthesis reports indicate the proposed OCP network improves power, delay, and data efficiency compared to existing systems.
This document evaluates the performance of three IPv4 to IPv6 migration techniques: dual stack, tunneling, and NAT-PT translation. It uses the OPNET network simulation tool to model and analyze the network performance of each technique in terms of ethernet delay, throughput, and packet loss. The simulation results show that the tunneling technique exhibited the lowest delay and packet loss, while providing the highest throughput compared to the dual stack and NAT-PT translation techniques. Overall, the document finds that tunneling provides the best performance for migrating network traffic from IPv4 to IPv6.
Performance Evaluation of Ipv4, Ipv6 Migration Techniques - IOSR Journals
This document evaluates the performance of three IPv4 to IPv6 migration techniques: dual stack, tunneling, and NAT-PT translation. It uses the OPNET network simulation tool to model and analyze the network performance of each technique. The simulation results show that the tunneling technique exhibited the lowest Ethernet delay (75ms) and packet loss (1 packet/sec), while dual stack and NAT-PT had longer delays of 85-90ms and higher packet losses of 1.4 packets/sec. The tunneling technique also achieved the highest throughput of 200 bits/sec, compared to 100 bits/sec for dual stack and NAT-PT. In conclusion, the tunneling migration technique provided better performance than the other two techniques based
ATM is a cell relay protocol designed to optimize fiber optic networks. It breaks data into fixed-size cells for uniform transmission. ATM aims to maximize bandwidth, interface existing systems, be inexpensive, support telecom hierarchies, ensure reliable delivery, and minimize software functions. Connections between endpoints are established through virtual paths and circuits identified by header fields. Cells contain a 5-byte header and 48-byte payload. Connections can be permanent or switched. ATM defines layers for applications, cell processing, and physical transmission. It supports various quality of service levels through parameters like cell error and loss rates.
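The fixed cell structure summarized above (5-byte header plus 48-byte payload, with VPI/VCI fields identifying the connection) can be sketched as a simple cell builder. This follows the standard UNI header bit layout but leaves the HEC byte uncomputed and assumes an 8-bit VPI and 16-bit VCI, so it is a sketch rather than a conformant implementation:

```python
def build_atm_cell(vpi, vci, payload):
    """Pack a 5-byte UNI-style header plus a 48-byte payload (GFC/PTI/CLP = 0)."""
    if len(payload) != 48:
        raise ValueError("ATM payload must be exactly 48 bytes")
    header = bytes([
        (vpi >> 4) & 0x0F,                    # GFC=0 | VPI upper 4 bits
        ((vpi & 0x0F) << 4) | (vci >> 12),    # VPI lower 4 bits | VCI bits 15-12
        (vci >> 4) & 0xFF,                    # VCI bits 11-4
        (vci & 0x0F) << 4,                    # VCI bits 3-0 | PTI=0, CLP=0
        0x00,                                 # HEC placeholder (not computed)
    ])
    return header + payload

cell = build_atm_cell(vpi=1, vci=5, payload=bytes(48))
print(len(cell))   # 53: every ATM cell is the same fixed size
```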
This document provides an overview of Asynchronous Transfer Mode (ATM) networking. It begins by stating the objectives of explaining ATM and then provides background on ATM, noting that it transfers information in small, fixed-size cells. It describes the benefits of ATM, including dynamic bandwidth allocation and support for multimedia traffic. It then explains the basic components of an ATM network, including switches and endpoints, and describes the ATM cell format and header fields. Finally, it introduces the concept of virtual connections in ATM networks.
A network switch (sometimes known as a switching hub) is a computer networking device used to connect devices together on a computer network by performing a form of packet switching. A switch is considered more advanced than a hub because a switch sends a message only to the device that needs or requests it, rather than broadcasting the same message out of each of its ports.
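The selective forwarding described above comes from MAC address learning. A toy sketch, assuming a simple port-numbered switch model (the class and method names are invented for illustration):

```python
class LearningSwitch:
    """Minimal MAC-learning switch: forward to one port, flood only unknowns."""

    def __init__(self):
        self.mac_table = {}   # source MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port, num_ports):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # known: forward to one port
        # Unknown destination: flood like a hub, excluding the ingress port.
        return [p for p in range(num_ports) if p != in_port]

sw = LearningSwitch()
print(sw.handle_frame("aa", "bb", in_port=0, num_ports=4))  # flood: [1, 2, 3]
print(sw.handle_frame("bb", "aa", in_port=2, num_ports=4))  # learned: [0]
```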
Method and apparatus for live streaming media replication in a communication ... - Tal Lavian Ph.D.
Replication of live streaming media services (SMS) in a communication network may be enabled by introducing the streaming media-savvy replication service on a network element, through a number of functions such as client/service registration and classification, packet interception and forwarding, media replication, status monitoring and configuration management. In an embodiment, client requests and server replies of an SMS are intercepted and evaluated by the network element. If the SMS is not streaming through the network element, the replication service registers the SMS and establishes a unique SMS session for the requesting clients. If the SMS is already streaming through the network element, the replication service replicates the streaming media and forwards it to the requesting clients. This reduces bandwidth usage on the links connecting the streaming media server with the network element and reduces the number of client connections to the streaming media provider's servers.
https://www.google.com/patents/US20050076099?dq=20050076099&hl=en&sa=X&ei=RONVVKPrLNefugSgnILwAQ&ved=0CB8Q6AEwAA
This document discusses standards-based network processor unit (NPU) and switch fabric devices that can be used to build next-generation multi-service platforms. It presents simulation results of a CSIX-based NPU/switch fabric configuration that could provide differentiated services in an OC-48 multi-service application. The key points are:
1) NPUs and switch fabrics interfacing via the CSIX-L1 standard can provide a low-cost alternative to custom ASICs for building multi-service platforms.
2) A CSIX-based NPU/switch fabric configuration is simulated to profile its capability in an OC-48 multi-service application and illustrate the utility of a standards-based approach.
Discriminators for use in flow-based classification - Denis Zuev
This document describes data sets containing network flows that are characterized by discriminators (features) for use in flow-based classification. Each data set contains TCP flows sampled from different periods of a 24-hour network trace. The flows are characterized by features including port numbers, packet timing statistics, TCP header information, and Fourier transform frequency components of packet inter-arrival times. The data sets are provided to allow researchers to assess flow classification techniques.
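A few of the packet-timing discriminators mentioned above can be computed as follows; the feature names here are illustrative, not the data sets' actual field names:

```python
import statistics

def flow_features(timestamps):
    """Packet-timing features for one flow, from arrival timestamps in seconds."""
    if len(timestamps) < 2:
        raise ValueError("need at least two packet timestamps")
    # Inter-arrival times (gaps) between consecutive packets of the flow.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "packets": len(timestamps),
        "duration": timestamps[-1] - timestamps[0],
        "iat_mean": statistics.mean(gaps),
        "iat_stdev": statistics.stdev(gaps) if len(gaps) > 1 else 0.0,
    }

print(flow_features([0.0, 0.1, 0.3, 0.6]))
```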
The document summarizes several patent cases that have undergone scrutiny by the Patent Trial and Appeal Board (PTAB) under inter partes review (IPR) or covered business method (CBM) proceedings.
In the first case, claims from a wireless patent were found to be unpatentable over a prior art reference that discloses constructing message fields for data units with type and sequence number fields. In the second case, claims directed to arranging information for transmission with payload and service type identifier fields were found to be anticipated by a prior art reference describing multiplexing voice and data signals into frames. The third case did not find the prior art reference taught the limitations of establishing a unique channel-hopping band plan. The final
Duplexing mode, ARB and modulation approaches parameters affection on LTE upl... - IJECEIAES
LTE belongs to the next generation of radio technologies designed to increase the capacity and speed of mobile networks. It is the first technology designed explicitly for the Next Generation Network (NGN) and is set to become the de facto NGN mobile access network standard. It takes advantage of the NGN's capabilities to provide an always-on mobile data experience comparable to wired networks. In this paper, LTE uplink waveforms are displayed with various duplexing modes, allocated resource blocks (ARB), modulation types, and total information per frame. QPSK and 16-QAM are used as modulation techniques and tested under AWGN and Rayleigh channels; similarity and interference of the generated waveforms are tested using auto-correlation and cross-correlation respectively.
Modified Headfirst Sliding Routing: A Time-Based Routing Scheme for Bus-Nochy... - IJERA Editor
Several interesting topologies emerge by incorporating the third dimension in networks-on-chip (NoC). NoC is the network version of system-on-chip (SoC), meaning that on-chip communication is done through packet-based networks. In NoC, topology, routing algorithm, and switching are the main concerns. The routing algorithm, which defines the path taken by a packet between the source and the destination, is one of the key factors in NoC architecture, and a good routing algorithm is necessary to improve network performance. Here we propose a new architecture to improve the throughput and latency of the network; in the proposed approach, a fixed path is used for packets to travel from source to destination.
Similar to Dynamic assignment of traffic classes to a priority queue in a packet forwarding device (20)
This document describes an ultra low phase noise frequency synthesizer system for wireless communication applications. The system uses a combination of a fractional-N phase locked loop (PLL), sampling reference PLL, and direct digital synthesizer (DDS). It aims to reduce phase noise and enable higher order modulation schemes for increased data rates. The system comprises a front end module, display, and system on chip with the frequency synthesizer. It provides very low phase deviation of 0.04 degrees through a dual loop design, sampling PLL reference, and high frequency digital components.
A system for providing ultra low phase noise frequency synthesizers using a Fractional-N PLL (Phase Lock Loop), a Sampling Reference PLL, and a DDS (Direct Digital Synthesizer). Modern advanced communication systems comprise frequency synthesizers that provide a frequency output signal to other parts of the transmitter and receiver so as to enable the system to operate at the set frequency band. The performance of the frequency synthesizer determines the performance of the communication link. Present-day advanced communication systems comprise single-loop frequency synthesizers, which cannot fully provide the low phase deviation (for 256 QAM the practical phase deviation for no errors is 0.4-0.5°) that would enable users to receive high data rates. The proposed system overcomes deficiencies of current state-of-the-art communication systems by providing a much lower level of phase deviation error, which enables much higher-order modulation schemes and higher data rates.
Embodiments of the present invention present a method and apparatus for photonic line sharing for high-speed routers. Photonic switches receive high-speed optical data streams and produce the data streams to a router operating according to routing logic and produce optical data streams according to destination addresses stored in the data packets. Each photonic switch can be configured as one of a 1:N multiplexer or an M:N cross-connect switch. In one embodiment, optical data is converted to electrical data prior to routing, while an alternate embodiment routes only optical data. Another embodiment transfers large volumes of high-speed data through an optical bypass line in a circuit switched network to bypass the switch fabric thereby routing the data packets directly to the destination. An edge device selects one of the packet switched network or the circuit switched network. The bypass resources are released when the large volume of high-speed data is transferred.
Systems and methods to support sharing and exchanging in a network - Tal Lavian Ph.D.
Embodiments of the invention provide support for sharing and exchanging in a network. The system includes a memory coupled to a processor. The memory includes a database comprising information corresponding to first users and second users. Each of the first users and the second users is facilitated in sharing or exchanging an activity, service, or product, based on one or more conditions corresponding thereto. Further, the memory includes one or more instructions executable by the processor to match each of the first users to at least one of the second users. Furthermore, the instructions may inform each of the first users about the match with the at least one of the second users when all the conditions are met by the at least one second user, based on the information corresponding to each of the second users.
Systems and methods for visual presentation and selection of IVR menu - Tal Lavian Ph.D.
Embodiments of the invention provide a system for generating an Interactive Voice Response (IVR) database, the system comprising a processor and a memory coupled to the processor. The memory comprises a list of telephone numbers associated with one or more destinations implementing IVR menus, wherein the one or more destinations are grouped based on a plurality of categories of the IVR menus. Further, the memory includes instructions executable by said processor for automatically communicating with the one or more destinations, and receiving at least one customization record from said at least one destination to store in the IVR database.
Various embodiments allow Grid applications to access resources shared in communication network domains. Grid Proxy Architecture for Network Resources (GPAN) bridges Grid services serving user applications and network services controlling network devices through proxy functions. At times, GPAN employs distributed network service peers (NSP) in network domains to discover, negotiate and allocate network resources for Grid applications. An elected master NSP is the unique Grid node that runs GPAN and represents the whole network to share network resources to Grids without Grid involvement of network devices. GPAN provides the Grid Proxy service (GPS) to interface with Grid services and applications, and the Grid Delegation service (GDS) to interface with network services to utilize network resources. In some cases, resource-based XML messaging can be employed for the GPAN proxy communication.
Systems and methods for electronic communications - Tal Lavian Ph.D.
Embodiments of the invention provide a system for enhancing user interaction with the Internet of Things. The system includes a processor, and a memory coupled to the processor. The memory includes a database having one or more options corresponding to each of the Internet of Things. The memory further includes instructions executable by the processor to share at least one of the one or more options with one or more users of the things. Further, the instructions receive information corresponding to selection of the at least one option by the one or more users. Additionally, the instructions update the database based on the selection of the at least one option by the one or more users. Further, a device for enhancing interaction with the things is also disclosed.
Radar target detection system for autonomous vehicles with ultra-low phase no... - Tal Lavian Ph.D.
An object detection system for autonomous vehicle, comprising a radar unit and at least one ultra-low phase noise frequency synthesizer, is provided. The radar unit configured for detecting the presence and characteristics of one or more objects in various directions. The radar unit may include a transmitter for transmitting at least one radio signal; and a receiver for receiving the at least one radio signal returned from the one or more objects. The ultra-low phase noise frequency synthesizer may utilize Clocking device, Sampling Reference PLL, at least one fixed frequency divider, DDS and main PLL to reduce phase noise from the returned radio signal. This proposed system overcomes deficiencies of current generation state of the art Radar Systems by providing much lower level of phase noise which would result in improved performance of the radar system in terms of target detection, characterization etc. Further, a method for autonomous vehicle is also disclosed.
Method and apparatus for scheduling resources on a switched underlay network - Tal Lavian Ph.D.
A method and apparatus for resource scheduling on a switched underlay network (18) enables coordination, scheduling, and scheduling optimization to take place while accounting for the availability of the data and of the network resources comprising the switched underlay network (18). Requested transfers may be fulfilled by assessing the requested transfer parameters, the availability of the network resources required to fulfill the request, the availability of the data to be transferred, the availability of sufficient storage resources to receive the data, and other potentially conflicting requested transfers. In one embodiment, the requests are under-constrained to enable transfer scheduling optimization to occur. The under-constrained nature of the requests enables requests to be scheduled taking into account factors such as transfer priority, transfer duration, the amount of time since the transfer request was submitted, and many other factors.
Dynamic assignment of traffic classes to a priority queue in a packet forward... - Tal Lavian Ph.D.
Method and apparatus for using a command design pattern to access and configu...
Tal Lavian Ph.D.
This patent application describes a method and system for remotely accessing and configuring network devices using XML documents and a common design pattern. An XML request is sent from a client to a network device to request that a service be performed locally on the device. The network device includes a service engine that can parse the XML request using an XML DTD, instantiate the requested service, interact with device hardware and software to execute the service, and optionally return a response to the client. The use of XML documents and a common design pattern allows network devices to be accessed and configured in a flexible manner without needing to be pre-programmed for specific requests.
Embodiments of the invention provide means for users of the system to give ratings and corresponding feedback, enhancing the genuineness of the ratings. The system includes a memory coupled to a processor. The memory includes one or more instructions executable by the processor to enable the users of the system to rate each other based on at least one of sharing, exchanging, and selling an activity, service or product. The system may provide a mechanism to encourage genuineness in ratings provided by the users. Furthermore, the instructions enable rating receivers to provide feedback corresponding to the received ratings. The feedback includes accepting or objecting to a particular rating. Moreover, the memory includes instructions executable by the processor to enable the system to determine the genuineness of an objection raised by a rating receiver.
Embodiments of the present invention provide a system for enhancing reliability in the computation of ratings provided by a user over a social network. The system comprises a processor and a memory coupled to the processor. The memory further comprises a rater score database, a satisfaction database, a social network registration database, a user profile database, and a plurality of instructions executable by the processor. The instructions in the memory accept a message from at least one user, wherein the message comprises a satisfaction score associated with at least one service provider, and retrieve a rater score associated with that user from the rater score database. Further, the memory includes instructions to compute a new satisfaction score based on the rater score and the satisfaction score and to update the satisfaction database to include the new satisfaction score. In a similar manner, the new satisfaction score can be computed based upon the information stored in the social network registration database and user profile database.
Systems and methods for visual presentation and selection of IVR menu
Tal Lavian Ph.D.
Embodiments of the invention provide a system for generating an Interactive Voice Response (IVR) database, the system comprising a processor and a memory coupled to the processor. The memory comprises a list of telephone numbers associated with one or more destinations implementing IVR menus, wherein the one or more destinations are grouped based on a plurality of categories of the IVR menus. Further, the memory includes instructions executable by the processor for automatically communicating with the one or more destinations, and receiving at least one customization record from at least one destination to store in the IVR database.
A system for providing ultra-low phase noise frequency synthesizers using a Fractional-N PLL (Phase Lock Loop), a Sampling Reference PLL and a DDS (Direct Digital Synthesizer). Modern advanced communication systems comprise frequency synthesizers that provide a frequency output signal to other parts of the transmitter and receiver so as to enable the system to operate at the set frequency band. The performance of the frequency synthesizer determines the performance of the communication link. Current advanced communication systems comprise single-loop frequency synthesizers, which cannot fully provide the low phase deviation needed for error-free operation (for 256 QAM, the practical phase deviation for no errors is 0.4-0.5°) that would enable users to receive high data rates. The proposed system overcomes deficiencies of current state-of-the-art communication systems by providing a much lower level of phase deviation error, enabling much higher modulation schemes and high data rates.
Dynamic assignment of traffic classes to a priority queue in a packet forwarding device
Europäisches Patentamt
European Patent Office
Office européen des brevets
Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give
notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in
a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art.
99(1) European Patent Convention).
Printed by Jouve, 75001 PARIS (FR)
(19)
(11) EP 1 142 213 B1
(12) EUROPEAN PATENT SPECIFICATION
(45) Date of publication and mention
of the grant of the patent:
23.11.2005 Bulletin 2005/47
(21) Application number: 00901402.8
(22) Date of filing: 07.01.2000
(51) Int Cl.7: H04L 12/56, H04L 29/06
(86) International application number:
PCT/US2000/000428
(87) International publication number:
WO 2000/041368 (13.07.2000 Gazette 2000/28)
(54) DYNAMIC ASSIGNMENT OF TRAFFIC CLASSES TO A PRIORITY QUEUE IN A PACKET
FORWARDING DEVICE
DYNAMISCHE ZUWEISUNG VERKEHRSKLASSEN AN EINER PRIORITÄTSWARTESCHLANGE
IN EINER PAKETBEFÖRDERUNGSVORRICHTUNG
AFFECTATION DYNAMIQUE DE CLASSES DE TRAFIC A UNE FILE D’ATTENTE PRIORITAIRE
DANS UN DISPOSITIF DE REACHEMINEMENT DE PAQUETS
(84) Designated Contracting States:
DE FR GB
(30) Priority: 08.01.1999 US 227389
(43) Date of publication of application:
10.10.2001 Bulletin 2001/41
(73) Proprietor: Nortel Networks Limited
St.Laurent, Québec H4S 2A9 (CA)
(72) Inventors:
• LAVIAN, Tal, I.
Sunnyvale, CA 94087 (US)
• LAU, Stephen
Milpitas, CA 95035 (US)
(74) Representative: Mackenzie, Andrew Bryan et al
Marks & Clerk
45 Grosvenor Road
St. Albans, Hertfordshire AL1 3AW (GB)
(56) References cited:
EP-A- 0 872 979 WO-A-99/00949
• TENNENHOUSE, D. L. et al.: "A Survey of Active Network Research", IEEE Communications Magazine, IEEE Service Center, Piscataway, NJ, US, vol. 35, no. 1, 1 January 1997 (1997-01-01), pages 80-86, XP000683447, ISSN: 0163-6804
Description
FIELD OF THE INVENTION
[0001] The present invention relates to the field of telecommunications, and more particularly to dynamic assignment of traffic classes to queues having different priority levels.
BACKGROUND OF THE INVENTION
[0002] The flow of packets through packet-switched networks is controlled by switches and routers that forward packets based on destination information included in the packets themselves. A typical switch or router includes a number of input/output (I/O) modules connected to a switching fabric, such as a crossbar or shared memory switch. In some switches and routers, the switching fabric is operated at a higher frequency than the transmission frequency of the I/O modules so that the switching fabric may deliver packets to an I/O module faster than the I/O module can output them to the network transmission medium. In these devices, packets are usually queued in the I/O module to await transmission.
[0003] One problem that may occur when packets are queued in the I/O module or elsewhere in a switch or router is that the queuing delay per packet varies depending on the amount of traffic being handled by the switch. Variable queuing delays tend to degrade data streams produced by real-time sampling (e.g., audio and video) because the original time delays between successive packets in the stream convey the sampling interval and are therefore needed to faithfully reproduce the source information. Another problem that results from queuing packets in a switch or router is that data from a relatively important source, such as a shared server, may be impeded by data from less important sources, resulting in bottlenecks.
[0004] One possible solution to this is described in International Application Number WO 99/00949, which discloses a method whereby queues are prioritised and the priority of a flow may be lowered. However, flows are packets forwarded within the subnet and therefore require no header modification. Therefore, a method for prioritising packets from/to sources/destinations outside the subnet is required.
SUMMARY OF THE INVENTION
[0005] A method and apparatus for dynamic assignment of classes of traffic to a priority queue in accordance with Claims 1 and 9 respectively are disclosed. Bandwidth consumption by one or more types of packet traffic received in a packet forwarding device is monitored. The queue assignment of at least one type of packet traffic is automatically changed from a queue having a first priority to a queue having a second priority if the bandwidth consumption exceeds a threshold.
[0006] Other features and advantages of the invention will be apparent from the accompanying drawings and from the detailed description that follows below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
Fig. 1 illustrates a packet forwarding device that can be used to implement embodiments of the present invention;
Fig. 2A illustrates queue fill logic implemented by a queue manager in a quad interface device;
Fig. 2B illustrates queue drain logic according to one embodiment;
Fig. 3 illustrates the flow of a packet within the switch of Fig. 1;
Fig. 4 illustrates storage of an entry in an address resolution table managed by an address resolution unit;
Fig. 5 is a diagram of the software architecture of the switch of Fig. 1 according to one embodiment; and
Fig. 6 illustrates an example of dynamic assignment of traffic classes to a priority queue.
DETAILED DESCRIPTION
[0008] A packet forwarding device in which selected classes of network traffic may be dynamically assigned for priority queuing is disclosed. In one embodiment, the packet forwarding device includes a Java virtual machine for executing user-coded Java applets received from a network management server (NMS). A Java-to-native interface (JNI) is provided to allow the Java applets to obtain error information and traffic statistics from the device hardware and to allow the Java applets to write configuration information to the device hardware, including information that indicates which classes of traffic should be queued in priority queues. The Java applets implement user-specified traffic management policies based on real-time evaluation of the error information and traffic statistics to provide dynamic control of the priority queuing assignments. These and other aspects and advantages of the present invention are described below.
[0009] Fig. 1 illustrates a packet forwarding device 17 that can be used to implement embodiments of the present invention. For the purposes of the present description, the packet forwarding device 17 is assumed to be a switch that switches packets between ingress and egress ports based on media access control (MAC) addresses within the packets. In an alternate embodiment, the packet forwarding device 17 may be a router
that routes packets according to destination internet protocol (IP) addresses, or a routing switch that performs both MAC address switching and IP address routing. The techniques and structures disclosed herein are applicable generally to a device that forwards packets in a packet switching network. Also, the term packet is used broadly herein to refer to a fixed-length cell, a variable-length frame or any other information structure that is self-contained as to its destination address.
[0010] The switch 17 includes a switching fabric 12 coupled to a plurality of I/O units (only I/O units 1 and 16 are depicted) and to a processing unit 10. The processing unit includes at least a processor 31 (which may be a microprocessor, digital signal processor or microcontroller) coupled to a memory 32 via a bus 33. In one embodiment, each I/O unit 1, 16 includes four physical ports P1-P4 coupled to a quad media access controller (QMAC) 14A, 14B via respective transceiver interface units 21A-24A, 21B-24B. Each I/O unit 1, 16 also includes a quad interface device (QID) 16A, 16B, an address resolution unit (ARU) 15A, 15B and a memory 18A, 18B, interconnected as shown in Fig. 1. Preferably, the switch 17 is modular with at least the I/O units 1, 16 being implemented on port cards (not shown) that can be installed in a backplane (not shown) of the switch 17. In one implementation, each port card includes four I/O units and therefore supports up to 16 physical ports. The switch backplane includes slots for up to six port cards, so that the switch 17 can be scaled according to customer needs to support between 16 and 96 physical ports. In alternate embodiments, each I/O unit 1, 16 may support more or fewer physical ports, each port card may support more or fewer I/O units 1, 16 and the switch 17 may support more or fewer port cards. For example, the I/O unit 1 shown in Fig. 1 may be used to support four 10baseT transmission lines (i.e., 10 Mbps (megabit per second), twisted-pair) or four 100baseF transmission lines (100 Mbps, fiber optic), while a different I/O unit (not shown) may be used to support a single 1000baseF transmission line (1000 Mbps, fiber optic). Nothing disclosed herein should be construed as limiting embodiments of the present invention to use with a particular transmission medium, I/O unit, port card or chassis configuration.
[0011] Still referring to Fig. 1, when a packet 25 is received on physical port P1, it is supplied to the corresponding physical transceiver 21A, which performs any necessary signal conditioning (e.g., optical to electrical signal conversion) and then forwards the packet 25 to the QMAC 14A. The QMAC 14A buffers packets received from the four physical transceivers 21A-24A as necessary, forwarding one packet at a time to the QID 16A. Receive logic within the QID 16A notifies the ARU 15A that the packet 25 has been received. The ARU computes a table index based on the destination MAC address within the packet 25 and uses the index to identify an entry in a forwarding table that corresponds to the destination MAC address. In packet forwarding devices that operate on different protocol layers of the packet (e.g., routers), a forwarding table may be indexed based on other destination information contained within the packet.
[0012] According to one embodiment, the forwarding table entry identified based on the destination MAC address indicates the switch egress port to which the packet 25 is destined and also whether the packet is part of a MAC-address based virtual local area network (VLAN) or a port-based VLAN. (As an aside, a VLAN is a logical grouping of MAC addresses (a MAC-address-based VLAN) or a logical grouping of physical ports (a port-based VLAN).) The forwarding table entry further indicates whether the packet 25 is to be queued in a priority queue in the I/O unit that contains the destination port. As discussed below, priority queuing may be specified based on a number of conditions, including, but not limited to, whether the packet is part of a particular IP flow, or whether the packet is destined for a particular port, VLAN or MAC address.
[0013] According to one embodiment, the QID 16A, 16B segments the packet 25 into a plurality of fixed-length cells 26 for transmission through the switching fabric 12. Each cell includes a header 28 that identifies it as a constituent of the packet 25 and that identifies the destination port for the cell (and therefore for the packet 25). The header 28 of each cell also includes a bit 29 indicating whether the cell is the beginning cell of a packet and also a bit 30 indicating whether the packet 25 to which the cell belongs is to be queued in a priority queue or a best effort queue on the destined I/O unit.
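The cell header just described can be modelled as a small record. Field names here are illustrative assumptions; the description specifies only a packet/destination identifier plus the two flag bits (bit 29 and bit 30):

```python
from dataclasses import dataclass

@dataclass
class CellHeader:
    packet_id: int             # identifies the cell as a constituent of one packet
    dest_port: int             # destination port for the cell (and the packet)
    beginning_of_packet: bool  # bit 29: first cell of a packet
    priority: bool             # bit 30: priority queue vs. best effort queue
```

The egress I/O unit needs only these two bits to decide whether to start a new reassembly entry and in which queue to place it.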
[0014] The switching fabric 12 forwards each cell to the I/O unit indicated by the cell header 28. In the exemplary data flow shown in Fig. 1, the constituent cells 26 of the packet 25 are assumed to be forwarded to I/O unit 16, where they are delivered to transmit logic within the QID 16B. The transmit logic in the QID 16B includes a queue manager (not shown) that maintains a priority queue and a best effort queue in the memory 18B. In one embodiment, the memory 18B is resolved into a pool of buffers, each large enough to hold a complete packet. When the beginning cell of the packet 25 is delivered to the QID 16B, the queue manager obtains a buffer from the pool and appends the buffer to either the priority queue or the best effort queue according to whether the priority bit 30 is set in the beginning cell. In one embodiment, the priority queue and the best effort queue are each implemented by a linked list, with the queue manager maintaining respective pointers to the head and tail of each linked list. Entries are added to the tail of the queue list by advancing the tail pointer to point to a newly allocated buffer that has been appended to the linked list, and entries are popped off the head of the queue by advancing the head pointer to point to the next buffer in the linked list and returning the spent buffer to the pool.
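The linked-list queue of paragraph [0014] can be sketched as follows. This is a simplified model, with Python objects standing in for the fixed-size packet buffers and the buffer pool elided:

```python
class BufferQueue:
    """Singly linked list with head and tail pointers, as in [0014]."""

    class _Node:
        __slots__ = ("buffer", "next")

        def __init__(self, buffer):
            self.buffer = buffer
            self.next = None

    def __init__(self):
        self.head = self.tail = None

    def append(self, buffer):
        # Add an entry at the tail by advancing the tail pointer to a
        # newly allocated node.
        node = self._Node(buffer)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def pop_head(self):
        # Pop an entry off the head by advancing the head pointer; in the
        # hardware the spent buffer would be returned to the pool here.
        if self.head is None:
            return None
        node = self.head
        self.head = node.next
        if self.head is None:
            self.tail = None
        return node.buffer
```

The queue manager would hold one such structure for the priority queue and one for the best effort queue.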
[0015] After a buffer is appended to either the priority
queue or the best effort queue, the beginning cell and
subsequent cells are used to reassemble the packet 25 within the buffer. Eventually the packet 25 is popped off the head of the queue and delivered to an egress port via the QMAC 14B and the physical transceiver (e.g., 23B) in an egress operation. This is shown by way of example in Fig. 1 by the egress of packet 25 from physical port P3 of I/O unit 16.
[0016] Fig. 2A illustrates queue fill logic implemented by the queue manager in the QID. Starting at block 51, a cell is received in the QID from the switching fabric. The beginning cell bit in the cell header is inspected at decision block 53 to determine if the cell is the beginning cell of a packet. If so, the priority bit in the cell header is inspected at decision block 55 to determine whether to allocate an entry in the priority queue or the best effort queue for packet reassembly. If the priority bit is set, an entry in the priority queue is allocated at block 57 and the priority queue entry is associated with the portion of the cell header that identifies the cell as a constituent of a particular packet at block 59. If the priority bit in the cell header is not set, then an entry in the best effort queue is allocated at block 61 and the best effort queue entry is associated with the portion of the cell header that identifies the cell as a constituent of a particular packet at block 63.
[0017] Returning to decision block 53, if the beginning cell bit in the cell header is not set, then the queue entry associated with the cell header is identified at block 65. The association between the cell header and the queue entry identified at block 65 was established earlier in either block 59 or block 63. Also, identification of the queue entry in block 65 may include inspection of the priority bit in the cell to narrow the identification effort to either the priority queue or the best effort queue. In block 67, the cell is combined with the preceding cell in the queue entry in a packet reassembly operation. If the reassembly operation in block 67 results in a completed packet (decision block 69), then the packet is marked as ready for transmission in block 71. In one embodiment, the packet is marked by setting a flag associated with the queue entry in which the packet has been reassembled. Other techniques for indicating that a packet is ready for transmission may be used in alternate embodiments.
[0018] Fig. 2B illustrates queue drain logic according to one embodiment. At decision block 75, the entry at the head of the priority queue is inspected to determine if it contains a packet ready for transmission. If so, the packet is transmitted at block 77 and the corresponding priority queue entry is popped off the head of the priority queue and deallocated at block 79. If a ready packet is not present at the head of the priority queue, then the entry at the head of the best effort queue is inspected at decision block 81. If a packet is ready at the head of the best effort queue, it is transmitted at block 83 and the corresponding best effort queue entry is popped off the head of the best effort queue and deallocated in block 85. Note that, in the embodiment illustrated in Fig. 2B, packets are drained from the best effort queue only after the priority queue has been emptied. In alternate embodiments, a timer, counter or similar logic element may be used to ensure that the best effort queue 105 is serviced at least every so often, or at least after every N packets are transmitted from the priority queue, thereby ensuring at least a threshold level of service to the best effort queue.
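The drain logic with the counter-based safeguard can be sketched like this. The counter mechanism and the parameter `n` are assumptions filling in the alternate embodiment above; the patent only states that a timer, counter or similar element may be used:

```python
def drain_one(priority_q, best_effort_q, state, n=4):
    """Transmit one ready packet: strict priority, except that after every
    n consecutive priority transmissions one best-effort packet is served,
    so the best effort queue is never starved."""
    if state["since_best_effort"] >= n and best_effort_q:
        state["since_best_effort"] = 0
        return ("best_effort", best_effort_q.pop(0))
    if priority_q:                       # priority queue drained first
        state["since_best_effort"] += 1
        return ("priority", priority_q.pop(0))
    if best_effort_q:                    # priority queue empty
        state["since_best_effort"] = 0
        return ("best_effort", best_effort_q.pop(0))
    return None                          # nothing ready to transmit
```

With `n` set very large this reduces to the strict behaviour of Fig. 2B, where best effort traffic moves only when the priority queue is empty.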
[0019] Fig. 3 illustrates the flow of a packet within the switch 17 of Fig. 1. A packet is received in the switch at block 91 and used to identify an entry in a forwarding table called the address resolution (AR) table at block 93. At decision block 95, a priority bit in the AR table entry is inspected to determine whether the packet belongs to a class of traffic that has been selected for priority queuing. If the priority bit is set, the packet is segmented into cells having respective priority bits set in their headers in block 97. If the priority bit is not set, the packet is segmented into cells having respective priority bits cleared in their cell headers in block 99. The constituent cells of each packet are forwarded to an egress I/O unit by the switching fabric. In the egress I/O unit, the priority bit of each cell is inspected (decision block 101) and used to direct the cell to an entry in either the priority queue 103 or the best effort queue 105, where it is combined with other cells to reassemble the packet.
[0020] Fig. 4 illustrates storage of an entry in the address resolution (AR) table managed by the ARU. In one embodiment, the AR table is maintained in a high speed static random access memory (SRAM) coupled to the ARU. Alternatively, the AR table may be included in a memory within an application-specific integrated circuit (ASIC) that includes the ARU. Generally, the ARU stores an entry in the AR table in response to packet forwarding information from the processing unit. The processing unit supplies packet forwarding information to be stored in each AR table in the switch whenever a new association between a destination address and a switch egress port is learned. In one embodiment, an address-to-port association is learned by transmitting a packet that has an unknown egress port assignment on each of the egress ports of the switch and associating the destination address of the packet with the egress port at which an acknowledgment is received. Upon learning the association between the egress port and the destination address, the processing unit issues forwarding information that includes, for example, an identifier of the newly associated egress port, the destination MAC address, an identifier of the VLAN associated with the MAC address (if any), an identifier of the VLAN associated with the egress port (if any), the destination IP address, the destination IP port (e.g., transmission control protocol (TCP), user datagram protocol (UDP) or other IP port) and the IP protocol (e.g., HTTP, FTP or other IP protocol). The source IP address, source IP port and source IP protocol may also be supplied to fully identify an end-to-end IP flow.
[0021] Referring to Fig. 4, forwarding information 110
is received from the processing unit at block 115. At block 117, the ARU stores the forwarding information in an AR table entry. At decision block 119, the physical egress port identifier stored in the AR table entry is compared against priority configuration information to determine if packets destined for the egress port have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127. Thereafter, incoming packets that index the newly stored table entry will be queued in the priority queue to await transmission. If packets destined for the egress port have not been selected for priority queuing, then at decision block 121 the MAC address stored in the AR table entry is compared against the priority configuration information to determine if packets destined for the MAC address have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127. If packets destined for the MAC address have not been selected for priority egress queuing, then at decision block 123 the VLAN identifier stored in the AR table entry (if present) is compared against the priority configuration information to determine if packets destined for the VLAN have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127. If packets destined for the VLAN have not been selected for priority egress queuing, then at block 125 the IP flow identified by the IP address, IP port and IP protocol in the AR table is compared against the priority configuration information to determine if packets that form part of the IP flow have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry; otherwise the priority bit is not set. Yet other criteria may be considered in assigning priority queuing in alternate embodiments. For example, priority queuing may be specified for a particular IP protocol (e.g., FTP, HTTP). Also, the ingress port, source MAC address or source VLAN of a packet may be used to determine whether to queue the packet in the priority egress queue. More specifically, in one embodiment, priority or best effort queuing of unicast traffic is determined based on destination parameters (e.g., egress port, destination MAC address or destination IP address), while priority or best effort queuing of multicast traffic is determined based on source parameters (e.g., ingress port, source MAC address or source IP address).
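The decision cascade of Fig. 4 (blocks 119, 121, 123, 125) can be condensed into a short function. All field and configuration key names here are illustrative assumptions; only the order of the checks comes from the description:

```python
def set_priority_bit(entry, config):
    """Evaluate the Fig. 4 criteria in order: egress port, destination MAC,
    VLAN (if present), then the IP flow triple. Returns the value to store
    in the AR table entry's priority bit."""
    if entry["egress_port"] in config["priority_ports"]:
        return True
    if entry["dest_mac"] in config["priority_macs"]:
        return True
    if entry.get("vlan") is not None and entry["vlan"] in config["priority_vlans"]:
        return True
    # Block 125: match the IP flow (address, port, protocol).
    flow = (entry["dest_ip"], entry["ip_port"], entry["ip_protocol"])
    return flow in config["priority_flows"]
```

Because the checks short-circuit, a match on an earlier criterion (e.g., egress port) sets the bit without consulting the later ones, mirroring the repeated "If so, the priority bit is set ... in block 127" branches.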
[0022] Fig. 5 is a diagram of the software architecture of the switch 17 of Fig. 1 according to one embodiment. An operating system 143 and device drivers 145 are provided to interface with the device hardware 141. For example, device drivers are provided to write configuration information and AR storage entries to the ARUs in respective I/O units. Also, the operating system 143 performs memory management functions and other system services in response to requests from higher level software. Generally, the device drivers 145 extend the services provided by the operating system and are invoked in response to requests for operating system service that involve device-specific operations.
[0023] The device management code 147 is executed by the processing unit (e.g., element 10 of Fig. 1) to perform system level functions, including management of forwarding entries in the distributed AR tables and management of forwarding entries in a master forwarding table maintained in the memory of the processing unit. The device management code 147 also includes routines for invoking device driver services, for example, to query the ARU for traffic statistics and error information, or to write updated configuration information to the ARUs, including priority queuing information. Further, the device management code 147 includes routines for writing updated configuration information to the ARUs, as discussed below in reference to Fig. 6. In one implementation, the device management code 147 is native code, meaning that the device management code 147 is a compiled set of instructions that can be executed directly by a processor in the processing unit to carry out the device management functions.
[0024] In one embodiment, the device management code 147 supports the operation of a Java client 160 that includes a number of Java applets, including a monitor applet 157, a policy enforcement applet 159 and a configuration applet 161. A Java applet is an instantiation of a Java class that includes one or more methods for self-initialization (e.g., a constructor method called "Applet()") and one or more methods for communicating with a controlling application. Typically the controlling application for a Java applet is a web browser executed on a general purpose computer. In the software architecture shown in Fig. 5, however, a Java application called Data Communication Interface (DCI) 153 is the controlling application for the monitor, policy enforcement and configuration applets 157, 159, 161. The DCI application 153 is executed by a Java virtual machine 149 to manage the download of Java applets from a network management server (NMS) 170. A library of Java objects 155 is provided for use by the Java applets 157, 159, 161 and the DCI application 153.
[0025] In one implementation, the NMS 170 supplies Java applets to the switch 17 in a hypertext transfer protocol (HTTP) data stream. Other protocols may also be used. The constituent packets of the HTTP data stream are addressed to the IP address of the switch and are directed to the processing unit after being received by the I/O unit coupled to the NMS 170. After authenticating the HTTP data stream, the DCI application 153 stores the Java applets provided in the data stream in the memory of the processing unit and executes a method to invoke each applet. An applet is invoked by supplying the Java virtual machine 149 with the address of the constructor method of the applet and causing the Java virtual machine 149 to begin execution of the applet code. Program code defining the Java virtual machine 149 is executed to interpret the platform-independent byte codes of the Java applets 157, 159, 161 into native instructions that can be executed by a
6. EP 1 142 213 B1
9 10
5
10
15
20
25
30
35
40
45
50
55
6
processor within the processing unit.
[0026] According to one embodiment, the monitor applet 157, policy enforcement applet 159 and configuration applet 161 communicate with the device management code 147 through a Java-native interface (JNI) 151. The JNI 151 is essentially an application programming interface (API) and provides a set of methods that can be invoked by the Java applets 157, 159, 161 to send messages to, and receive responses from, the device management code 147. In one implementation, the JNI 151 includes methods by which the monitor applet 157 can request the device management code 147 to gather error information and traffic statistics from the device hardware 141. The JNI 151 also includes methods by which the configuration applet 161 can request the device management code 147 to write configuration information to the device hardware 141. More specifically, the JNI 151 includes a method by which the configuration applet 161 can indicate that priority queuing should be performed for specified classes of traffic, including, but not limited to, the classes of traffic discussed above in reference to Fig. 4. In this way, a user-coded configuration applet 161 may be executed by the Java virtual machine 149 within the switch 17 to invoke a method in the JNI 151 that requests the device management code 147 to write information assigning selected classes of traffic to be queued in the priority egress queue. In effect, the configuration applet 161 assigns virtual queues, defined by the selected classes of traffic, to feed into the priority egress queue.
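The JNI 151 is an API boundary between interpreted applet code and native device code. As a rough illustration of that boundary only (not the patent's actual interface: the method names and the in-memory maps standing in for device hardware 141 are assumptions), a monitor-side read method and a configuration-side write method might look like:

```java
// Hypothetical sketch of a JNI-style management API. Method names
// (readLineUtilization, setQueueAssignment) are illustrative, not from the patent.
public class DeviceManagementSketch {

    public enum Queue { PRIORITY, BEST_EFFORT }

    // Simulated per-MAC utilization counters standing in for device hardware 141.
    private final java.util.Map<String, Double> utilization = new java.util.HashMap<>();
    private final java.util.Map<String, Queue> assignment = new java.util.HashMap<>();

    // Monitor-applet side: query line utilization for traffic to a destination MAC.
    public double readLineUtilization(String destMac) {
        return utilization.getOrDefault(destMac, 0.0);
    }

    // Configuration-applet side: request a queue assignment for a traffic class.
    public void setQueueAssignment(String destMac, Queue q) {
        assignment.put(destMac, q);
    }

    public Queue getQueueAssignment(String destMac) {
        return assignment.getOrDefault(destMac, Queue.PRIORITY);
    }

    // Test hook standing in for the ARU updating its counters.
    public void recordUtilization(String destMac, double percent) {
        utilization.put(destMac, percent);
    }
}
```

The two sides never touch hardware state directly; every read and write crosses the single interface, which is the property the JNI 151 provides.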
[0027] Although a Java virtual machine 149 and Java applets 157, 159, 161 have been described, other virtual machines, interpreters and scripting languages may be used in alternate embodiments. Also, as discussed below, more or fewer Java applets may be used to perform the monitoring, policy enforcement and configuration functions in alternate embodiments.
[0028] Fig. 6 illustrates an example of dynamic assignment of traffic classes to a priority queue. An exemplary network includes switches A and B coupled together at physical ports 32 and 1, respectively. Suppose that a network administrator or other user determines that an important server 175 on port 2 of switch A requires a relatively high quality of service (QoS), and that, at least in switch B, the required QoS can be provided by ensuring that at least 20% of the egress capacity of switch B, port 1 is reserved for traffic destined to the MAC address of the server 175. One way to ensure that 20% egress capacity is reserved for traffic destined for the server 175 is to assign priority queuing for packets destined to the MAC address of the server 175, but not for other traffic. While such an assignment would ensure priority egress for the server traffic, it also may result in unnecessarily high bandwidth allocation to the server 175, potentially starving other important traffic or causing other important traffic to become bottlenecked behind less important traffic in the best effort queue. For example, suppose that there are at least two other MAC address destinations, MAC address A and MAC address B, to which the user desires to assign priority queuing, so long as the egress capacity required by the server-destined traffic is available. In that case, it would be desirable to dynamically configure the MAC address A and MAC address B traffic to be queued in either the priority queue or the best effort queue according to existing traffic conditions. In at least one embodiment, this is accomplished using monitor, policy enforcement and configuration applets that have been downloaded to switch B and which are executed in a Java client in switch B as described above in reference to Fig. 5.
[0029] Fig. 6 includes exemplary pseudocode listings of monitor, policy enforcement and configuration applets 178, 179, 180 that can be used to ensure that at least 20% of the egress capacity of switch B, port 1 is reserved for traffic destined to the server 175, but without unnecessarily denying priority queuing assignment to traffic destined for MAC addresses A and B. After initialization, the monitor applet 178 repeatedly measures the port 1 line utilization from the device hardware. In one embodiment, the ARU in the I/O unit that manages port 1 keeps a count of the number of packets destined for particular egress ports, packets destined for particular MAC addresses, packets destined for particular VLANs, packets that form part of a particular IP flow, packets having a particular IP protocol, and so forth. The ARU also tracks the number of errors associated with these different classes of traffic, the number of packets from each class of traffic that are dropped, and other statistics. By determining the change in these different statistics per unit time, a utilization factor may be generated that represents the percent utilization of the capacity of an egress port, an I/O unit or the overall switch. Error rates and packet drop rates may also be generated.
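The utilization-factor computation described above (change in a counter per unit time, divided by the port's capacity over that time) can be sketched as follows. The use of byte counters, the parameter names and the line-rate argument are illustrative assumptions, not details from the patent:

```java
// Illustrative only: derives a percent line-utilization figure from two
// successive cumulative ARU byte counters, the sampling interval between
// the readings, and the port's line rate.
public class UtilizationSketch {

    // bytesBefore/bytesNow: cumulative byte counts for one traffic class;
    // intervalMs: milliseconds between the two readings;
    // lineRateBitsPerSec: egress port capacity.
    public static double utilizationPercent(long bytesBefore, long bytesNow,
                                            long intervalMs, long lineRateBitsPerSec) {
        double bitsSent = (bytesNow - bytesBefore) * 8.0;           // delta per interval
        double capacityBits = lineRateBitsPerSec * (intervalMs / 1000.0);
        return 100.0 * bitsSent / capacityBits;                     // percent of capacity
    }
}
```

For example, 25,000 bytes forwarded during a 10 ms interval on a 100 Mb/s port works out to 20% utilization. Error rates and drop rates follow the same delta-per-interval pattern with the error and drop counters.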
[0030] In one embodiment, the monitor applet 178 measures line utilization by invoking methods in the JNI to read the port 1 line utilization resulting from traffic destined for MAC address A and for MAC address B every 10 milliseconds.
[0031] The policy enforcement applet 179 includes variables to hold the line utilization percentage of traffic destined for MAC address A (A%), the line utilization percentage of traffic destined for MAC address B (B%), the queue assignment (i.e., priority or best effort) of traffic destined for the server MAC address (QA_S), the queue assignment of traffic destined for MAC address A (QA_A) and the queue assignment of traffic destined for MAC address B (QA_B). Also, a constant, DELTA, is defined to be 5%, and the queue assignments for the MAC address A, MAC address B and server MAC address traffic are initially set to the priority queue.
[0032] The policy enforcement applet 179 also includes a forever loop in which the line utilization percentages A% and B% are obtained from the monitor applet 178 and used to determine whether to change the queue assignments QA_A and QA_B. If the MAC address A traffic and the MAC address B traffic are both assigned to the priority queue (the initial configuration) and the sum of the line utilization percentages A% and B% exceeds 80%, then less than 20% line utilization remains for the server-destined traffic. In that event, the MAC address A traffic is reassigned from the priority queue to the best effort queue (code statement 181). If the MAC address A traffic is assigned to the best effort queue and the MAC address B traffic is assigned to the priority queue, then the MAC address A traffic is reassigned to the priority queue if the sum of the line utilization percentages A% and B% drops below 80% less DELTA (code statement 183). The DELTA parameter provides a deadband to prevent rapid changing of priority queue assignment.
[0033] If the MAC address A traffic is assigned to the best effort queue and the MAC address B traffic is assigned to the priority queue and the line utilization percentage B% exceeds 80%, then less than 20% line utilization remains for the server-destined traffic. Consequently, the MAC address B traffic is reassigned from the priority queue to the best effort queue (code statement 185). If the MAC address B traffic is assigned to the best effort queue and the line utilization percentage B% drops below 80% less DELTA, then the MAC address B traffic is reassigned to the priority queue (code statement 187). Although not specifically provided for in the exemplary pseudocode listing of Fig. 6, the policy enforcement applet 179 may treat the traffic destined for the MAC A and MAC B addresses more symmetrically by including additional statements to conditionally assign traffic destined for MAC address A to the priority queue, but not traffic destined for MAC address B. In the exemplary pseudocode listing of Fig. 6, the policy enforcement applet 179 delays for 5 milliseconds at the end of each pass through the forever loop before repeating.
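The reassignment rules of paragraphs [0032] and [0033], including the DELTA deadband, can be condensed into a single step function. This is a sketch of the described behavior, not a reproduction of Fig. 6: the 80% threshold, DELTA = 5% and the ordering of the tests follow the text, while the type and method names are assumptions.

```java
// Sketch of the policy enforcement loop body ([0032]-[0033]).
public class PolicySketch {
    public static final double THRESHOLD = 80.0;  // reserve 20% for server traffic
    public static final double DELTA = 5.0;       // deadband against rapid flapping

    public enum Q { PRIORITY, BEST_EFFORT }

    // One pass of the forever loop. qa[0] holds QA_A, qa[1] holds QA_B;
    // aPct/bPct are the measured line utilization percentages A% and B%.
    public static void step(double aPct, double bPct, Q[] qa) {
        if (qa[0] == Q.PRIORITY && qa[1] == Q.PRIORITY
                && aPct + bPct > THRESHOLD) {
            qa[0] = Q.BEST_EFFORT;                       // cf. code statement 181
        } else if (qa[0] == Q.BEST_EFFORT && qa[1] == Q.PRIORITY
                && aPct + bPct < THRESHOLD - DELTA) {
            qa[0] = Q.PRIORITY;                          // cf. code statement 183
        } else if (qa[0] == Q.BEST_EFFORT && qa[1] == Q.PRIORITY
                && bPct > THRESHOLD) {
            qa[1] = Q.BEST_EFFORT;                       // cf. code statement 185
        } else if (qa[1] == Q.BEST_EFFORT && bPct < THRESHOLD - DELTA) {
            qa[1] = Q.PRIORITY;                          // cf. code statement 187
        }
    }
}
```

Because promotion back to the priority queue requires dropping below 80% minus DELTA rather than 80%, a utilization hovering near the threshold cannot toggle the assignment on every pass.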
[0034] The configuration applet 180 includes variables, QA_A and QA_B, to hold the queue assignments of the traffic destined for the MAC addresses A and B, respectively. Variables LAST_QA_A and LAST_QA_B are also provided to record the history (i.e., most recent values) of the QA_A and QA_B values. The LAST_QA_A and LAST_QA_B variables are initialized to indicate that traffic destined for the MAC addresses A and B is assigned to the priority queue.
[0035] Like the monitor and policy enforcement applets 178, 179, the configuration applet 180 includes a forever loop in which a code sequence is executed followed by a delay. In the exemplary listing of Fig. 6, the first operation performed by the configuration applet 180 within the forever loop is to obtain the queue assignments QA_A and QA_B from the policy enforcement applet 179. If the queue assignment indicated by QA_A is different from the queue assignment indicated by LAST_QA_A, then a JNI method is invoked to request the device code to reconfigure the queue assignment of the traffic destined for MAC address A according to the new QA_A value. The new QA_A value is then copied into the LAST_QA_A variable so that subsequent queue assignment changes are detected. If the queue assignment indicated by QA_B is different from the queue assignment indicated by LAST_QA_B, then a JNI method is invoked to request the device code to reconfigure the queue assignment of the traffic destined for MAC address B according to the new QA_B value. The new QA_B value is then copied into the LAST_QA_B variable so that subsequent queue assignment changes are detected. By this operation, and the operation of the monitor and policy enforcement applets 178, 179, traffic destined for the MAC addresses A and B is dynamically assigned to the priority queue according to real-time evaluations of the traffic conditions in the switch.
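The configuration applet's compare-against-history pattern (write to the device only when an assignment differs from its recorded last value) can be sketched as below. The counter standing in for JNI reconfiguration requests is an illustrative assumption:

```java
// Sketch of the configuration applet's change detection ([0034]-[0035]).
// Only assignments that differ from the recorded history trigger a
// (simulated) device reconfiguration; applyCount stands in for JNI calls.
public class ConfigSketch {
    public enum Q { PRIORITY, BEST_EFFORT }

    private Q lastQaA = Q.PRIORITY;  // LAST_QA_A, initialized to priority
    private Q lastQaB = Q.PRIORITY;  // LAST_QA_B, initialized to priority
    public int applyCount = 0;       // reconfiguration requests issued so far

    // One pass of the forever loop: push assignments only when they changed.
    public void step(Q qaA, Q qaB) {
        if (qaA != lastQaA) { applyCount++; lastQaA = qaA; }  // reconfigure MAC A
        if (qaB != lastQaB) { applyCount++; lastQaB = qaB; }  // reconfigure MAC B
    }
}
```

Keeping the history in the configuration applet means the device is touched only on transitions, not on every 5 ms pass of the policy loop.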
[0036] Although a three-applet implementation is illustrated in Fig. 6, more or fewer applets may be used in an alternate embodiment. For example, the functions of the monitor, policy enforcement and configuration applets 178, 179, 180 may be implemented in a single applet. Alternatively, multiple applets may be provided to perform policy enforcement or other functions using different queue assignment criteria. For example, one policy enforcement applet may make priority queue assignments based on destination MAC addresses, while another policy enforcement applet makes priority queue assignments based on error rates or line utilization of higher level protocols. Multiple monitor applets or configuration applets may similarly be provided.
[0037] Although a queue assignment policy based on destination MAC address is illustrated in Fig. 6, myriad different queue assignment criteria may be used in other embodiments. For example, instead of monitoring and updating queue assignment based on traffic to destination MAC addresses, queue assignments may be updated based on other traffic patterns, including traffic to specified destination ports, traffic from specified source ports, traffic from specified source MAC addresses, traffic that forms part of a specified IP flow, traffic that is transmitted using a specified protocol (e.g., HTTP, FTP or other protocols) and so forth. Also, queue assignments may be updated based on environmental conditions such as time of day, changes in network configuration (e.g., due to failure or congestion at other network nodes), error rates, packet drop rates and so forth. Monitoring, policy enforcement and configuration applets that combine many or all of the above-described criteria may be implemented to provide sophisticated traffic handling capability in a packet forwarding device.
[0038] Although dynamic assignment of traffic classes to a priority egress queue has been emphasized, the methods and apparatuses described herein may alternatively be used to assign traffic classes to a hierarchical set of queues anywhere in a packet forwarding device including, but not limited to, ingress queues and queues associated with delivering packets to and receiving packets from the switching fabric. Further, although the queue assignment of traffic classes has been described in terms of a pair of queues (priority and best effort), additional queues in a prioritization hierarchy may be used without departing from the spirit and scope of the present invention.
[0039] In the foregoing specification, the invention
has been described with reference to specific exemplary
embodiments thereof. It will, however, be evident that
various modifications and changes may be made to the
specific exemplary embodiments without departing from
the broader spirit and scope of the invention as set forth
in the appended claims. Accordingly, the specification
and drawings are to be regarded in an illustrative rather
than a restrictive sense.
Claims
1. A method suitable for use within a packet forwarding device (17) characterised by the steps of:
   monitoring bandwidth consumption by one or more types of packet traffic (25) received in the packet forwarding device (17), comprising determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic associated with a particular destination network address;
   determining whether the bandwidth consumption by the one or more types of packet traffic (25) exceeds a threshold; and
   automatically changing assignment of at least one type of packet traffic of the one or more types of packet traffic from a queue having a first priority to a queue having a second priority if the bandwidth consumption exceeds the threshold.
2. The method of claim 1 further comprising receiving program code in the packet forwarding device (17) after installation of the packet forwarding device (17) in a packet communications network, and wherein said monitoring, determining and automatically changing are implemented by executing the program code.
3. The method of claim 2 wherein receiving the program code comprises receiving a sequence of virtual machine instructions and wherein executing the program code comprises executing the sequence of virtual machine instructions using a virtual machine included in the packet forwarding device (17).
4. The method of claim 3 wherein receiving the sequence of virtual machine instructions comprises receiving a sequence of Java byte codes and wherein executing the sequence of virtual machine instructions using a virtual machine comprises executing the sequence of Java byte codes in a Java virtual machine included in the packet forwarding device.
5. The method of claim 1 wherein monitoring bandwidth consumption by one or more types of packet traffic (25) received in the packet forwarding device (17) comprises determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic associated with a physical port on the forwarding device (17).
6. The method of claim 1 wherein determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic (25) associated with the particular network address comprises determining a measure of bandwidth consumption due to traffic (25) associated with a particular media access control, MAC, address.
7. The method of claim 1 wherein monitoring bandwidth consumption by one or more types of packet traffic received in the packet forwarding device (17) comprises determining a measure of bandwidth consumption in the packet forwarding device due to traffic associated with a particular communications protocol.
8. The method of claim 7 wherein determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic (25) associated with the particular communications protocol comprises determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic (25) associated with at least one of the following protocols: file transfer protocol, FTP, hyper-text transfer protocol, HTTP, transmission control protocol/internet protocol, TCP/IP.
9. A packet forwarding apparatus (17) comprising:
   a plurality of input/output, I/O, ports to transmit and receive packets of information;
   first and second queues to buffer the packets prior to transmission via one or more of the I/O ports, packets (25) buffered in the first queue having higher transmission priority than packets (25) buffered in the second queue, and characterised by
   queue assignment logic to assign the packets (25) to be buffered in either the first queue or the second queue according to a packet type associated with each packet (25), each of the packets (25) being associated with at least one of a plurality of packet types; and
   one or more agents (157) to monitor bandwidth consumption by packets (25) associated with a first packet type of the plurality of packet types, monitoring bandwidth consumption comprising determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic associated with a particular destination network address, and to automatically change assignment of packets (25) associated with the first packet type from the first queue to the second queue if bandwidth consumption of packets (25) associated with the first packet type exceeds a threshold.
10. The apparatus (17) of claim 9 further comprising:
   a processing unit (10) coupled to the plurality of I/O ports, the processing unit including a memory (32) and a processor (31); and
   a data communications interface to receive program code in the memory (32) of the processing unit (10) after installation of the packet forwarding device (17) in a packet communications network, and wherein the one or more agents are implemented by execution of the program code in the processor (31) of the processing unit (10).
11. The apparatus (17) of claim 10 wherein the packet forwarding device (17) further comprises program code that, when executed by the processing unit (10), implements a virtual machine, and wherein the program code received via the data communications interface comprises a sequence of instructions that is executed by the virtual machine to implement one or more agents.
12. The apparatus (17) of claim 11 wherein the program code received via the data communications interface includes a sequence of Java byte codes and wherein the virtual machine is a Java virtual machine.
13. The apparatus (17) of claim 9 wherein the first packet type comprises packets (25) associated with a particular one of the I/O ports.
14. The apparatus (17) of claim 9 wherein the first packet type comprises packets (25) associated with a particular network address.
15. The apparatus (17) of claim 14 wherein the particular network address is a particular media access control, MAC, address.
16. The apparatus (17) of claim 9 wherein the first packet type comprises packets (25) associated with a particular communications protocol.
17. The apparatus (17) of claim 16 wherein the particular communications protocol is a hyper-text transfer protocol, HTTP.
18. The apparatus (17) of claim 16 wherein the particular communications protocol is a file transfer protocol, FTP.
19. A communications network comprising a packet forwarding device (17), the packet forwarding device (17) including:
   a plurality of input/output, I/O, ports to transmit and receive packets (25) of information from one or more other devices in the communications network;
   first and second queues to buffer the packets (25) prior to transmission via one or more of the I/O ports, packets (25) buffered in the first queue having higher transmission priority than packets (25) buffered in the second queue; and characterised by
   queue assignment logic to assign the packets (25) to be buffered in either the first queue or the second queue according to a packet type associated with each packet (25), each of the packets (25) being associated with at least one of a plurality of packet types; and
   one or more agents (157) to monitor bandwidth consumption by packets (25) associated with a first packet type of the plurality of packet types, monitoring bandwidth consumption comprising determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic associated with a particular destination network address, and to automatically change assignment of packets (25) associated with the first packet type from the first queue to the second queue if bandwidth consumption of packets (25) associated with the first packet type exceeds a threshold.
20. The communications network of claim 19 wherein the packet forwarding device (17) further includes:
   a processing unit (10) coupled to the plurality of I/O ports, the processing unit (10) including a memory (32) and a processor (31); and
   a data communications interface to receive program code in the memory (32) of the processing unit (10) after installation of the packet forwarding device (17) in the communications network, and wherein the one or more agents are implemented by execution of the program code in the processor (31) of the processing unit (10).
21. The communications network of claim 20 wherein the packet forwarding device (17) further includes program code that, when executed by the processing unit (10), implements a virtual machine, and wherein the program code received via the data communications interface includes a sequence of instructions that is executed by the virtual machine to implement one or more agents.
Patentansprüche
1. Verfahren, das zur Verwendung in einer Paket-Wei-terleitungseinrichtung
(17) geeignet ist, gekenn-zeichnet
durch die folgenden Schritte:
Überwachen des Bandbreitenverbrauchs
durch einen oder mehrere Typen von in der Pa-ket-
Weiterleitungseinrichtung (17) empfange-nem
Paketverkehr (25) unter Einschluss der
Feststellung einer Messung des Bandbreiten-verbrauchs
in der Paket-Weiterleitungseinrich-tung
(17) aufgrund von Verkehr, der einer be-stimmten
Ziel-Netzwerkadresse zugeordnet
ist;
Feststellen, ob der Bandbreitenverbrauch
durch eine oder mehrere Typen des Paketver-kehrs
(25) einen Schwellenwert übersteigt; und
automatisches Ändern der Zuordnung von zu-mindest
einem Typ des Paketverkehrs des ei-nen
Typs oder der mehreren Typen von Paket-verkehr
von einer Warteschlange, die eine er-ste
Priorität hat, zu einerWarteschlange, die ei-ne
zweite Priorität hat, wenn der Bandbreifen-verbrauch
den Schwellenwert übersteigt.
2. Verfahren nach Anspruch 1, das weiterhin den
Empfang von Programmcode in der Paket-Weiter-leitungseinrichtung
(17) nach der Installation der
Paket-Weiterleitungseinrichtung (17) in einem Pa-ket-
KommunikationsNetzwerk umfasst, und bei
dem das Überwachen, Feststellen und automati-sche
Ändern durch die Ausführung des Programm-codes
realisiert wird.
3. Verfahren nach Anspruch 2, bei dem das Empfan-gen
des Programmcodes das Empfangen einer Fol-ge
von virtuellen Maschinenbefehlen umfasst, und
bei dem die Ausführung des Programmcodes die
Ausführung der Folge von virtuellen Maschinenbe-fehlen
unter Verwendung einer virtuellen Maschine
umfasst, die in der PaketWeiterteitungseinrichtung
(17) enthalten ist.
4. Verfahren nach Anspruch 3, bei dem der Empfang
der Folge von virtuellen Maschinenbefehlen den
Empfang einer Folge von Java-Bytecodes umfasst,
und bei dem die Ausführung der Folge von virtuel-len
Maschinenbefehlen unter Verwendung einer Ja-va-
Maschine die Ausführung der Folge von Java-
Bytecodes in einer virtuellen Java-Maschine um-fasst,
die in der der Paket-Weiterleitungseinrich-tung
enthalten ist.
5. Verfahren nach Anspruch 1, bei dem die Überwa-chung
des Bandbreitenverbrauchs durch einen
oder mehrere Typen von Paketverkehr (25), der in
der Paket-Weiterleitungseinrichtung (17) empfan-gen
wird, die Feststellung einer Messung des Band-breitenverbrauchs
in der Paket-Weiterleitungsein-richtung
(17) aufgrund des Verkehrs umfasst, der
einem physikalischen Port auf der Weiterleitungs-einrichtung
(17) zugeordnet ist.
6. Verfahren nach Anspruch 1, bei dem die Feststel-lung
einer Messung des Bandbreitenverbrauchs in
der Paket-Weüerteitungseinrichtung (17) aufgrund
des der bestimmten Netzwerkadresse zugeordne-ten
Verkehrs (25) die Feststellung einer Messung
des Bandbreitenverbrauchs aufgrund des Verkehrs
(25) umfasst, der einer bestimmten Medienzu-gangssteuer-
(MAC-) Adresse zugeordnet ist.
7. Verfahren nach Anspruch 1, bei dem die Überwa-chung
des Bandbreitenverbrauchs durch einen
oder mehrere Typen von Paketverkehr, der in der
Paket-Weiterleitungseinrichtung (17) empfangen
wird, die Feststellung einer Messung des Bandbrei-tenverbrauchs
in der Paket-Weiterleitungseinrich-tung
aufgrund des Verkehrs umfasst, der einem be-stimmten
Kommunikationssprotokoll zugeordnet
ist.
8. Verfahren nach Anspruch 7, bei dem die Feststel-lung
einer Messung des Bandbreitenverbrauchs in
der Paket-Weiterleitungseinrichtung (17) aufgrund
von Verkehr (25), der dem bestimmten Kommuni-kationsprotokoll
zugeordnet ist, die Feststellung ei-ner
Messung des Bandbreitenverbrauchs in der Pa-ket-
Weiterleitungseinrichtung (17) aufgrund von
Verkehr (25) umfasst, der zumindest einem der fol-genden
Protokolle zugeordnet ist: Dateiübertra-gungsprotokoll,
FTP, Hypertext-Übertragungspro-tokoll,
HTTP, Übertragungssteuerprotokoll/Inter-netprotokoll,
TCP/IP.
9. Paket-Weiterleitungseinrichtung (17) mit:
einer Mehrzahl von Eingangs/Ausgangs-, I/O-,
Ports zum Senden und Empfangen von Pake-ten
von Informationen;
ersten und zweiten Warteschlangen zum Puf-fern
der Pakete vor der Aussendung über einen
oder mehrere der I/O-Ports, wobei Pakete (25),
die in der ersten Warteschlange gepuffert wer-den,
eine höhere Übertragungspriorität haben,
als Pakete (25), die in der zweiten Warte-schlange
gepuffert werden, und gekennzeich-net
durch:
eine Warteschlangen-Zuordnungslogik
zum Zuordnen der zu puffernden Pakete
(25) in entweder der erstenWarteschlange
oder der zweiten Warteschlange entspre-chend
einem Paketiyp, der jedem Paket
(25) zugeordnet ist, wobei jedes der Pake-
11. EP 1 142 213 B1
19 20
5
10
15
20
25
30
35
40
45
50
55
11
te (25) zumindest einem einer Vielzahl von
Pakettypen zugeordnet ist; und
einen oder mehrere Agenten (157) zur
Überwachung des Bandbreitenverbrauchs
durch Pakete (25), die einem ersten Pa-kettyp
der Vielzahl von Pakettypen zuge-ordnet
sind, wobei die Überwachung des
Bandbreitenverbrauchs die Feststellung
einer Messung des Bandbreitenver-brauchs
in der Paket-Weiterleitungsein-richtung
(17) aufgrund von Verkehr um-fasst,
der einer bestimmten Ziel-Netzwerk-adresse
zugeordnet ist, und zur automati-schen
Änderung der Zuordnung von Pake-ten
(25), die dem ersten Pakettyp zugeord-net
sind, von der ersten Warteschlange zu
der zweiten Warteschlange, wenn der
Bandbreitenverbrauch von Paketen (25),
die dem ersten Pakettyp zugeordnet sind,
einen Schwellenwert übersteigt.
10. Vorrichtung (17) nach Anspruch 9, die weiterhin fol-gendes
umfasst:
eine Verarbeitungseinheit (10), die mit der Viel-zahl
von I/O-Ports verbunden ist, wobei die Ver-arbeitungseinheit
einen Speicher (32) und ei-nen
Prozessor (31) einschließt; und
eine Datenkommunikations-Schnittstelle zum
Empfang von Programmcode in dem Speicher
(32) der Verarbeitungseinheit (10) nach der In-stallation
der Paket-Weiterleitungseinrichtung
(17) in einem Paket-Kommunikations-Netz-werk,
wobei der eine oder mehrere Agenten
durch Ausführen des Programmcodes in dem
Prozessor (31) der Verarbeitungseinheit (10)
realisiert wird (werden).
11. Vorrichtung (17) nach Anspruch 10, bei der die Pa-ket-
Weiterleitungseinrichtung (17) weiterhin Pro-grammcode
umfasst, der bei seiner Ausführung
durch die Verarbeitungseinheit (10) eine virtuelle
Maschine realisiert, und bei dem der über die Da-tenkommunikations-
Schnittstelle empfangene Pro-grammcode
eine Folge von Befehlen umfasst, die
von der virtuellen Maschine ausgeführt werden, um
einen oder mehrere Agenten zu realisieren.
12. Vorrichtung (17) nach Anspruch 11, bei dem der
über die Datenkommunikations-Schnittstelle emp-fangene
Programmcode eine Folge von Java-Byte-codes
umfasst, und bei der die virtuelle Maschine
eine virtuelle Java-Maschine ist.
13. Vorrichtung (17) nach Anspruch 9, bei der der erste
Pakettyp Pakete (25) umfasst, die einen bestimm-ten
einen der I/O-Ports zugeordnet sind.
14. Vorrichtung (17) nach Anspruch 9, bei dem der er-ste
Pakettyp Pakete (25) umfasst, die einer be-stimmten
Netzwerkadresse zugeordnet sind.
15. Vorrichtung (17) nach Anspruch14, bei dem die be-stimmte
Netzwerkadresse eine bestimmte Medien-zugriffssteuer-,
MAC, Adresse ist.
16. Vorrichtung (17) nach Anspruch 9, bei der der erste
Pakettyp Pakete (25) umfasst, die einen bestimm-ten
Kommunikationsprotokoll zugeordnet sind.
17. Vorrichtung (17) nach Anspruch 16, bei dem das
bestimmte Kommunikationsprotokoll ein Hypertext-Übertragungsprotokoll,
HTTP, ist.
18. Vorrichtung (17) nach Anspruch 16, bei dem das
bestimmte Kommunikationsprotokoll ein Dateiüber-tragungsprotokoll,
FTP, ist.
19. Kommunika5onsnetzwerk, das eine Paket-Weiter-leitungseinrichtung
(17) umfasst, wobei die Paket-
Weiterleitungseinrichtung (17) folgendes ein-schließt:
eine Vielzahl von Eingangs-/Ausgangs-, I/O-,
Ports zum Senden und Empfangen von Pake-ten
(25) an Informationen von einer oder meh-reren
anderen Einrichtungen in dem Kommuni-kationsnetzwerk;
erste und zweite Warteschlangen zum Puffern
der Pakete (25) vor der Aussendung über einen
oder mehrere I/O-Ports, wobei in der ersten
Warteschlange gepufferte Pakete (25) eine hö-here
Übertragungspriorität als Pakete (25) ha-ben,
die in der zweiten Warteschlange gepuf-fert
werden, gekennzeichnet durch:
eineWarteschlangen-Zuordnungslogik zur
Zuordnung der Pakete (25), die in entwe-der
der ersten Warteschlange oder der
zweiten Warteschlange zu puffern sind,
entsprechend einem Pakettyp, der jedem
Paket (25) zugeordnet ist, wobei jedes der
Pakete (25) zumindest einem einer Viel-zahl
von Pakettypen zugeordnet ist; und
einen oder mehrere Agenten (157) zur
Überwachung des Bandbreitenverbrauchs
durch Pakete (25), die einem ersten Pa-kettyp
der Vielzahl von Pakettypen zuge-ordnet
sind, wobei die Überwachung des
Bandbreitenverbrauchs die Feststellung
einer Messung des Bandbreitenver-brauchs
in der Paket-Weiterleitungsein-richtung
(17) aufgrund von Verkehr um-fasst,
der einer bestimmten Ziel-Netzwerk-adresse
zugeordnet ist, und zur automati-schen
Änderung der Zuordnung von Pake-
12. EP 1 142 213 B1
21 22
5
10
15
20
25
30
35
40
45
50
55
12
ten (25), die dem ersten Pakettyp zugeord-net
sind, von der ersten Warteschlange zu
der zweiten Warteschlange, wenn der
Bandbreitenverbrauch von Paketen (25),
die dem ersten Pakettyp zugeordnet sind,
einen Schwellenwert übersteigt.
20. The communication network of claim 19, wherein the packet forwarding device (17) further includes:

    a processing unit (10) coupled to the plurality of I/O ports, the processing unit (10) including a memory (32) and a processor (31); and
    a data communications interface for receiving program code into the memory (32) of the processing unit (10) after installation of the packet forwarding device (17) in the communication network, wherein the one or more agents are implemented by execution of the program code in the processor (31) of the processing unit (10).
21. The communication network of claim 20, wherein the packet forwarding device (17) further includes program code which, when executed by the processing unit (10), implements a virtual machine, and wherein the program code received via the data communications interface includes a sequence of instructions that are executed by the virtual machine to implement the one or more agents.
Claims
1. A method suitable for use in a packet forwarding device (17), characterized by the steps of:

    monitoring bandwidth consumption by one or more types of packet traffic (25) received in the packet forwarding device (17), including determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic associated with a particular destination network address;
    determining whether the bandwidth consumption by the one or more types of packet traffic (25) exceeds a threshold; and
    automatically changing the assignment of at least one type of packet traffic of the one or more types of packet traffic from a queue having a first priority to a queue having a second priority if the bandwidth consumption exceeds the threshold.
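The claimed method — monitor per-type bandwidth, test it against a threshold, then move an over-threshold traffic type from a higher-priority to a lower-priority queue — can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the class and method names (`QueueAssigner`, `record_packet`, `evaluate`) and the interval-based byte-counting scheme are assumptions.

```python
# Illustrative sketch of the claim-1 method: monitor bandwidth per traffic
# type, then demote a type's queue assignment once consumption exceeds a
# threshold. All names are hypothetical, not from the patent.

HIGH, LOW = 0, 1  # queue priorities: first (high) and second (low) queue

class QueueAssigner:
    def __init__(self, threshold_bytes_per_s):
        self.threshold = threshold_bytes_per_s
        self.assignment = {}    # packet type -> queue priority
        self.byte_counts = {}   # packet type -> bytes seen this interval

    def record_packet(self, packet_type, size_bytes):
        """Monitoring step: accumulate bandwidth per traffic type."""
        self.byte_counts[packet_type] = (
            self.byte_counts.get(packet_type, 0) + size_bytes)
        self.assignment.setdefault(packet_type, HIGH)

    def evaluate(self, interval_s):
        """Determining + changing steps: demote any type over the threshold."""
        for ptype, count in self.byte_counts.items():
            if count / interval_s > self.threshold:
                self.assignment[ptype] = LOW  # first -> second priority queue
        self.byte_counts.clear()
        return self.assignment
```

For example, with a threshold of 1000 bytes/s, a type that consumed 5000 bytes in a one-second interval would be reassigned to the lower-priority queue while lighter traffic keeps its original assignment.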
2. The method of claim 1, further comprising receiving program code in the packet forwarding device (17) after installation of the packet forwarding device (17) in a packet communication network, and wherein the monitoring, the determining, and the automatic changing are performed by the program code as it executes.
3. The method of claim 2, wherein receiving the program code comprises receiving a sequence of virtual machine instructions, and wherein executing the program code comprises executing the sequence of virtual machine instructions using a virtual machine included in the packet forwarding device (17).
4. The method of claim 3, wherein receiving the sequence of virtual machine instructions comprises receiving a sequence of Java bytecodes, and wherein executing the sequence of virtual machine instructions using a virtual machine comprises executing the sequence of Java bytecodes in a Java virtual machine included in the packet forwarding device.
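Claims 2–4 have the monitoring agent arrive as program code (e.g., Java bytecodes) after the device is installed, to be executed by a virtual machine already resident in the device. As a loose analogue only — Python's runtime standing in for the Java virtual machine, with a hypothetical agent source and helper name — the pattern of installing downloaded agent code into a running interpreter might look like this:

```python
# Loose analogue of claims 2-4: agent logic is delivered as source after
# deployment and executed by the device's resident runtime. Python's
# compile/exec stands in for a Java VM; AGENT_SOURCE and load_agent are
# illustrative names, not from the patent.

AGENT_SOURCE = """
def agent(byte_count, threshold):
    # Returns True when the monitored traffic should be demoted.
    return byte_count > threshold
"""

def load_agent(source):
    """Install received agent code into the device's runtime namespace."""
    namespace = {}
    exec(compile(source, "<downloaded-agent>", "exec"), namespace)
    return namespace["agent"]

agent = load_agent(AGENT_SOURCE)
```

The design point the claims make is late binding: the forwarding device ships with only the execution environment, and the monitoring policy itself can be replaced in the field.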
5. The method of claim 1, wherein monitoring bandwidth consumption by one or more types of packet traffic (25) received in the packet forwarding device (17) comprises determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic associated with a physical port on the forwarding device (17).
6. The method of claim 1, wherein determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic (25) associated with the particular network address comprises determining a measure of bandwidth consumption due to traffic (25) associated with a particular media access control, MAC, address.
7. The method of claim 1, wherein monitoring bandwidth consumption by one or more types of packet traffic received in the packet forwarding device (17) comprises determining a measure of bandwidth consumption in the packet forwarding device due to traffic associated with a particular communication protocol.
8. The method of claim 7, wherein determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic (25) associated with the particular communication protocol comprises determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic (25) associated with at least one of the following protocols: file transfer protocol, FTP; hypertext transfer protocol, HTTP; transmission control protocol/Internet protocol, TCP/IP.
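Claim 8 attributes bandwidth to particular protocols (FTP, HTTP, TCP/IP). One common way to make that attribution — assumed here purely for illustration, not stated in the claims — is to classify packets by well-known TCP destination port and tally bytes per protocol label:

```python
# Illustrative per-protocol bandwidth accounting via well-known TCP ports.
# The port mapping and function names are assumptions for this sketch.

WELL_KNOWN_PORTS = {20: "FTP", 21: "FTP", 80: "HTTP"}

def classify(dst_port):
    """Return the protocol label used for per-protocol accounting."""
    return WELL_KNOWN_PORTS.get(dst_port, "TCP/IP")  # generic fallback

def tally(packets):
    """packets: iterable of (dst_port, size_bytes) -> bytes per protocol."""
    totals = {}
    for port, size in packets:
        label = classify(port)
        totals[label] = totals.get(label, 0) + size
    return totals
```

A monitoring agent could then compare each protocol's total against the threshold of claim 1 to decide which traffic type to reassign.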
9. A packet forwarding apparatus (17) comprising:

    a plurality of input/output, I/O, ports for transmitting and receiving packets of information;
    first and second queues for buffering the packets prior to transmission via one or more of the I/O ports, packets (25) buffered in the first queue having a higher transmission priority than packets (25) buffered in the second queue; and characterized by:
    queue assignment logic for assigning the packets (25) to be buffered in either the first queue or the second queue according to a packet type associated with each packet (25), each of the packets (25) being associated with at least one of a plurality of packet types; and
    one or more agents (157) for monitoring bandwidth consumption by packets (25) associated with a first packet type of the plurality of packet types, the monitoring of bandwidth consumption comprising determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic associated with a particular destination network address, and for automatically changing the assignment of packets (25) associated with the first packet type from the first queue to the second queue if the bandwidth consumption by packets (25) associated with the first packet type exceeds a threshold.
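The apparatus of claim 9 can be sketched as a pair of queues drained in strict priority order, with assignment logic choosing a queue from each packet's type. A minimal sketch, with all names (`TwoQueuePort`, `enqueue`, `dequeue`) assumed rather than taken from the patent:

```python
# Minimal sketch of the claim-9 forwarding path: two queues, the first
# drained strictly before the second, and queue assignment logic that
# selects a queue per packet type. Names are illustrative only.
from collections import deque

class TwoQueuePort:
    def __init__(self, low_priority_types):
        self.first = deque()   # higher transmission priority
        self.second = deque()  # lower transmission priority
        self.low_priority_types = set(low_priority_types)

    def enqueue(self, packet_type, payload):
        """Queue assignment logic: pick a queue from the packet's type."""
        q = self.second if packet_type in self.low_priority_types else self.first
        q.append(payload)

    def dequeue(self):
        """Transmit: serve the first queue strictly before the second."""
        if self.first:
            return self.first.popleft()
        if self.second:
            return self.second.popleft()
        return None
```

The agents of claim 9 would act by moving a packet type into or out of `low_priority_types`, which is exactly the "change of assignment" the claim recites.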
10. The apparatus (17) of claim 9, further comprising:

    a processing unit (10) coupled to the plurality of I/O ports, the processing unit including a memory (32) and a processor (31); and
    a data communications interface for receiving program code into the memory (32) of the processing unit (10) after installation of the packet forwarding device (17) in a packet communication network, wherein the one or more agents are implemented by execution of the program code in the processor (31) of the processing unit (10).
11. The apparatus (17) of claim 10, wherein the packet forwarding device (17) further comprises program code which, when executed by the processing unit (10), implements a virtual machine, and wherein the program code received via the data communications interface comprises a sequence of instructions that is executed by the virtual machine to implement the one or more agents.
12. The apparatus (17) of claim 11, wherein the program code received via the data communications interface comprises a sequence of Java bytecodes, and wherein the virtual machine is a Java virtual machine.
13. The apparatus (17) of claim 9, wherein the first packet type comprises packets (25) associated with a particular one of the I/O ports.

14. The apparatus (17) of claim 9, wherein the first packet type comprises packets (25) associated with a particular network address.

15. The apparatus (17) of claim 14, wherein the particular network address is a particular media access control, MAC, address.

16. The apparatus (17) of claim 9, wherein the first packet type comprises packets (25) associated with a particular communication protocol.

17. The apparatus (17) of claim 16, wherein the particular communication protocol is a hypertext transfer protocol, HTTP.

18. The apparatus (17) of claim 16, wherein the particular communication protocol is a file transfer protocol, FTP.
19. A communication network comprising a packet forwarding device (17), the packet forwarding device (17) including:

    a plurality of input/output, I/O, ports for transmitting and receiving packets (25) of information to and from one or more other devices in the communication network;
    first and second queues for buffering the packets (25) prior to transmission via one or more of the I/O ports, packets (25) buffered in the first queue having a higher transmission priority than packets (25) buffered in the second queue; and characterized by:
    queue assignment logic for assigning the packets (25) to be buffered in either the first queue or the second queue according to a packet type associated with each packet (25), each of the packets (25) being associated with at least one of a plurality of packet types; and
    one or more agents (157) for monitoring bandwidth consumption by packets (25) associated with a first packet type of the plurality of packet types, the monitoring of bandwidth consumption comprising determining a measure of bandwidth consumption in the packet forwarding device (17) due to traffic associated with a particular destination network address, and for automatically changing the assignment of packets (25) associated with the first packet type from the first queue to the second queue if the bandwidth consumption by packets (25) associated with the first packet type exceeds a threshold.
20. The communication network of claim 19, wherein the packet forwarding device (17) further comprises:

    a processing unit (10) coupled to the plurality of I/O ports, the processing unit (10) including a memory (32) and a processor (31); and
    a data communications interface for receiving program code into the memory (32) of the processing unit (10) after installation of the packet forwarding device (17) in the communication network, wherein the one or more agents are implemented by execution of the program code in the processor (31) of the processing unit (10).
21. The communication network of claim 20, wherein the packet forwarding device (17) further comprises program code which, when executed by the processing unit (10), implements a virtual machine, and wherein the program code received via the data communications interface comprises a sequence of instructions that is executed by the virtual machine to implement the one or more agents.