This document evaluates and compares the performance of seven high-speed TCP congestion control protocols: Bic TCP, Cubic TCP, Hamilton TCP, HighSpeed TCP, Illinois TCP, Scalable TCP and YeAH TCP. It first provides background on the need for high-speed congestion control as internet speeds have increased. It then summarizes the algorithms and mechanisms of each protocol. The document aims to simulate and compare the performance of these protocols with multiple flows, in order to determine the best approach for high-speed networks.
COMPARISON OF HIGH SPEED CONGESTION CONTROL PROTOCOLS (IJNSA Journal)
Congestion control limits the amount of data injected into the network to a rate lower than the transmission rate, ensuring good performance and protecting the network against overload and blocking. Researchers have done a great deal of work on improving congestion control protocols, especially for high-speed networks. In this paper, we study congestion control along with low- and high-speed congestion control protocols. We also simulate, evaluate, and compare eight high-speed congestion control protocols: Bic TCP, Cubic TCP, Hamilton TCP, HighSpeed TCP, Illinois TCP, Scalable TCP, Compound TCP, and YeAH TCP, with multiple flows.
This document proposes a technique called Differential Distributed Space-Time Coding OFDM (D-DSTC OFDM) to address synchronization issues in asynchronous cooperative relay networks. It discusses how conventional distributed space-time coding suffers from inter-symbol interference due to relay synchronization errors. The proposed D-DSTC OFDM approach uses differential encoding, circular time reversal, and OFDM modulation to eliminate the need for channel state information or strict relay synchronization. Simulation results show the BER performance of D-DSTC OFDM degrades gracefully with increasing synchronization errors, outperforming coherent detection schemes. The key advantages and limitations of the D-DSTC OFDM approach are summarized.
Joint blind calibration and time-delay estimation for multiband ranging (Tarik Kazaz)
In this presentation, we focus on the problem of blind joint calibration of multiband transceivers and time-delay (TD) estimation of multipath channels. We show that this problem can be formulated as a particular case of covariance matching. Although this problem is severely ill-posed, prior information about radio-frequency chain distortions and multipath channel sparsity is used for regularization. This approach leads to a biconvex optimization problem, which is formulated as a rank-constrained linear system and solved by a simple group Lasso algorithm.
This method is general and can also be applied to the calibration of sensor arrays and to direction-of-arrival estimation.
Numerical experiments show that the proposed algorithm provides better calibration and higher resolution for TD estimation than current state-of-the-art methods.
SSL is implemented for secure communication over the network; it is essential for establishing a tunnel between Queue Managers. The SHA-256 algorithm is used here as the hash function in the cipher suite (SHA-256 performs hashing for message integrity rather than encryption and decryption).
This document contains the agenda for a workshop on Multi-Gbps TCP. The workshop includes presentations on high speed networks, LHC networks, TCP/AQM protocols and their duality model, control theory and stability of TCP/AQM, FAST simulations, and related TCP kernel projects. Presentations will be given by researchers from Caltech, CERN, UCLA, and INRIA on topics such as TCP protocols, active queue management, stability analysis, and simulations. There will also be discussion periods following some of the presentations.
This document contains a Java program that implements the Cyclic Redundancy Check (CRC) algorithm for error detection in data transmission. The CRC algorithm uses polynomial arithmetic to compute a checksum over transmitted data by dividing the data by a fixed generator polynomial. The checksum is appended to the data before transmission. On the receiving end, the data and checksum are divided again by the generator polynomial. If the remainder is non-zero, then an error is detected in the received data. The Java program takes input data and generator polynomial from the user, computes the CRC checksum, transmits the data with checksum, and checks for errors on receiving end.
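The polynomial division the summary describes can be sketched in a few lines of Python rather than Java (the variable names and the 10-bit example frame below are mine, not taken from the original program):

```python
def crc_remainder(data_bits: str, generator: str) -> str:
    """Mod-2 polynomial division: remainder of data_bits shifted left by k-1 zeros."""
    k = len(generator)
    padded = list(data_bits + "0" * (k - 1))   # make room for the checksum
    for i in range(len(data_bits)):
        if padded[i] == "1":                   # XOR the generator in at each set bit
            for j in range(k):
                padded[i + j] = str(int(padded[i + j]) ^ int(generator[j]))
    return "".join(padded[-(k - 1):])

def crc_check(frame: str, generator: str) -> bool:
    """Receiver side: the remainder of the whole frame must be all zeros."""
    k = len(generator)
    bits = list(frame)
    for i in range(len(frame) - (k - 1)):
        if bits[i] == "1":
            for j in range(k):
                bits[i + j] = str(int(bits[i + j]) ^ int(generator[j]))
    return set(bits[-(k - 1):]) == {"0"}

data, gen = "1101011011", "10011"          # generator polynomial x^4 + x + 1
frame = data + crc_remainder(data, gen)    # transmitted frame: data + checksum
```

For this classic textbook frame the checksum works out to 1110, and flipping any single bit of the transmitted frame makes `crc_check` fail, since a generator with more than one term detects all single-bit errors.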
This document summarizes continuous-wave modulation techniques. It discusses amplitude modulation (AM), including the transmitter and receiver, and limitations of AM. It also covers linear modulation schemes like double sideband-suppressed carrier, single sideband, and vestigial sideband modulation. Frequency modulation techniques are analyzed, including narrowband and wideband FM. Bessel functions are used to describe the spectrum of wideband FM signals. Carson's rule and universal curves are presented for calculating the transmission bandwidth of FM signals.
CUBIC is a TCP congestion control algorithm that uses a cubic window growth function to help TCP scale better in high bandwidth-delay product networks. It aims to improve scalability, stability, and fairness compared to previous algorithms like BIC-TCP. The window growth is independent of round-trip time and becomes nearly zero around the maximum window size to help stability. CUBIC also incorporates features to help it behave friendly to standard TCP implementations.
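The cubic growth function described above can be sketched as follows; the constants C = 0.4 and β = 0.7 are taken from the CUBIC paper/RFC 8312 and are an assumption here, since the summary does not state them:

```python
# Assumed constants from the CUBIC paper / RFC 8312: C is the scaling factor,
# BETA the multiplicative decrease applied to the window on a loss event.
C, BETA = 0.4, 0.7

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window (in packets) t seconds after the last loss event.

    w_max is the window size at which the last loss occurred; K is the time
    the cubic takes to grow back to w_max if no further losses occur.
    """
    k = (w_max * (1 - BETA) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max
```

Immediately after a loss (t = 0) the window is β·w_max; it plateaus near w_max around t = K (the concave region that gives CUBIC its stability near the previous maximum) and then probes for more bandwidth in the convex region beyond. Note that the expression depends on t alone, not on the RTT, which is what makes the growth RTT-independent.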
The document discusses dense linear algebra solvers and algorithms. It provides an overview of existing software for dense linear algebra including LINPACK, EISPACK, LAPACK, ScaLAPACK, PLASMA, and MAGMA. It then discusses challenges with dense linear algebra on modern hardware including distributed memory, heterogeneity, and the high cost of communication. It introduces tile algorithms as an approach to address these challenges compared to traditional LAPACK algorithms.
The document discusses error detection techniques used in direct link networks, focusing on cyclic redundancy checks (CRCs) and internet checksum algorithms. It provides an overview of CRCs, describing how they make use of polynomial arithmetic modulo 2 to generate a redundant checksum that is sent along with messages. The document also gives examples of how CRCs can be used to detect errors by dividing the received polynomial by the generator polynomial and checking if the remainder is zero. Finally, it compares error detection, which requires retransmissions, to error correction, which has overhead on all messages.
The document discusses the effect of multi-tone jamming on a frequency hopped orthogonal frequency division multiple access (FH-OFDMA) system using orthogonal hopping patterns. It presents the baseband structure of an FH-OFDMA system and describes how data is transmitted over multiple subcarriers assigned according to an orthogonal hopping sequence. The paper analyzes the bit error rate performance of the FH-OFDMA system under multi-tone jamming in additive white Gaussian noise and Rayleigh fading channels.
The document discusses TCP congestion control algorithms. It describes the Additive Increase Multiplicative Decrease (AIMD) approach where the congestion window (cwnd) is increased linearly but reduced by half when packet loss is detected. Slow start is used to quickly ramp up cwnd initially through exponential growth. Fast retransmit detects lost packets using duplicate ACKs to retransmit earlier. Fast recovery then resumes increasing cwnd after a retransmit. The document also examines algorithms for adaptive retransmission timeouts based on mean and variance of measured round-trip times.
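The mean-and-variance retransmission timer mentioned at the end is presumably the Jacobson/Karels estimator; a minimal sketch, assuming the RFC 6298 constants (α = 1/8, β = 1/4, RTO = SRTT + 4·RTTVAR):

```python
class RetransmitTimer:
    """Adaptive retransmission timeout from smoothed RTT mean and variance.

    Constants follow RFC 6298 -- an assumption, since the document does not
    name a specific variant of the algorithm.
    """
    ALPHA, BETA = 1 / 8, 1 / 4

    def __init__(self) -> None:
        self.srtt = None     # smoothed round-trip time
        self.rttvar = None   # smoothed mean deviation of the RTT

    def update(self, sample: float) -> float:
        """Feed one RTT measurement (seconds); return the new RTO."""
        if self.srtt is None:                 # first measurement bootstraps both
            self.srtt = sample
            self.rttvar = sample / 2
        else:                                 # exponentially weighted updates
            self.rttvar = (1 - self.BETA) * self.rttvar \
                          + self.BETA * abs(self.srtt - sample)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample
        return self.srtt + 4 * self.rttvar
```

Weighting the timeout by the measured variance (rather than a fixed multiple of the mean) is what lets the timer stay tight on stable paths yet back off quickly when RTTs become erratic.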
Investigating the Use of Synchronized Clocks in TCP Congestion Control (Michele Weigle)
My PhD defense
May 14, 2003
University of North Carolina, Chapel Hill
Investigating the Use of Synchronized Clocks in TCP Congestion Control
Advisor: Kevin Jeffay
Queue Manager both as sender and Receiver.docx (sarvank2)
A queue manager provides assured, once-and-only-once delivery. It is a very good technology for message routing. Here the Queue Manager acts as both sender and receiver.
Sparse Random Network Coding for Reliable Multicast Services (Andrea Tassi)
Point-to-Multipoint communications are expected to play a pivotal role in next-generation networks. This talk considers a cellular system transmitting layered multicast services to a Multicast Group (MG) of users. Reliability of communications is ensured via different Random Linear Network Coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations, the more the user's computational overhead grows and, consequently, the faster the batteries of mobile devices drain. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed model makes it possible to efficiently derive the average number of coded packet transmissions needed to recover one or more service layers. We design a convex resource allocation framework that minimizes the complexity of the RLNC decoder by jointly optimizing the transmission parameters and the sparsity of the code. The framework also ensures service guarantees to predetermined fractions of users. Its performance is then investigated in an LTE-A eMBMS network multicasting H.264/SVC video.
This document summarizes a research paper that proposes using Discrete Logarithmic Method (DLM) to reduce Peak-to-Average Power Ratio (PAPR) in Multi-Carrier Code Division Multiple Access (MC-CDMA) wireless systems. It describes how DLM, previously used in cryptography, can be applied to encode binary data as triplets for transmission. The encoding reduces PAPR without increasing bit error rate. It then outlines the MC-CDMA transmitter and receiver block diagrams, explaining how binary to DLM encoding is used before transmission and DLM to binary decoding is used upon reception. Simulation results showed over 5dB reduction in PAPR.
1) The document discusses the leaching of PbS concentrate from a particle size distribution over time in multiple tanks. Equations are provided to model reaction rate constants, residence time distributions, and fractional leaching.
2) Graphs and calculations show that larger particle sizes require longer times for over 97.5% lead extraction due to reaction-rate effects.
3) Using a Rosin-Rammler particle size distribution, the model predicts over 98% extraction requires around 35 minutes of residence time.
4) For a given feed rate and 96% conversion, 10 mixed flow tanks are shown to require less time than a single tank due to their more uniform residence time distributions approaching plug flow.
1) The document describes deriving the probability of blocking for a blocking system using an LCC infinite source traffic model. It shows the derivation of the blocking probability formula using Markov chains.
2) The blocking probability formula is then simulated using MATLAB to generate a graph of blocking probability versus traffic intensity for different numbers of channels.
3) The graph can be used for network planning to determine the maximum traffic or required channels for a given blocking probability goal.
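For an LCC (lost-calls-cleared) infinite-source model, the blocking probability described above is given by the Erlang-B formula; a minimal sketch of its numerically stable recursive form (the MATLAB simulation itself is not reproduced here):

```python
def erlang_b(traffic: float, channels: int) -> float:
    """Blocking probability for `traffic` Erlangs offered to `channels` servers.

    Uses the stable recurrence B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1)),
    which avoids the overflow-prone factorials of the direct formula.
    """
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b

# A small planning table: blocking probability vs. offered traffic for 5 channels
for a in (1.0, 2.0, 5.0):
    print(f"A = {a:4.1f} E  ->  B = {erlang_b(a, 5):.4f}")
```

Sweeping `traffic` for several channel counts reproduces the kind of blocking-versus-intensity curves the summary mentions for network planning.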
This document summarizes key concepts about congestion control in TCP including:
- TCP uses additive increase multiplicative decrease (AIMD) to dynamically adjust the congestion window size and maintain efficiency and fairness.
- TCP has slow start and congestion avoidance states that govern how the congestion window is adjusted in response to acknowledgements.
- TCP responds to packet loss through fast retransmit, fast recovery, and halving the congestion window size to reduce congestion according to protocols like Tahoe, Reno, and New Reno.
Transaction Timestamping in Temporal Databases (Gera Shegalov)
This document summarizes techniques for timestamping transactions in temporal databases to ensure consistency across distributed systems. It discusses using timestamps to serialize transactions and maintain valid transaction histories when transactions occur concurrently. Key techniques include assigning timestamps at commit time to establish order, using a read timestamp table to synchronize reads and writes, and coordinating timestamps across databases in distributed transactions using two-phase commit.
This document discusses improving the word accuracy of an automatic speech recognition (ASR) system for the Telugu language. It analyzes the substitution errors in the system using two different lexical models: one based on stress-timed English phonemes (CMU lexicon) and one handcrafted for syllable-timed Telugu (UOH lexicon). The UOH lexicon improves word accuracy by 20-30% compared to the CMU lexicon by better modeling the phonetic characteristics of Telugu. The paper also examines the effect of gender, accents, and non-native speakers on substitution errors; the resulting confusion matrices provide insight into the most commonly substituted phonemes.
This document summarizes a study examining factors that influence customer satisfaction and trust in mobile commerce (m-commerce) transactions. The study surveyed 200 m-commerce customers. Results of the analysis show that customer satisfaction with vendors was significantly influenced by ease-of-use, responsiveness, and brand image. Customer trust in vendors was affected by responsiveness, brand image, and satisfaction. The study concludes that m-commerce providers should focus on these factors to provide more satisfaction and trust among customers. Understanding these factors enables providers to better develop trust with m-commerce customers.
This document discusses security issues in cloud computing. It begins by defining cloud computing and describing its service models and deployment models. It then identifies several key security issues in cloud computing, including security, privacy, reliability, lack of open standards, compliance, and concerns about long-term viability of data. Security is identified as the top challenge according to a survey of IT executives. The document argues that more must be done to address security, privacy, and other issues in order to fully realize the potential of cloud computing.
The document proposes a Crosslayered and Power Conserved Routing Topology (CPCRT) for congestion control in mobile ad hoc networks. CPCRT aims to distinguish between packet loss due to link failure versus other causes like congestion. It takes a cross-layer approach using information from the physical, MAC, and application layers. The proposed method also aims to conserve power during packet transmission by adjusting transmission power levels based on received signal strength. Simulation results show that CPCRT can better utilize resources and conserve power during congestion control compared to other approaches.
This document summarizes how external economic factors influence policymaking and management in Sub-Saharan Africa. It discusses several challenges, including weak competitive capacity in global trade which makes African exports less competitive. It also examines how commodity price fluctuations, decreasing capital inflows, high external debt burdens, and economic shocks in other countries negatively impact African countries' ability to effectively plan and implement development policies. The document concludes that African countries need to address internal weaknesses to strengthen their ability to deal with challenges posed by the external economic environment.
1) The document analyzes the level of educational development and underlying disparities in Burdwan District, West Bengal.
2) It finds significant spatial variations in educational infrastructure, dropout rates, and never-enrolled student populations across the district's 31 blocks.
3) The western, more urbanized blocks have better infrastructure but higher dropout rates, while eastern agricultural blocks have poorer infrastructure but lower dropout rates. Factors like poverty, early marriage, and economic opportunities contribute to educational disparities.
1) The document discusses route optimization techniques for solving the triangle routing problem in Mobile IPv4, specifically evaluating the performance of the Internet Service Provider Mobile Border Gateway (ISP MBG) scheme.
2) It provides background on Mobile IP, the triangle routing problem, and introduces the ISP MBG technique for optimizing routes.
3) The study evaluates the performance of ISP MBG by varying system parameters like number of nodes and zones, finding it provides shorter transmission times compared to conventional Mobile IP.
This document discusses the application of RFID technology in libraries. It begins with an introduction to RFID and how it can automate library processes. It then discusses the benefits of RFID for libraries, staff, and patrons, including faster circulation, easier inventory management, and improved patron services. The document also covers RFID standards relevant to libraries, such as ISO 18000-3 and NCIP. It provides recommendations on RFID implementation, including a phased approach and considerations for vendor selection. Overall, the document aims to provide librarians with information on utilizing RFID technology in their libraries.
This document summarizes a research paper that proposes a feature level fusion based bimodal biometric authentication system using fingerprint and face recognition with transformation domain techniques. The system extracts fingerprint features using Dual Tree Complex Wavelet Transforms and extracts face features using Discrete Wavelet Transforms. It then concatenates the fingerprint and face features into a single feature vector. Euclidean distance is used to match test biometrics to those stored in a database. The proposed algorithm is shown to achieve better equal error rates and true positive identification rates compared to individual transformation domain techniques.
Performance Evaluation of High Speed Congestion Control Protocols - IOSR Journals
This document evaluates and compares the performance of seven high-speed congestion control protocols: Bic TCP, Cubic TCP, Hamilton TCP, HighSpeed TCP, Illinois TCP, Scalable TCP and YeAH TCP. The protocols are simulated using NS-2 with multiple flows and their efficiency, fairness, and overall performance are calculated and compared for different numbers of flows. Key metrics like throughput, link utilization, and fairness index are analyzed for the protocols under varying parameters like RTT, queue size, and number of flows. Cubic TCP and Bic TCP generally showed better performance than the other protocols based on the simulations.
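The fairness index analyzed in the simulations is typically Jain's fairness index; a minimal sketch of how it would be computed from per-flow throughputs (the function name and inputs are illustrative, not from the paper):

```python
def jain_fairness(throughputs):
    """Jain's fairness index: 1.0 means perfectly equal shares,
    and the value approaches 1/n when one flow dominates."""
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sum_sq)
```

For four flows with equal throughput the index is 1.0; if a single flow takes all the bandwidth it falls to 1/4.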
This document summarizes key aspects of TCP traffic control as covered in Chapter 12. It discusses TCP flow and congestion control, including how the transmission rate is determined by incoming ACKs. It then covers traffic control fields in the TCP header, credit allocation mechanisms, and the impact of window size on throughput. The document also summarizes TCP congestion control techniques like slow start, congestion avoidance, fast retransmit, fast recovery, and limited transmit. It notes the differentiating impact of "mice vs. elephant" flows on network congestion.
This document discusses TCP flow and congestion control in high speed networks. It covers topics such as TCP flow control using a credit allocation scheme, TCP header fields for flow control, credit allocation flexibility, effects of window size, complicating factors, retransmission strategy using timers, adaptive retransmission timer algorithms, implementation policy options, congestion control difficulties, slow start, dynamic window sizing, fast retransmit, fast recovery, limited transmit, performance of TCP over ATM networks using UBR service, effects of switch buffer size, observations, partial packet discard techniques, and TCP over ABR service.
Boosting the Performance of Nested Spatial Mapping with Unequal Modulation in... - Ealwan Lee
Presented at ICTC 2018 (9th International Conference on Information and Communication Technology Convergence)
Date : Oct 18, 2018
Place : Jeju, Korea
DOI: 10.1109/ICTC.2018.8539461
URL: https://ieeexplore.ieee.org/document/8539461
[ URL of the paper/preprint ]
https://www.researchgate.net/publication/328364760_Boosting_the_Performance_of_Nested_Spatial_Mapping_with_Unequal_Modulation_in_80211n
[ Prior works of Nested Spatial Mapping without Unequal Modulation (UEQM) ]
https://www.slideshare.net/ealwanlee/nested-mimo-lectures-in-2017-seoul
[ List of the articles related with this slide ]
https://www.linkedin.com/pulse/list-articles-nested-spatial-mapping-wlan80211n-ealwan-lee/
A short but packed course on TCP Dynamic Behavior. It starts by explaining TCP from scratch so the dynamic parts can be understood. Then it dives deep into how TCP behaves in real IP networks in the face of packet losses, delays and other phenomena.
The document discusses TCP congestion control algorithms. It describes the Additive Increase Multiplicative Decrease (AIMD) approach, where the congestion window (cwnd) grows linearly but is halved when packet loss is detected. Slow start is used to ramp cwnd up quickly at the start through exponential growth. Fast retransmit uses duplicate ACKs to detect lost packets and retransmit them before a timeout expires, and fast recovery then resumes growing cwnd after the retransmit. The document also examines adaptive retransmission timeout algorithms based on the mean and variance of measured round-trip times.
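The AIMD behaviour described here can be sketched as a one-step window update; the parameter names and default values are illustrative, not taken from the document:

```python
def aimd_update(cwnd, loss_detected, increase=1.0, decrease=0.5):
    """One RTT of AIMD: additive increase, multiplicative decrease.

    cwnd is in segments; the window never drops below one segment.
    """
    if loss_detected:
        return max(1.0, cwnd * decrease)   # halve on loss
    return cwnd + increase                 # +1 segment per loss-free RTT
```

A window of 10 segments grows to 11 after a loss-free RTT, but is cut to 5 when a loss is detected.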
TCP uses congestion control algorithms like AIMD (Additive Increase Multiplicative Decrease) to adjust the congestion window size (cwnd) in response to indications of congestion from dropped or marked packets. Cwnd is increased linearly but halved upon timeouts. Slow start is used initially and after timeouts to grow cwnd exponentially through doubling. Fast retransmit and fast recovery improve on timeouts by using duplicate ACKs to trigger early retransmits. Adaptive retransmission algorithms factor in the variability of measured RTTs.
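The adaptive retransmission idea mentioned above, factoring in RTT variability, is usually realised as in RFC 6298: a smoothed RTT mean plus four times a smoothed mean deviation. A rough sketch:

```python
class RtoEstimator:
    """Adaptive retransmission timeout from smoothed RTT statistics,
    following the shape of RFC 6298 (gains ALPHA = 1/8, BETA = 1/4)."""
    ALPHA, BETA = 1 / 8, 1 / 4

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, r):
        """Feed one RTT measurement r (seconds); return the new RTO."""
        if self.srtt is None:                  # first measurement
            self.srtt, self.rttvar = r, r / 2
        else:                                  # update variance with the old mean
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
        return self.srtt + 4 * self.rttvar
```

With steady 100 ms samples the variance term decays, so the RTO shrinks toward the measured RTT; a sudden RTT spike inflates it again.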
- The document discusses simulation of FAST TCP congestion control algorithm using the OMNET++ discrete event simulation framework.
- It provides background on TCP congestion control algorithms like Tahoe and Reno and their problems. FAST TCP is presented as an enhancement that uses a delay-based approach for congestion window updates.
- The implementation details of FAST TCP within OMNET++ are outlined, including per ACK and per RTT congestion window update rules based on measured delay. Issues with fairness when used with other variants and priority queues are also noted.
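The per-RTT, delay-based window update usually attributed to FAST TCP is w <- min(2w, (1 - g)w + g((baseRTT/RTT) * w + a)); a sketch under that assumption, with illustrative values for the a and g parameters:

```python
def fast_tcp_window(w, base_rtt, rtt, alpha=100, gamma=0.5):
    """Per-RTT FAST TCP window update (delay-based).

    base_rtt is the minimum observed RTT; alpha is the target number of
    packets buffered in the network; gamma smooths the update.
    """
    target = (base_rtt / rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)
```

When RTT equals baseRTT (no queueing), the window grows by gamma * alpha per RTT; once queueing delay makes (1 - baseRTT/RTT) * w equal alpha, the window stops changing, which is the equilibrium the delay-based approach aims for.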
DCTCP is an enhancement to TCP for data center networks that leverages ECN to provide more granular congestion feedback than standard TCP. It estimates the fraction of marked packets to react to the extent of congestion rather than just its presence. This finer control allows DCTCP to maintain low buffer occupancy while achieving high throughput. The algorithm adapts the window size based on the marked packet ratio in each RTT to be more responsive to congestion than standard TCP. DCTCP addresses the key requirements for data center networks of handling bursty traffic, low latency, and high throughput.
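The DCTCP reaction described above, cutting the window in proportion to the estimated fraction of ECN-marked packets, can be sketched as follows (the gain g = 1/16 follows the common DCTCP recommendation; the function shape is ours):

```python
def dctcp_update(cwnd, alpha, marked, total, g=1 / 16):
    """One-RTT DCTCP update from the fraction F of ECN-marked packets.

    alpha is the running estimate of congestion extent; the window is
    cut by alpha/2, so light marking causes only a small reduction.
    """
    f = marked / total if total else 0.0
    alpha = (1 - g) * alpha + g * f        # smooth the marked fraction
    if marked:                             # react in proportion to congestion
        cwnd = max(1.0, cwnd * (1 - alpha / 2))
    return cwnd, alpha
```

With no marks the window is untouched; with every packet marked and alpha saturated at 1, the update degenerates to standard TCP's halving.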
This document summarizes TCP congestion control. It discusses how TCP uses a congestion window and detects congestion via timeouts or duplicate ACKs to dynamically control transmission rates. TCP employs slow start, additive increase, and fast recovery policies. Slow start exponentially increases the window size initially. Additive increase grows the window slowly, by 1/cwnd per ACK. Fast recovery allows quick recovery from single losses. TCP switches between these policies, and its average throughput can be derived from the additive-increase, multiplicative-decrease sawtooth of the window over time.
TCP provides reliable data transmission through mechanisms like the three-way handshake, congestion control using AIMD, and fast retransmit. However, it is vulnerable to attacks like RST injection to terminate connections or FIN scans to detect open ports. Defenses include randomizing sequence numbers, stateful firewalls to validate packets, and intrusion detection systems to detect scanning behaviors.
This document discusses transport layer protocols, specifically UDP and TCP. It provides an overview of each protocol, including key properties and operations. UDP allows applications to communicate via multiplexing but does not ensure delivery. TCP establishes reliable, in-order connections and implements flow and congestion control to ensure reliable data transfer. The document outlines TCP connection establishment, data transfer methods, congestion control algorithms, and retransmission timeout estimation.
This chapter discusses TCP traffic control and congestion control. It introduces TCP flow control using a credit allocation scheme and sliding window mechanism. TCP congestion control dynamically adjusts the transmission window size in response to packet loss to control bandwidth. Key TCP congestion control algorithms discussed are slow start, congestion avoidance, fast retransmit, and fast recovery. The performance of TCP over ATM networks is also examined.
Improvement of Congestion window and Link utilization of High Speed Protocols... - IOSR Journals
This document summarizes a research paper that proposes using a k-nearest neighbors (k-NN) algorithm to help high-speed transport layer protocols like CUBIC better distinguish between packet drops due to network congestion versus other factors like noise. The k-NN algorithm would analyze patterns in packet drop history to classify new drops, helping protocols avoid unnecessary window size reductions when drops are not actually due to congestion. The document provides background on high-speed protocols, issues like underutilization from treating all drops as congestion, and how incorporating k-NN classification could improve protocols' performance in noisy network conditions.
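The k-NN classification idea described here can be sketched with a toy distance-vote classifier. The drop features used below (the coordinates) are purely illustrative, since the paper's actual feature set is not given in this summary:

```python
from collections import Counter
import math

def knn_classify(history, features, k=3):
    """Label a new packet drop as 'congestion' or 'noise' by majority
    vote among the k nearest drops in a labeled history.

    history: list of (feature_vector, label) pairs;
    features: feature vector of the new drop (e.g. inter-drop gap,
    RTT trend -- hypothetical features for illustration).
    """
    dists = sorted((math.dist(f, features), label) for f, label in history)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

A drop whose features sit near past noise-induced drops is voted "noise", so the protocol could skip the window reduction it would otherwise perform.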
Employing non-orthogonal multiple access scheme in UAV-based wireless networks - journalBEEI
This paper studies two-hop transmission via unmanned aerial vehicle (UAV) relays, which is well suited to internet of things (IoT) systems. To overcome the large-scale fading between the base station (BS) and the destination, and to achieve higher spectral efficiency, non-orthogonal multiple access (NOMA) is applied at the UAV relays to support massive-connection transmission. In particular, outage probability is evaluated against a signal-to-noise ratio (SNR) criterion so that the terminal node can obtain reasonable performance. The derivations and analysis show that the considered fixed power allocation scheme produces a performance gap between the two signals at the destination, and numerical simulation confirms the exactness of the derived expressions for the UAV-assisted system.
This paper investigates fairness among network sessions that use the Multiplicative Increase Multiplicative Decrease (MIMD) congestion control algorithm. It first studies how two MIMD sessions share bandwidth in the presence of synchronous and asynchronous packet losses. It finds that rate-dependent losses lead to fair sharing, while rate-independent losses cause unfairness. The paper also examines fairness between sessions using MIMD (e.g. Scalable TCP) versus Additive Increase Multiplicative Decrease (AIMD, e.g. standard TCP). Simulations show the AIMD sessions converge to equal throughput, while MIMD sessions' throughput depends on initial conditions. Adding rate-dependent losses can achieve fairness between the two.
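The synchronous-loss unfairness result described above is easy to reproduce in a toy simulation: when both MIMD flows are scaled by the same factors on every step, their rate ratio never changes, so an initial imbalance persists forever (the parameters below are illustrative):

```python
def mimd_pair(r1, r2, capacity, a=1.01, b=0.5, steps=1000):
    """Two MIMD flows sharing one link: rates grow geometrically by a,
    and a synchronous (rate-independent) loss halves both on overflow.
    """
    for _ in range(steps):
        r1, r2 = r1 * a, r2 * a
        if r1 + r2 > capacity:       # synchronous loss hits both flows
            r1, r2 = r1 * b, r2 * b
    return r1, r2
```

Starting one flow at four times the other's rate leaves it with four times the rate after any number of steps, illustrating why rate-independent losses cannot equalize MIMD sessions.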
This document summarizes a presentation on congestion control in TCP/IP networks. It discusses basics of congestion and how it can be catastrophic if not handled. It then describes the basic strategies used by TCP to combat congestion, including slow start, congestion avoidance, detection, and illustration of algorithms like fast retransmit and recovery. Issues with wireless networks and variants of TCP like New Reno, Vegas, and Westwood are also summarized. The presentation proposes a new congestion control algorithm and discusses plans to simulate and test it.
The document describes the design of a 10-bit, 50MS/s pipelined analog-to-digital converter (ADC) implemented in a 180nm CMOS process. Key aspects of the design include a flip-around track-and-hold amplifier, 1.5-bit per stage pipeline stages with gain of 2, and operational transconductance amplifiers optimized for power efficiency. Simulation results show the ADC achieves 9.64 effective bits, consumes 24.5mW of power, and has a figure of merit better than prior published pipelined ADCs of similar resolution and speed.
Lecture 19 22. transport protocol for ad-hoc - Chandra Meena
This document discusses transport layer protocols for mobile ad hoc networks (MANETs). It begins with an introduction to MANETs and the need for new network architectures and protocols to support new types of networks. It then provides an overview of TCP/IP and how TCP works, including congestion control mechanisms. The document discusses challenges for TCP over wireless networks, where packet losses are often due to errors rather than congestion. It covers different versions of TCP and their approaches to congestion control. The goal is to design transport layer protocols that can address the unreliable links and frequent topology changes in MANETs.
This document provides an overview of TCP congestion control algorithms. It describes the basic additive increase/multiplicative decrease approach and key mechanisms like slow start, fast retransmit, and fast recovery. It also discusses algorithms for setting the retransmission timeout value and adaptations made in protocols like New Reno and Cubic.
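Cubic, mentioned at the end, replaces the linear increase with a cubic growth function centered on the window size at the last loss event, W_max; a sketch using the constants from RFC 8312 (C = 0.4, beta = 0.7):

```python
def cubic_window(t, w_max, c=0.4, beta=0.7):
    """CUBIC growth function W(t) = C*(t-K)^3 + W_max (RFC 8312).

    t is seconds since the last loss; the window starts at beta*W_max,
    grows concavely toward W_max, plateaus there, then probes convexly.
    """
    k = ((w_max * (1 - beta)) / c) ** (1 / 3)   # time to return to W_max
    return c * (t - k) ** 3 + w_max
```

This shape is why Cubic holds the window near the previous saturation point before probing for more bandwidth, independently of RTT.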
This document summarizes a study on the economic prospects and human rights violations associated with shrimp farming in coastal regions of Bangladesh. It finds that while shrimp farming contributes significantly to Bangladesh's economy through exports and jobs, it has also led to environmental degradation and various human rights issues. Specifically, the study found reports of land conflicts, violence against women, restrictions on access to common areas, blocked canals interfering with water management, loss of agricultural land, and poor labor conditions like low wages, long hours, and unsafe working environments. Overall, the document examines both the economic benefits of the shrimp industry but also its negative social and human rights impacts.
This document discusses effective communication and common mistakes made in spoken and written English. It emphasizes that mistakes are opportunities to learn and should not be seen as embarrassing. While accuracy is important, the main goal of communication is to convey meaning clearly. The document outlines strategies for effective speaking, such as maintaining eye contact and developing listening skills. It also discusses challenges faced by some English learners in pronouncing certain sounds correctly. Overall, the document promotes focusing on intelligible communication over perfection and avoiding unnecessary bias or offense.
1) This document discusses the debate among Iranian religious intellectuals regarding modernization and their approaches to balancing tradition and modernity.
2) It outlines two major groups - Western-minded thinkers who emphasize separating tradition from modernity, and religious thinkers who seek to combine the two.
3) The document also summarizes the key arguments made by supporters of modernization, such as the neutrality of science, religion's emphasis on human progress, and that interaction between civilizations and modernization can aid development. It then summarizes the arguments made by opponents, such as the partiality of science and doubts that modernization alone can achieve social development.
This document summarizes a study on the extension service needs of catfish farmers in Oyo State, Nigeria. The study found that most catfish farmers were male, between 30-50 years old, and had primary education. Radio, friends/relatives, and extension agents were the most important information sources. The top extension service needs were marketing, stocking times, and credit access. The major challenges were poor weather, lack of credit, and high feed costs. The study recommends improved extension services, economic groups, credit access, and dissemination of best practices to enhance catfish production.
1. The study aimed to identify the effect of domestic violence on speech and pronunciation disorders in children in basic education in Ajloun governorate, Jordan.
2. The study found that parents used neglect and emotional violence against their children. Parents also punished children for using inappropriate words.
3. The study revealed significant differences in domestic violence between males and females, favoring males. Differences were also found based on birth order, favoring first born for emotional violence.
This document summarizes a study on labor relations practices in Assam's tea industry, with a focus on Jorhat District. It finds that workers have varying degrees of dissatisfaction across public, private, and government-owned tea estates. Workers were surveyed on topics like recruitment, selection, training, transfers, promotions, wages, and more. The study aims to identify strong areas and problems to improve labor relations. Key findings include high dissatisfaction among workers of Dhekiajuli Tea Estate regarding recruitment procedures and selection policies. Overall, the study examines labor relations in the tea industry and how satisfaction levels differ between estate types in Assam.
This document discusses the humanistic approach to teaching English as a foreign language. It outlines four main methodologies associated with the humanistic approach: the silent way, community language learning, suggestopaedia, and total physical response. These methods aim to engage students holistically and reduce anxiety around language learning. Classroom practices for these methods include relaxation exercises, role-playing scenarios, games, and peer work. A study in India found that students had the greatest improvements in English skills during the first semester using these humanistic methods, showing their effectiveness. The humanistic approach aims to cultivate student motivation and a childlike openness to learning.
This document analyzes pulses production in sample villages of the Assan Valley region of Uttarakhand, India. It finds that the area and production of pulses, especially winter pulses like lentils and chickpeas, has drastically declined from 1990-2007. Through surveys of 275 farmers, the study identifies key constraints on pulses production including biotic factors like insect pests and diseases, abiotic factors like climate and rainfall, lack of access to inputs, weak extension services, and lack of market access. The rotation of pulses like chickpeas and pigeon peas with crops like rice and wheat was found to reduce chemical fertilizer use and increase outputs of those staple crops.
The document summarizes a study on gender differences in marital adjustment, mental health, and frustration reactions during middle age. The study was conducted in Delhi, India with 150 males and 150 females between ages 40-55 who were bank employees, doctors, or lecturers. It was found that females had higher levels of recreational adjustment than males, while males had a more group-oriented attitude than females. The study aimed to understand how marital adjustment, mental health, and reactions to frustration differed between males and females during middle age.
This document summarizes the views of two Iranian intellectuals, Ayatollah Morteza Motahari and Dr. Abdol-Karim Soroush, on the compatibility of Islam and democracy. Motahari represented religious reformists who sought to adapt modern concepts to religious texts. Soroush was a modernist who believed religion must renew itself to engage with modern life, not the other way around. Both supported an Islamic democratic state where the people choose leaders, but Soroush argued for greater limitations on clerical power and more emphasis on popular sovereignty and human political concepts over strict religious governance. The document examines their differing approaches to integrating democracy and Islam.
The document summarizes a study that investigated how blended scaffolding strategies through Facebook could aid learning and improve the writing process and performance of ESL students.
The study used a mixed methods approach, collecting both quantitative data through pre- and post-writing tests as well as qualitative data from student essays and interviews. Students received either traditional instruction alone (control group) or traditional instruction plus supplemental scaffolding through Facebook (experimental group).
Initial interview findings suggested students preferred the blended approach and felt it could help with learning, clarifying questions after school, generating ideas, editing work, and ultimately improving their writing and grades. The study aimed to determine if supplemental Facebook scaffolding positively impacted writing outcomes.
This document summarizes a study on rural health care in Thoubal District, Manipur, India. It finds that while India's constitution recognizes health as a primary duty, rural populations still lack adequate access to health care due to factors like poverty, lack of infrastructure, and social/psychological barriers. The study aims to evaluate health care facilities and services in Thoubal District, examine factors influencing access to primary health care, and assess the quality of services provided by health care workers to rural communities. It analyzes key health indicators for Manipur from the National Family Health Survey and finds that while material well-being is low, Manipur has relatively good public health outcomes, such as low infant mortality.
This document summarizes key points for socio-economic development in Aceh, Indonesia following conflict. It recommends:
1) Developing through participatory planning that engages local communities and innovation.
2) Ensuring political stability and peace by addressing injustices and providing jobs for ex-fighters.
3) Prioritizing micro-economic policies like entrepreneurship programs and credit facilities to revive small businesses.
This document summarizes a research study on the impact of microfinance banks on the standard of living of hairdressers in Oshodi-Isolo local government area of Lagos State, Nigeria. The study aims to examine how microfinance banks have impacted hairdressers' businesses and their ability to acquire assets and save. It involved surveying 120 hairdressers registered with the local government. The results found a significant relationship between microfinance efforts and the hairdressers' standard of living, indicating that microfinance has helped reduce poverty somewhat among this group. The study recommends that government ensure microfinance loans are easily obtainable with reasonable repayment schedules.
1) The document discusses the challenges faced by contemporary Indian society, including poverty, gender discrimination, corruption, illiteracy, global warming, and war. It then examines the role of NGOs in addressing these issues, such as alleviating poverty, empowering women, fighting corruption, providing education, and creating awareness about global warming.
2) The paper also identifies internal challenges NGOs face, like lack of commitment from staff, insufficient training facilities, and misappropriation of funds. External challenges include difficulties with fundraising, low community participation, and lack of trust in NGOs.
3) In conclusion, the role of NGOs is seen as tremendous in providing services to vulnerable groups. However, these internal and external challenges limit their effectiveness and need to be addressed.
1. The document analyzes science performance and dropout rates in France based on PISA test results from 2006-2009 compared to other developed countries.
2. While France achieved average results in math, its science scores remained below average and did not improve from 2006-2009. Dropout rates in France are about 11%.
3. The study finds that elementary and secondary curricula in France allocate fewer weekly hours to science compared to other core subjects, which may contribute to lower performance and higher dropout rates in science. Remedies discussed include improving teaching quality and fostering students' self-perception in science.
This document analyzes the current status of space law and conventions regarding sovereignty in outer space. It discusses key treaties like the Outer Space Treaty of 1967 and the Moon Treaty of 1979. While these treaties established some framework, many challenges remain unaddressed. Issues around defining boundaries between airspace and outer space, liability for damage, and jurisdiction over objects in space continue to be debated. The document concludes more work is still needed to harmonize regulations and reduce ambiguity regarding sovereignty and activities in outer space.
Gender discrimination in Pakistan threatens its security and progress. Women make up over half the population but face inhumane treatment through domestic violence, forced marriages, honor killings, and lack of access to education and jobs. Discrimination is deeply rooted in society and denies women their identity, treating them as property of fathers or husbands. To improve security and prosperity, Pakistan must eliminate discrimination and empower women through education, employment, and participation in decision making.
This document discusses the concept of God in the works of Tennessee Williams and Rabindranath Tagore. While from different cultures and born decades apart, both authors deeply explored human nature and spirituality. The document analyzes Williams' play "The Night of the Iguana" in depth, noting its religious symbols and exploration of faith through characters like Shannon. It also briefly discusses Tagore's views on evil and the nature of God. Overall, the document examines how both authors conveyed spiritual questions and themes in their work despite coming from varied backgrounds.
The document summarizes a study on intra-household labor distribution and the role of women in family decision making in Bangladesh. It analyzed 3 samples of households and found that:
1) Male members spent more time on productive work like crops and livestock while females spent more on reproductive work.
2) Females spent significant time on productive work as well and their workload increased after joining a poverty-reduction project.
3) After joining the project, 50% of females in some households became more involved in family decision making.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Generating privacy-protected synthetic data using Secludy and Milvus - Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
5th LF Energy Power Grid Model Meet-up Slides - DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Driving Business Innovation: Latest Generative AI Advancements & Success Story - Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
IOSR Journal of Computer Engineering (IOSRJCE)
ISSN: 2278-0661, Volume 3, Issue 1 (July-Aug. 2012), PP 13-19
www.iosrjournals.org
Performance Evaluation of High Speed Congestion Control
Protocols
Mohamed Ali Mani and Rachid Mbarek
ABSTRACT: Computer networks are facing several technological and scientific challenges. Among these challenges is congestion control. It limits the rate at which information enters the network to below the transmission rate, in order to ensure good performance and to protect the network against overload and blocking. Researchers have done a great deal of work on improving congestion control protocols, especially for high speed networks.
In this paper, we study congestion control along with low and high speed congestion control protocols. We also simulate, evaluate, and compare seven high speed congestion control protocols: Bic TCP, Cubic TCP, Hamilton TCP, HighSpeed TCP, Illinois TCP, Scalable TCP and YeAH TCP, with multiple flows.
KEYWORDS: HIGH-SPEED, TCP PROTOCOLS, CONGESTION CONTROL, PERFORMANCE, MULTIPLE FLOWS.
I. INTRODUCTION
Due to the ease of finding, accessing and using information in multimedia pages with a simple mouse move, the Internet has become necessary in our daily lives. In fact, the number of Internet users is growing rapidly: according to the ITU (International Telecommunication Union), it reached 500 million users in the early 2000s, 1.6 billion in 2008, and 2 billion in 2011.
This increase must be accompanied by a parallel development of both hardware and software. Early on, throughput was about 56 kbps; now we talk about Gigabit Internet, aiming to meet users' needs, mainly in the astronomy and video conferencing fields.
On the Internet, some machines are responsible for transmitting data packets, and others take these packets off the queue. If the sender transmits data packets at a rate much higher than the receiver can absorb, network congestion is produced. Hence, congestion control limits the quantity of input information to a rate lower than the transmission one, to guarantee good performance as well as to protect the network against overloading and blocking.
The high number of high speed congestion control protocols led us to prepare this research work, which focuses on evaluating and comparing high speed congestion control protocols.
This paper is organized as follows: in the second section we review the state of the art. In the third, we present the architectures used as well as the curves and the performance evaluation of the different high speed congestion control protocols.
STATE OF THE ART
Researchers have worked on the enhancement of high speed congestion control protocols. Practically every year, one or two protocols are implemented, each with its own specific strengths and weaknesses.
Recently, research works have been interested in evaluating the performance of these protocols by comparing between 2 and 5 protocols with 1 to 12 flows [1] [2] [3] [4] [5].
HS-TCP [6]: If the congestion window is low, HS-TCP behaves just like standard TCP; once the congestion window exceeds max_ssthresh, it rises in an aggressive way. The algorithm of HS-TCP is an AIMD: Additive Increment cwnd = cwnd + a(cwnd), Multiplicative Decrement cwnd = (1 − b(cwnd)) × cwnd. The values used for HS-TCP are a(cwnd) in the interval [1, 72], b(cwnd) in the interval [0.1, 0.5], and low_window equals 38 packets. When cwnd is lower than low_window, a(cwnd) = 1 and b(cwnd) = 0.5, so the algorithm acts just like standard TCP and, in the case of a packet loss, the cwnd is reduced by 50%. When cwnd reaches high_window, a(cwnd) = 72 and b(cwnd) = 0.1.
If there is an acknowledgment:
w1 = ((1 + β) / 2) × cwnd    if cwnd < w1
w1 = cwnd                    if cwnd ≥ w1
w2 = cwnd
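As a minimal sketch, the HS-TCP update rules can be written down directly. The log-linear interpolation of a(cwnd) and b(cwnd) between (1, 0.5) at low_window and (72, 0.1) at high_window, as well as the high_window value of 83000 segments, are assumptions standing in for the exact RFC 3649 lookup tables:

```python
import math

LOW_WINDOW, HIGH_WINDOW = 38, 83000  # segments; high_window is an assumed value

def hstcp_params(cwnd):
    """Return (a, b) for the current window. Below low_window the protocol
    acts like standard TCP: a = 1, b = 0.5. Above it, a and b are
    interpolated log-linearly up to (72, 0.1) at high_window; this
    interpolation is a sketch, not the exact RFC 3649 table."""
    if cwnd <= LOW_WINDOW:
        return 1.0, 0.5
    frac = min(1.0, math.log(cwnd / LOW_WINDOW) / math.log(HIGH_WINDOW / LOW_WINDOW))
    return 1.0 + frac * (72.0 - 1.0), 0.5 + frac * (0.1 - 0.5)

def hstcp_on_ack(cwnd):
    a, _ = hstcp_params(cwnd)
    return cwnd + a / cwnd        # additive increase of a(cwnd) spread over one RTT

def hstcp_on_loss(cwnd):
    _, b = hstcp_params(cwnd)
    return (1.0 - b) * cwnd       # multiplicative decrease by b(cwnd)
```

Below low_window, these functions reproduce standard TCP behaviour exactly (halving on loss, +1 per RTT).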
Scalable TCP [7]: For this protocol, the cwnd increases by 0.01 per acknowledgment, and when a congestion is detected the cwnd is reduced by a factor of 0.125, independently of the throughput and the packet size. This algorithm is MIMD: Multiplicative Increase, since an increase of 0.01 per acknowledgment yields a total increase of 0.01 × cwnd per RTT, and Multiplicative Decrease, since the cwnd decreases by 0.125 × cwnd per packet loss.
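The Scalable TCP rule reduces to two one-line updates; a minimal sketch of the per-ACK increase and per-loss decrease described above:

```python
def scalable_on_ack(cwnd):
    """Multiplicative increase: +0.01 per ACK; since roughly cwnd ACKs
    arrive per RTT, this amounts to +0.01 * cwnd per RTT."""
    return cwnd + 0.01

def scalable_on_loss(cwnd):
    """Multiplicative decrease: reduce the window by 0.125 * cwnd."""
    return cwnd * (1 - 0.125)
```

Because both the per-RTT gain and the loss backoff scale with cwnd, the recovery time after a loss is independent of the window size, which is the point of the scalable design.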
BIC TCP [8]: uses a function that allows a rapid increase of cwnd when its value is far from the maximal threshold Smax, and a slow increase when its value is close to w1. In the case of a packet loss:
cwnd = (1 − β) × cwnd
When receiving an acknowledgment, the increase function is:
fα(δ, cwnd) =
    β/δ                    if δ ≤ 1 and cwnd < w1, or w1 ≤ cwnd < w1 + β/(β − 1)
    δ                      if 1 < δ ≤ Smax and cwnd < w1
    (cwnd − w1)(β − 1)/β   if β/(β − 1) ≤ cwnd − w1 < Smax/(1 − β)
    Smax                   otherwise
With :
cwnd : Current congestion window
Smax : Maximal threshold
Smin : Minimal threshold
β : Decrease factor, usually equals 0.875
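The behaviour described above can be sketched through the binary-search core of BIC as presented in [8]: probe the midpoint between the current window and the last loss point, clamping the per-RTT increment between Smin and Smax. The constant values below are illustrative assumptions, not values taken from the text:

```python
SMIN, SMAX, BETA = 0.01, 32.0, 0.875   # illustrative increment bounds and decrease factor

def bic_increment(cwnd, w_max):
    """Per-RTT increment: probe the midpoint between cwnd and the window
    at the last loss (w_max), clamped to [SMIN, SMAX]. Beyond w_max the
    window probes symmetrically away from it (max probing)."""
    delta = (w_max - cwnd) / 2.0 if cwnd < w_max else cwnd - w_max
    return min(SMAX, max(SMIN, delta))

def bic_on_loss(cwnd):
    """Multiplicative decrease: remember the loss point and shrink cwnd."""
    return BETA * cwnd, cwnd   # (new cwnd, new w_max)
```

Far from w_max the increment saturates at SMAX (fast growth); close to w_max it shrinks toward SMIN (slow, careful growth), which is exactly the shape the fα function encodes.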
CUBIC TCP [9]: The name CUBIC refers to the congestion window increase function, which is cubic. When receiving an ACK:
cwnd = C (T − K)³ + Maxcwnd
When a packet loss occurs:
cwnd = β × Maxcwnd
With :
cwnd : Current congestion window
C : Constant
T : Duration of time since the last congestion
Maxcwnd : Size of the congestion window at the last congestion event
K : (Maxcwnd × β / C)^(1/3)
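The cubic growth curve follows directly from the two formulas above. The values C = 0.4 and β = 0.2 are illustrative assumptions (the text gives no constants); with the text's definition of K, the curve starts at (1 − β) × Maxcwnd just after a loss and reaches Maxcwnd at T = K:

```python
def cubic_cwnd(t, max_cwnd, C=0.4, beta=0.2):
    """Congestion window t seconds after the last congestion event:
    cwnd = C (t - K)^3 + Maxcwnd, with K = (Maxcwnd * beta / C)^(1/3).
    C and beta are illustrative values. The curve grows fast at first,
    flattens near t = K where it reaches Maxcwnd (the last loss point),
    then probes aggressively beyond it."""
    K = (max_cwnd * beta / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + max_cwnd
```

Note that the growth rate depends only on the time since the last loss, not on the RTT, which is what gives CUBIC its RTT fairness.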
HTCP [10]: uses Δ, the duration since the last congestion, rather than cwnd, as information about the bandwidth-delay product (BDP). The increase parameter of AIMD (additive-increase/multiplicative-decrease) varies depending on Δ. It also depends on the value of the RTT, which provokes unfairness among competing flows with different RTTs.
In case of receiving an ACK:
cwnd = cwnd + (2 (1 − β) fα(Δ)) / cwnd
In case of packet loss:
cwnd = gβ(B) × cwnd
With :
fα(Δ) = 1                        if Δ ≤ ΔL
fα(Δ) = max(f̄α(Δ) × Tmin , 1)    if Δ > ΔL
gβ(B) = 0.5                          if |B(k+1) − B(k)| / B(k) > ΔB
gβ(B) = min(Tmin / Tmax , 0.8)       otherwise
cwnd : Current congestion window
α : Increase parameter
β : Decrease parameter
Δ : Duration of time since the last congestion
ΔL : Threshold for switching from low speed to high speed mode
Tmin : Minimal duration of sending and receiving packets
Tmax : Maximal duration of sending and receiving packets
B(k+1) : Maximal throughput reached before the congestion
f̄α(Δ) = 1 + 10 (Δ − ΔL) + 0.25 (Δ − ΔL)²
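The two H-TCP response functions can be sketched as follows; the default values for ΔL, Tmin and ΔB are illustrative assumptions, not parameters taken from the text:

```python
def htcp_f_alpha(delta, delta_L=1.0, t_min=0.1):
    """Increase parameter: 1 below the threshold Delta_L (standard TCP
    behaviour), otherwise the quadratic ramp 1 + 10(D - D_L) + 0.25(D - D_L)^2
    scaled by Tmin and floored at 1."""
    if delta <= delta_L:
        return 1.0
    ramp = 1 + 10 * (delta - delta_L) + 0.25 * (delta - delta_L) ** 2
    return max(ramp * t_min, 1.0)

def htcp_g_beta(b_prev, b_now, t_min, t_max, delta_B=0.2):
    """Decrease factor: fall back to a conservative 0.5 after a relative
    throughput change larger than delta_B, otherwise adapt the backoff
    to min(Tmin/Tmax, 0.8)."""
    if abs(b_now - b_prev) / b_prev > delta_B:
        return 0.5
    return min(t_min / t_max, 0.8)
```

The longer a flow goes without congestion, the larger fα grows, so aggressiveness scales with elapsed time rather than with the window size.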
www.iosrjournals.org 14 | Page
3. Performance Evaluation Of High Speed Congestion Control Protocols
Illinois TCP [11]: Similar to standard TCP, Illinois-TCP increases the congestion window using two parameters α and β, and relies on packet loss to define the congestion window value; however, the values of α and β are not constant, as they are adapted using the measured delays.
When receiving an ACK:
cwnd = cwnd + α / cwnd
When losing packets:
cwnd = (1 − β) × cwnd
With :
α = f1(da) = αmax               if da ≤ d1
α = f1(da) = k1 / (k2 + da)     otherwise
β = f2(da) = βmin               if da ≤ d2
β = f2(da) = k3 + k4 da         if d2 < da < d3
β = f2(da) = βmax               otherwise
k1 = ((dm − d1) αmin αmax) / (αmax − αmin)
k2 = ((dm − d1) αmin) / (αmax − αmin) − d1
k3 = (βmin d3 − βmax d2) / (d3 − d2)
k4 = (βmax − βmin) / (d3 − d2)
αmin = k1 / (k2 + dm) ,  αmax = k1 / (k2 + d1)
βmin = k3 + k4 d2 ,  βmax = k3 + k4 d3
where da denotes the average queueing delay, d1, d2 and d3 are delay thresholds, and dm is the maximal delay.
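The delay-based adaptation of α and β can be sketched directly from the definitions above. The constants k1..k4 follow from the continuity conditions at the thresholds; the parameter values used as defaults here are illustrative assumptions:

```python
def illinois_alpha(d_a, d1, d_m, a_min=0.3, a_max=10.0):
    """alpha = f1(d_a): a_max while the average queueing delay d_a stays
    below d1, then the hyperbolic decay k1/(k2 + d_a), which reaches
    a_min at the maximal delay d_m (continuity fixes k1 and k2)."""
    k1 = (d_m - d1) * a_min * a_max / (a_max - a_min)
    k2 = (d_m - d1) * a_min / (a_max - a_min) - d1
    return a_max if d_a <= d1 else k1 / (k2 + d_a)

def illinois_beta(d_a, d2, d3, b_min=0.125, b_max=0.5):
    """beta = f2(d_a): b_min below d2, b_max above d3, and the linear
    ramp k3 + k4 * d_a in between (continuity fixes k3 and k4)."""
    if d_a <= d2:
        return b_min
    if d_a >= d3:
        return b_max
    k4 = (b_max - b_min) / (d3 - d2)
    k3 = (b_min * d3 - b_max * d2) / (d3 - d2)
    return k3 + k4 * d_a
```

Low delay thus means large additive increase and gentle backoff; as queueing delay builds up, the protocol turns cautious before loss even occurs.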
YeAH TCP [12]: the algorithm starts with Slow Start while cwnd < ssthresh, and switches to Scalable TCP once cwnd reaches ssthresh. This protocol uses two modes, a fast mode and a slow mode. In the fast mode, the congestion window increases aggressively; in the slow mode, the protocol behaves similarly to Reno TCP. The mode is chosen according to the queue status. With RTTbase the minimal RTT measured by the sender and RTTmin the minimal RTT estimated from the current window, the estimated queueing delay is:
RTTqueue = RTTmin − RTTbase
Thanks to this value, the number of packets on hold in the queue can be defined as follows:
Q = RTTqueue × G = RTTqueue × (cwnd / RTTmin)
where G is the throughput.
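The queue estimate and the resulting mode choice can be sketched as follows; the backlog threshold q_max is an illustrative assumption (the exact YeAH threshold is not given in the text above):

```python
def yeah_queue_estimate(cwnd, rtt_min, rtt_base):
    """Estimated number of packets the flow keeps in the bottleneck queue:
    Q = RTTqueue * G, with RTTqueue = RTTmin - RTTbase and G = cwnd / RTTmin."""
    rtt_queue = rtt_min - rtt_base
    return rtt_queue * cwnd / rtt_min

def yeah_mode(cwnd, ssthresh, q, q_max=80):
    """Mode selection sketch: Slow Start below ssthresh, aggressive fast
    mode while the estimated backlog q stays under q_max, and a
    Reno-like slow mode once the queue builds up."""
    if cwnd < ssthresh:
        return "slow-start"
    return "fast" if q < q_max else "slow"
```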
II. SIMULATION AND PERFORMANCE EVALUATION
2.1. Architecture
In order to simulate the high speed congestion control protocols, we have chosen a topology (as shown in figure 1) composed of a sender and a receiver linked through two routers. The sender-router and router-receiver lines have a bandwidth of 1 Gbps and a delay of 1 ms. The routers are linked to each other by a line with a bandwidth of 200 Mbps, a delay of 92 ms and a queue capacity of exactly 100 packets [13]. The MSS size equals 1460 bytes. The difference between the bandwidth capacities provokes congestion.
Figure 1. Basic topology
To evaluate the performance, we have used 2, 4, 6, 8, 10, 12, 16 and 24 identical flows (as shown in figure 2) with a link capacity of 1 Gbps and a delay of 1 ms.
Figure 2. Topology with multiple flows
2.2. TCP-Friendliness
To test the TCP-friendliness of the different protocols, we have used a multiple-flow topology with 4 input flows triggered at the same time. The results are shown in the following graphs:
Figure 3.a. HS TCP : Throughput variation for 4 flows
Figure 3.b. Scalable TCP : Throughput variation for 4 flows
Figure 3.c. Bic TCP : Throughput variation for 4 flows
Figure 3.d. Hamilton TCP : Throughput variation for 4 flows
Figure 3.e. Cubic TCP : Throughput variation for 4 flows
Figure 3.f. Illinois TCP : Throughput variation for 4 flows
Figure 3.g. YeAH TCP : Throughput variation for 4 flows
2.3. Efficiency
The efficiency (rate of utilization, or performance) of a network is its percentage of utilization [14]. Mathematically, it is the ratio of the average throughput to the optimal throughput. An efficient network uses the maximum of its capacity.
Let qi denote the average throughput of flow i. The average throughput of a source with n flows is:
Q = (1/n) × Σ(i=1..n) qi
We calculate the ratio between the average throughput and the optimal one, which is the throughput of an ideal network:
E = Q / Qopt
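The two formulas combine into a short helper; units are arbitrary as long as the per-flow throughputs and the optimal throughput use the same ones:

```python
def efficiency(throughputs, q_opt):
    """E = Q / Q_opt, where Q = (1/n) * sum(q_i) is the mean per-flow
    throughput and Q_opt the throughput an ideal network would achieve."""
    q_mean = sum(throughputs) / len(throughputs)
    return q_mean / q_opt
```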
We have used these two topologies to simulate the seven protocols for 1, 2, 4, 6, 8, 10, 12, 16 and 24 flows.
The efficiency results are shown in the following graph:
Figure 4. Efficiency for different flow numbers
2.4. Fairness
Fairness is the attempt to share the network capacities among users in a fair way.
To measure fairness, we use the index proposed by Raj Jain [15], widely used in the networking field. Here is the procedure that allows us to calculate the fairness of a proposed algorithm:
Suppose an algorithm provides the distribution vi = [x1, x2, …, xn] instead of the optimal distribution vopt = [x1,opt, x2,opt, …, xn,opt]. We calculate the normalized distribution for every source as follows:
Xi = xi / xi,opt
Thus, the fairness index F equals the square of the sum of the distributions divided by n times the sum of their squares:
F = (Σ(i=1..n) Xi)² / (n × Σ(i=1..n) Xi²)
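Jain's index can be computed in a few lines from the normalized distribution:

```python
def jain_fairness(alloc, alloc_opt):
    """Jain's index F = (sum X_i)^2 / (n * sum X_i^2), with
    X_i = x_i / x_i,opt the normalized allocation of source i.
    F = 1 means a perfectly fair share; F approaches 1/n when a
    single source grabs all the capacity."""
    X = [x / x_opt for x, x_opt in zip(alloc, alloc_opt)]
    n = len(X)
    return sum(X) ** 2 / (n * sum(x * x for x in X))
```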
We have used these two topologies to simulate the seven protocols for 1, 2, 4, 6, 8, 10, 12, 16 and 24 flows.
The fairness results are shown in the following graph:
Figure 5. Fairness for different flow numbers
2.5. Performance
The performance of a congestion control algorithm is a weighted combination of efficiency and fairness:
Performance = α × E + (1 − α) × F
with α ∈ [0.1, 0.9], E the efficiency and F the fairness.
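The weighted performance metric is a one-liner; α = 0.8 favours efficiency, α = 0.2 favours fairness, and α = 0.5 balances both:

```python
def performance(E, F, alpha):
    """P = alpha * E + (1 - alpha) * F, with alpha in [0.1, 0.9]:
    high alpha rewards efficient networks, low alpha rewards fair ones."""
    assert 0.1 <= alpha <= 0.9, "alpha must lie in [0.1, 0.9]"
    return alpha * E + (1 - alpha) * F
```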
For the different algorithms, we have calculated the performance for a rather efficient network with α = 0.8, for a rather fair network with α = 0.2, and for a balanced network with α = 0.5. The results for 1, 2, 4, 6, 8, 10, 12, 16 and 24 flows are as follows:
Figure 6.a. Performance for different flow numbers (α = 0.8)
Figure 6.b. Performance for different flow numbers (α = 0.5)
Figure 6.c. Performance for different flow numbers (α = 0.2)
2.6. Link Utilization per RTT
We varied the RTT for different congestion control algorithms to calculate the link utilization. The RTT values used are: 20 ms, 60 ms, 100 ms, 140 ms, 180 ms and 220 ms.
We used the basic topology with only one flow; the results are described in the following graph:
Figure 7. Link utilization for different RTTs
2.7. Link utilization for different queue sizes
Certainly the queue size has an important impact on the efficiency of the network, but which algorithm benefits most from this change? We varied the queue size for different congestion control algorithms to calculate the link utilization. The sizes used are: 50, 100, 150, 200, 250 and 300 packets.
We used the basic topology with one flow; the results are as follows:
Figure 8. Link utilization for different queue sizes
III. CONCLUSIONS
The large number of congestion control algorithms has never fully satisfied researchers, since this field keeps undergoing changes and enhancements driven by users' needs and the evolution of both hardware and software in telecommunications. Indeed, it is difficult to conceive an algorithm that gives satisfying results for all architectures.
During this research work, we simulated seven high speed congestion control protocols for 1, 2, 4, 6, 8, 10, 12, 16 and 24 flows. We evaluated them by calculating the efficiency, the fairness and the performance, while varying the RTT and the queue size.
Based on the simulation and the performance evaluation of the high speed protocols, we can conclude that:
• Some protocols perform well in some defined cases, but poorly in others.
• The network architecture has an important impact on the protocols' performance.
• For 24 flows, all protocols are unfair.
• For 24 flows, the protocols Bic TCP, Cubic TCP, Hamilton TCP, Scalable TCP and YeAH TCP are efficient.
• Scalable TCP is not TCP-friendly; it is even selfish, since the first flow gets the largest throughput and does not leave its place to any other flow, as is also the case for Cubic TCP, Hamilton TCP and Illinois TCP.
• The protocols Bic TCP, HighSpeed TCP and YeAH TCP are TCP-friendly.
• For low values of RTT, all the protocols use the network capacities badly.
• For low values of the queue size, Cubic TCP is the most efficient.
• YeAH TCP and Illinois TCP give the best results when the queue size is increased.
• Despite its aggressiveness, YeAH TCP is quite TCP-friendly. It is also quite performant, but it does not suit all architectures.
Further work is required to evaluate the performance of these protocols using Relentless TCP, which will be the topic of many research works in the near future. In particular, it should be determined whether the protocols that are more stable and take more time before falling into a new congestion, such as Cubic TCP and Illinois TCP, will exploit well the law that forms the basis of Relentless TCP, namely the reduction of the congestion window by the number of lost segments. It would also be interesting to build a model that dynamically changes the congestion control protocol used according to the number of flows, in order to better exploit the network capacities, as well as to study the QoS.
REFERENCES
[1] R. Mbarek, M. T. BenOthman and S. Nasri. “Fairness of High-Speed TCP Protocols with Different Flow Capacities”. Journal Of
Networks, Vol. 4, No. 3, pp. 163-169, May 2009.
[2] S. Hemminger. “TCP testing – Preventing global Internet warming”. LinuxConf Europe, Sep. 2007.
[3] S. Ha, L. Le, I. Rhee and L. Xu. “Impact of Background Traffic on Performance of High-Speed TCP Variant Protocols”. The
International Journal of Computer and Telecommunications Networking, Vol.51, May, 2007.
[4] M. Nabeshima and K. Yata. “Performance Evaluation and Comparison of Transport Protocols for Fast Long-Distance Networks”.
IEICE Trans. Commun., Vol.E89-B, No.4, pp. 1273-1283, Apr. 2006.
[5] M. Weigle, P. Sharma and J. Freeman. “Performance of Competing High-Speed TCP Flows”. Networking 2006, LNCS 3976, pp.
476-487, 2006.
[6] S. Floyd. “High-Speed TCP for Large Congestion Windows”. RFC 3649, Experimental, Dec. 2003.
[7] T. Kelly. “Scalable TCP : Improving Performance in High-Speed Wide Area Networks”. In Proceedings of PFLDnet, Geneva,
Switzerland, Feb. 2003.
[8] L. Xu, K. Harfoush, and I. Rhee. “Binary Increase Congestion Control for Fast, Long Distance Networks”. In Proceedings of IEEE INFOCOM, Hong Kong, Mar. 2004.
[9] I. Rhee and L. Xu. “CUBIC: A New TCP-Friendly High-Speed TCP Variant”. In Proceedings of PFLDnet, Lyon, France, Feb.
2005.
[10] R. N. Shorten and D. J. Leith. “H-TCP : TCP for high-speed and long-distance networks”. In Proceedings of PFLDnet, Argonne,
Feb. 2004.
[11] S. Liu, T. Basar and R. Srikant. “TCP-Illinois: A Loss- and Delay-based Congestion Control Algorithm for High-Speed Networks”. Performance Evaluation, 65(6-7): 417-440, Mar. 2008.
[12] A. Baiocchi, A. P. Castellani and F. Vacirca. “YeAH-TCP: Yet Another Highspeed TCP”. PFLDnet, Roma, Italy, Feb. 2007.
[13] J. Semke, J. Mahdavi, and M. Mathis. "Automatic TCP Buffer Tuning", Computer Communications Review, a publication of ACM
SIGCOMM, Vol. 28, No. 4, Oct. 1998.
[14] R. Mbarek, M. T. BenOthman and S. Nasri. “Performance Evaluation of Competing High-Speed TCP Protocols”. IJCSNS
International Journal of Computer Science and Network Security, VOL.8 No.6, pp. 99-105, June 2008.
[15] R. Jain. “Congestion Control and Traffic Management in ATM Networks : Recent Advances and A Survey”. Computer networks
and ISDN Systems, Nov. 1995.
Authors
M. Mohamed Ali Mani received his M.S. degree from ISITCOM, Hammam Sousse, Sousse University, Tunisia, in 2011. He is currently a lecturer in the Department of Computer Science at ISTLS in the same University. His research interests lie in the general area of computer networks, including high-speed transport protocols, flow control, and congestion control in high-speed and wireless networks.
Dr. Rachid Mbarek received his Ph.D. from the College of Engineering (ENIS), Sfax University, Tunisia, and his M.Sc. degree in Computer Engineering in 2003 from the same University. He is currently a Lecturer in the Department of Telecommunication at ISITCOM, Sousse University, Tunisia. His research interests lie in the general area of computer communication networks, including high-speed transport protocols, flow control and congestion control in high-speed networks, network performance analysis, and peer-to-peer networking.