Abstract: Long Term Evolution (LTE) is the current standard for mobile networks defined by the Third Generation Partnership Project (3GPP). LTE includes enhanced Multimedia Broadcast and Multicast Services (MBMS), also called evolved MBMS (eMBMS), in which the same content is transmitted to multiple users in a specific area. eMBMS was introduced in the 3GPP Release 8 specification to support the delivery of download and streaming content to groups of users in LTE mobile networks. For demanding multimedia services over LTE, an important goal is to improve robustness against packet losses. To support effective point-to-multipoint download and streaming delivery, 3GPP has therefore included an Application Layer Forward Error Correction (AL-FEC) scheme in the eMBMS standard, based on systematic fountain Raptor codes. Raptor coding is well suited to lossy transmission because the receiver can recover all of the source data from any sufficiently large subset of the received encoded symbols. In our work, in response to the emergence of an enhanced AL-FEC scheme, a Raptor code has been implemented, its performance evaluated, and simulation results obtained. Index Terms: long term evolution; multimedia broadcast multicast services; forward error correction; raptor codes
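The recovery property claimed for Raptor codes can be illustrated with a toy fountain code. The sketch below is not the standardized Raptor code (which adds a precoding stage and a structured degree distribution); it is a minimal LT-style code over GF(2) with function names of our own choosing, showing how all source blocks are recovered from any full-rank set of received symbols:

```python
import random

def fountain_encode(blocks, n_symbols, seed=1):
    """Produce n_symbols encoded symbols; each is the XOR of a random
    non-empty subset of the k source blocks, tagged with its subset mask."""
    rng = random.Random(seed)
    k = len(blocks)
    symbols = []
    for _ in range(n_symbols):
        mask = rng.randrange(1, 1 << k)
        value = 0
        for i in range(k):
            if mask >> i & 1:
                value ^= blocks[i]
        symbols.append((mask, value))
    return symbols

def fountain_decode(symbols, k):
    """Gauss-Jordan elimination over GF(2): any k linearly independent
    received symbols recover all k source blocks; returns None otherwise."""
    pivots = {}                               # pivot column -> (mask, value)
    for mask, value in symbols:
        for col, (pm, pv) in pivots.items():  # reduce by existing pivots
            if mask >> col & 1:
                mask ^= pm
                value ^= pv
        if mask == 0:
            continue                          # linearly dependent: no new info
        col = (mask & -mask).bit_length() - 1
        for c, (pm, pv) in list(pivots.items()):
            if pm >> col & 1:                 # clear new pivot from old rows
                pivots[c] = (pm ^ mask, pv ^ value)
        pivots[col] = (mask, value)
    if len(pivots) < k:
        return None                           # not enough independent symbols
    return [pivots[c][1] for c in range(k)]
```

Any three independent combinations of three source blocks suffice; extra or dependent symbols are simply ignored, which is what makes fountain codes robust to packet loss.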
AODV Improvement by Modification at Source Node and Securing It from Black Ho... (IJERA Editor)
MANETs suffer from constraints in power, storage, and computational resources; as a result, they are more vulnerable to various communication-security attacks. We therefore focus on analyzing and improving the security of a routing protocol for MANETs, namely the Ad hoc On-demand Distance Vector (AODV) routing protocol, and propose modifications to AODV together with an algorithm to counter the black hole attack. Every route has a unique destination sequence number; a malicious node advertises the highest destination sequence number, and its RREP is typically the first to arrive. In standard AODV the comparison is made only against the first entry in the route table, without checking the other entries, which is what the black hole node exploits.
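The counter-measure described here amounts to a filter over the collected RREPs. The code below is our illustrative heuristic, not the authors' exact algorithm: it withholds the routing decision until several RREPs are in, then discards replies whose destination sequence number is implausibly far above the median:

```python
def select_route(rreps, factor=2.0):
    """Each RREP is (node_id, dest_seq, hop_count). A black hole advertises
    an abnormally high dest_seq to win the route; instead of accepting the
    first reply, compare all of them and drop statistical outliers."""
    if len(rreps) == 1:
        return rreps[0]                        # nothing to compare against
    seqs = sorted(seq for _, seq, _ in rreps)
    median = seqs[len(seqs) // 2]
    plausible = [r for r in rreps if r[1] <= factor * max(median, 1)]
    candidates = plausible or rreps            # never end up with no route
    # among plausible replies prefer the freshest route, then fewest hops
    return max(candidates, key=lambda r: (r[1], -r[2]))
```

With RREPs from nodes A (seq 12, 3 hops), B (seq 14, 2 hops), and a suspected black hole M (seq 99999, 1 hop), the outlier M is discarded and B is selected.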
A THROUGHPUT ANALYSIS OF TCP IN ADHOC NETWORKS (csandit)
This document analyzes the throughput of TCP in mobile ad hoc networks through simulations. It finds that TCP throughput decreases initially as the number of hops increases, then stabilizes at higher hop counts. This is due to hidden terminal problems at low hops. The number of retransmissions increases with payloads and flows due to buffering and congestion. TCP performance degrades in wireless networks because it cannot differentiate between congestion and non-congestion packet losses. Mobility, interference, and dynamic topology changes specific to wireless networks cause unnecessary triggering of TCP congestion control mechanisms.
TCP Fairness for Uplink and Downlink Flows in WLANs (ambitlick)
The document proposes a dual queue scheme at access points to improve fairness between uplink and downlink TCP flows in wireless local area networks. The scheme employs two queues - one for downlink TCP data packets and another for uplink TCP ACK packets. By selecting the queues with different probabilities, the access point can control the ratio of TCP data and ACK sending rates to achieve fairness. Simulation results show that the dual queue scheme is effective at resolving the unfairness problem in a simple way without modifying existing MAC protocols or requiring per-flow queueing.
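The queue-selection step fits in a few lines. This is a schematic of the dual-queue idea under our own naming, not the paper's implementation: the access point serves the downlink-data queue with probability p_data and the ACK queue otherwise, which fixes the ratio of data to ACK service rates:

```python
import random

def dequeue_next(data_q, ack_q, p_data, rng=random):
    """Serve the downlink TCP-data queue with probability p_data and the
    uplink TCP-ACK queue otherwise; fall back to whichever queue is
    non-empty. Tuning p_data sets the data-to-ACK sending-rate ratio."""
    first, second = (data_q, ack_q) if rng.random() < p_data else (ack_q, data_q)
    for q in (first, second):
        if q:
            return q.pop(0)
    return None                                # both queues empty
```

Because the choice happens per transmission opportunity at the AP, no MAC changes or per-flow queues are needed, matching the paper's claim of simplicity.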
Proposition of an Adaptive Retransmission Timeout for TCP in 802.11 Wireless ... (IJERA Editor)
The Transmission Control Protocol (TCP) is used to establish and control a session between two endpoints. The problem is that in 802.11 wireless environments TCP always assumes that packet loss is caused by network congestion, whereas in these networks packet losses are usually caused by high bit error rates and wireless link failures. Researchers have found that TCP performance in wireless networks can be greatly enhanced if the causes of packet loss can be identified, since appropriate measures can then be applied dynamically during an established TCP session to adjust the session parameters. This paper proposes an end-to-end adaptive mechanism that allows a TCP session to dynamically adjust its RTO (Retransmission Timeout): the server adjusts its timers based on feedback from clients. Feedback is piggybacked in the TCP Options header field of the ACK (Acknowledgment) messages, and approximates the time the wireless channel needs to recover from errors. The mechanism has been validated using numerical analysis and simulations, and compared to the original TCP protocol. Simulation results show better performance in terms of the number of retransmissions at the server side, owing to the decrease in the number of timeouts, and thus lower congestion at the wireless access point.
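The timer adjustment described above can be sketched as a small extension of the usual SRTT/RTTVAR timeout computation. The names and the additive form below are our assumptions, not the authors' exact formula:

```python
def adaptive_rto(srtt, rttvar, wireless_feedback=None, k=4, min_rto=0.2):
    """Classic-style RTO (SRTT + k*RTTVAR) stretched by client feedback:
    the feedback, piggybacked in the ACK's TCP Options field, approximates
    the extra time the wireless channel needs to recover from errors, so
    the server waits that much longer before declaring a timeout."""
    rto = srtt + k * rttvar
    if wireless_feedback is not None:
        rto += wireless_feedback
    return max(rto, min_rto)
```

Stretching the timer by the reported channel-recovery time is what suppresses the spurious retransmissions counted in the paper's simulations.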
The document discusses network layer concepts including packet switching, IP addressing, and fragmentation. It provides details on:
- Packet switching breaks data into packets that are routed independently and reassembled at the destination. This allows for more efficient use of bandwidth compared to circuit switching.
- IP addresses in IPv4 are 32-bit numbers that identify devices on the network. Addresses are expressed in decimal notation like 192.168.1.1. Fragmentation breaks packets larger than the MTU into smaller fragments for transmission.
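The fragmentation point can be made concrete. A minimal IPv4-style fragmenter in Python (function and field names ours), with offsets in 8-byte units and a more-fragments flag as in the real header:

```python
def fragment(payload, ident, mtu, header_len=20):
    """Split a payload into fragments fitting the MTU. Per IPv4 rules the
    data in every non-final fragment is a multiple of 8 bytes, and the
    fragment offset is expressed in 8-byte units."""
    max_data = (mtu - header_len) // 8 * 8
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = offset + len(chunk) < len(payload)   # MF flag
        fragments.append((ident, offset // 8, more, chunk))
        offset += len(chunk)
    return fragments
```

A 4000-byte payload over a 1500-byte MTU yields three fragments of 1480, 1480, and 1040 data bytes, which the destination reassembles using the shared identifier and the offsets.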
The document is a paper on the network layer written by Muhammad Adil Raja. It begins with an introduction that defines the key functions of the network layer, including getting packets from source to destination across multiple hops. It then outlines the topics to be covered, which are network layer design issues, routing algorithms, congestion control algorithms, and references. The body of the document discusses these topics in more detail through several sections. It covers issues like whether the network layer should provide connection-oriented or connectionless service, and compares virtual circuit and datagram networks. It also examines routing algorithms and the optimality principle.
ANALYSIS OF ROUTING PROTOCOLS IN WIRELESS MESH NETWORK (IJCSIT Journal)
There are two ways to improve the performance of routing protocols in wireless mesh networks. One is to improve the methods used to select the path; the other is to extend the algorithms to account for the new characteristics of wireless mesh networks. We also propose a new protocol for Multiple Interfaces and Multiple Channels (MIMC), named the Hybrid Wireless Mesh Protocol.
Signal strength based congestion control in MANET (Alexander Decker)
This document discusses several algorithms and approaches for improving TCP performance in mobile ad hoc networks (MANETs) using signal strength measurements:
1. TCP Venoplus is a cross-layer approach that uses signal strength information from the MAC layer to better distinguish between packet losses due to congestion versus random errors.
2. Other algorithms proposed include Link-RED to tune wireless link drop probability, adaptive pacing to improve spatial reuse, and algorithms to predict link failures and find stable routes using signal strength measurements.
3. Received signal strength is also used in algorithms to minimize broadcast storms and adapt MAC layer retransmissions to distinguish true link failures from false ones caused by interference.
4. Gray zone prediction and multi-agent adaptive DSR are also discussed as techniques for anticipating link failures.
Signal strength based congestion control in MANET (Alexander Decker)
This document discusses several algorithms and approaches for improving TCP performance in mobile ad hoc networks (MANETs) using signal strength measurements:
1. TCP Venoplus is a cross-layer approach that uses signal strength information from the MAC layer to better distinguish between packet losses due to congestion versus random errors.
2. Other algorithms proposed include Link-RED and adaptive pacing to reduce the impact of wireless interference on TCP, as well as techniques to predict and avoid link failures before they occur such as gray zone prediction and multi-agent adaptive DSR.
3. Most of the algorithms aim to minimize unnecessary packet retransmissions when losses are due to non-congestion factors like mobility, thereby improving throughput.
The document describes an opportunistic packet scheduling and media access control (OSMA) protocol for wireless LANs and multi-hop ad hoc networks. The OSMA protocol aims to alleviate the head-of-line blocking problem and exploit multiuser diversity by allowing a node to schedule transmissions to receivers with good channel conditions. The key mechanisms of OSMA are multicast RTS frames containing a list of candidate receivers, and priority-based CTS frames where the receiver with the best channel and highest priority replies first to avoid collisions. Simulation results show the OSMA protocol can significantly improve network throughput while maintaining fairness between links.
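The priority-based CTS rule can be approximated in a few lines. The sketch below compresses OSMA's timing mechanism into a loop (names ours): candidates are listed in the multicast RTS in priority order, and the first one whose instantaneous channel quality clears the threshold effectively answers first:

```python
def pick_receiver(rts_candidates, snr, threshold):
    """rts_candidates: receiver ids in the priority order carried by the
    multicast RTS. snr: current channel quality per receiver. The earliest
    listed receiver with a good enough channel wins the CTS slot, which is
    how OSMA avoids CTS collisions while exploiting multiuser diversity."""
    for rx in rts_candidates:
        if snr.get(rx, 0.0) >= threshold:
            return rx
    return None                     # no candidate answered; sender backs off
```

Rotating the priority order across attempts is what preserves fairness between links while still favoring good channels on each transmission.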
AN EXPLICIT LOSS AND HANDOFF NOTIFICATION SCHEME IN TCP FOR CELLULAR MOBILE S... (IJCNC Journal)
With the proliferation of mobile and wireless computing devices, the demand for continuous network connectivity exists across various wired-and-wireless integrated networks. Since the Transmission Control Protocol (TCP) is the standard protocol for communication on the Internet, any wireless network offering Internet service needs to be compatible with TCP. TCP is tuned to perform well in traditional wired networks, where packet losses occur mostly because of congestion. Cellular wireless networks, however, suffer significant losses due to high bit error rates and mobile handoff. TCP responds to all losses by invoking its congestion control and avoidance algorithms, resulting in degraded end-to-end performance. This paper presents an improved Explicit Loss Notification algorithm to distinguish between packet loss due to congestion and packet loss due to wireless errors and handoffs. Simulation results show that the proposed protocol significantly improves the performance of TCP over cellular wireless networks in terms of throughput and congestion window dynamics.
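The sender-side reaction the scheme implies can be sketched as follows (our simplification, not the paper's full algorithm): when the improved ELN marks a loss as a wireless error or handoff, the sender retransmits without touching its congestion window; only unmarked losses trigger the usual multiplicative decrease:

```python
def react_to_loss(cwnd, ssthresh, eln_marked):
    """Return the (cwnd, ssthresh) pair after a loss. ELN-marked losses are
    wireless/handoff losses: retransmit but keep the window. Unmarked losses
    are treated as congestion, with the standard halving of the window."""
    if eln_marked:
        return cwnd, ssthresh
    new_ssthresh = max(cwnd // 2, 2)
    return new_ssthresh, new_ssthresh
```

Keeping the window across wireless losses is exactly what produces the improved congestion-window dynamics the simulations report.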
This document discusses various approaches to improving TCP performance over mobile networks. It describes Indirect TCP, Snooping TCP, Mobile TCP, optimizations like fast retransmit/recovery and transmission freezing, and transaction-oriented TCP. Each approach is summarized in terms of its key mechanisms, advantages, and disadvantages. Overall, the document evaluates different ways TCP has been adapted to better support mobility and address challenges like frequent disconnections, packet losses during handovers, and high bit error rates over wireless links.
This document analyzes the performance of LTE networks using random linear network coding (RLNC) at the application layer. The study uses a simulation setup with a remote host, eNodeB, and user equipment to send coded and uncoded packets. The results show that applying RLNC at the application layer improves throughput but increases packet delay compared to not using network coding. MIMO antenna techniques are also found to provide higher throughput than SISO.
The document discusses the application layer in computer networks. It describes several key application layer protocols including HTTP, FTP, email, telnet, SSH, DNS, and SNMP. It explains the client-server and peer-to-peer paradigms used in application layer communication. It also provides details about the World Wide Web including components like browsers, servers, documents and protocols like HTTP, HTML, and URLs.
To enhance the network efficiency of Mobile Ad hoc Networks (MANETs), a Power Unbiased Cooperative Media Access Control (PUC-MAC) protocol is proposed in this paper. It adopts a best-partner selection criterion to choose the cooperative node with the better channel condition, higher delivery rate, and more balanced power consumption. Simulation results show that PUC-MAC outperforms EC-MAC, Cooperative MAC (CoopMAC), and the IEEE 802.11 Distributed Coordination Function (DCF) in terms of packet delivery ratio, network throughput, and network lifetime under two distinct channel noise levels, especially under the worst channel condition.
IV B.Tech I Sem CSE&IT students under the JNTUK R10 regulation have a Mobile Computing paper. These slides contain the complete UNIT-5 material required for the end exams.
The transport layer accepts data from the session layer, breaks it into packets, and delivers the packets to the network layer. It provides end-to-end communication and ensures reliable delivery of data. The network interface layer sends and receives TCP/IP packets on the network medium. It encompasses the data link and physical layers of the OSI model. TCP/IP is independent of the specific network technology.
This document describes a proposed system called Enhanced Adaptive Acknowledgement (EAACK) for detecting misbehaving nodes in mobile ad hoc networks (MANETs). The system uses three components - ACK, Secure ACK, and Misbehavior Report Analysis. ACK provides end-to-end acknowledgment, S-ACK provides acknowledgment between three consecutive nodes, and MRA confirms any misbehavior reports. Digital signatures are also used to validate acknowledgments. The system is simulated using the NS-2 network simulator and results show it can effectively detect misbehaving nodes while maintaining good network performance.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document discusses several approaches to improving TCP performance over mobile networks:
Traditional TCP uses slow start and congestion control mechanisms that reduce efficiency over mobile networks. Indirect TCP segments connections and uses a proxy to improve performance. Snooping TCP buffers packets to enable fast retransmissions without changing endpoints. Mobile TCP freezes the sender's window on disconnection to avoid unnecessary retransmissions. These methods aim to isolate wireless losses from congestion responses and enable fast recovery from errors and handovers.
This chapter discusses end-to-end transport protocols like UDP and TCP. UDP provides a simple demultiplexing service but does not guarantee delivery. TCP provides a reliable byte-stream service using a sliding window algorithm to ensure reliable, in-order delivery along with flow and congestion control. It establishes connections using a three-way handshake and terminates them gracefully. The chapter covers TCP and UDP headers, connection management, sliding window mechanics, and differences between flow and congestion control.
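The chapter's sliding-window bookkeeping reduces to two small rules, sketched here under our own names: a sender may transmit while fewer than `window` segments are outstanding, and a cumulative ACK only ever moves the window base forward:

```python
def can_send(next_seq, base, window):
    """True while fewer than `window` segments are unacknowledged
    (base is the oldest unacknowledged sequence number)."""
    return next_seq - base < window

def on_cumulative_ack(base, ack):
    """Slide the window base forward on a cumulative ACK; a stale or
    duplicate ACK never moves it backward."""
    return max(base, ack)
```

Flow control and congestion control differ only in who sets `window`: the receiver's advertised window versus the sender's congestion window, with the effective window being the minimum of the two.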
The threefold need for higher data rates, good quality of service, and ubiquity in a converged all-IP communication cloud drives research in wireless communication. Wireless access networks are envisaged as candidates for the next-generation wireless networks, and the various access networks will be integrated with other technologies, including the wired backbone. The major issues in all-IP, converged networks are quality of service, seamless handover, and network capacity. Emerging research seeks to address these open issues; one example is the implementation of multi-channel and multi-radio MAC protocols in wireless mesh networks (WMNs). In this paper we analyze and evaluate the effectiveness of multi-channel and multi-radio techniques in WMNs, highlight the shortcomings of these schemes, and suggest possible solutions. The signalling delay metric is used for evaluation purposes; the focus is on the performance of the control channel, identified as the critical performance metric of multi-channel MAC protocols.
The document summarizes key concepts about the data link layer. It discusses the goals and services of the data link layer, including error detection and correction, sharing broadcast channels through multiple access protocols, link layer addressing, and reliable data transfer and flow control. It also provides an overview of common link layer technologies and implementation through instantiation of various data link layer protocols.
A New Data Link Layer Protocol for Satellite IP Networks (Niraj Solanki)
NSLP is a new satellite link protocol proposed for satellite IP networks. It simplifies the data link layer function and uses a variable length frame format coupled to IP packet data. This leads to higher transmission efficiency, lower IP packet loss rates, and better compatibility with TCP/IP compared to other protocols like CCSDS and HDLC. Simulation results show NSLP can improve performance for satellite IP networks by reducing overhead and improving utilization of limited satellite resources.
The document discusses the data link layer and outlines its key functions and design considerations. It covers three main services provided by the data link layer to the network layer: unacknowledged connectionless, acknowledged connectionless, and acknowledged connection-oriented. It also discusses framing of data, error detection and correction techniques, and elementary and sliding window protocols used at the data link layer. The goal is to study how the data link layer provides reliable and efficient communication between network layers on different machines.
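Of the error-detection techniques mentioned, the CRC is easy to show end to end. A textbook GF(2) long-division CRC in Python (function names ours):

```python
def mod2_div(value, poly):
    """Remainder of polynomial division over GF(2): repeatedly XOR the
    generator, aligned to the dividend's leading bit, until the remainder's
    degree drops below the generator's."""
    while value.bit_length() >= poly.bit_length():
        value ^= poly << (value.bit_length() - poly.bit_length())
    return value

def make_frame(data, poly):
    """Shift the data past the checksum field and append the CRC remainder;
    the receiver accepts a frame iff mod2_div(frame, poly) == 0."""
    degree = poly.bit_length() - 1
    return (data << degree) | mod2_div(data << degree, poly)
```

For the classic textbook example, message 1101011011 with generator 10011, the appended checksum is 1110, and any single-bit corruption of the resulting frame leaves a nonzero remainder at the receiver.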
System identification of a steam distillation pilot scale using arx and narx ... (eSAT Journals)
Abstract: This paper presents steam temperature models for a steam distillation pilot-scale (SDPS) system by comparing a Pseudo-Random Binary Sequence (PRBS) against a Multi-Sine (M-Sine) perturbation signal. Both perturbation signals were applied to the nonlinear steam distillation system to study their capability to excite the nonlinear system dynamics. In this work, both linear and nonlinear ARX model structures were investigated. Five statistical measures were used to evaluate the developed steam temperature models: the coefficient of determination, R2; the auto-correlation function, ACF; the cross-correlation function, CCF; the root mean square error, RMSE; and the residual histogram. The results showed that the nonlinear ARX models are superior to the linear models when the M-Sine perturbation is applied to the steam distillation system, while the PRBS perturbation proved insufficient to model the nonlinear system dynamics. Keywords: SDPS, PRBS, M-Sine, ARX, R2, ACF, CCF, RMSE
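The linear ARX structure mentioned above can be identified with plain least squares. A dependency-free Python sketch (a real study would use MATLAB's System Identification Toolbox or numpy.linalg.lstsq); the regressor layout is the standard one, y[t] = a1*y[t-1] + ... + b1*u[t-1] + ...:

```python
def fit_arx(u, y, na, nb):
    """Least-squares fit of a SISO ARX(na, nb) model via the normal
    equations, solved by Gaussian elimination with partial pivoting.
    Returns the parameter vector [a1..a_na, b1..b_nb]."""
    start = max(na, nb)
    rows = [[y[t - i] for i in range(1, na + 1)] +
            [u[t - j] for j in range(1, nb + 1)] for t in range(start, len(y))]
    target = [y[t] for t in range(start, len(y))]
    n = na + nb
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * v for r, v in zip(rows, target)) for i in range(n)]
    for i in range(n):                       # forward elimination
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    theta = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        s = sum(A[i][c] * theta[c] for c in range(i + 1, n))
        theta[i] = (b[i] - s) / A[i][i]
    return theta
```

On noise-free data generated from a known ARX(1,1) model, the fit recovers the true coefficients exactly, which is a useful sanity check before applying it to measured SDPS data.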
Finite element simulation of hybrid welding process for welding 304 austeniti... (eSAT Journals)
Abstract: Although autogenous laser welding has many advantages over traditional welding methods in many applications, the process still has the main disadvantage of poor gap-bridging capability, which limits its applicability for wider industrial use. Owing to this limiting factor, a great deal of research has been carried out to overcome this disadvantage by using an arc source with laser welding. The combination of laser and arc (MIG/TIG) welding processes in the same process zone is known as hybrid welding. The process involves very high peak temperatures and rapid changes in thermal cycle, both of which are difficult to measure in real time. In this dissertation work, a 3-dimensional finite element model was developed for the analysis of the hybrid welding process, and ANSYS Parametric Design Language (APDL) code was developed for it. The FEA results were validated against experimental results, showing good agreement. Hybrid welding simulations were carried out for an AISI 304 austenitic stainless steel plate. The effects of laser beam power, arc welding, and torch angle on the weld-bead geometry, i.e., depth of penetration (DP) and welded-zone width (BW), were investigated. The experimental plan was based on a three-factor, five-level central composite rotatable design. Second-order polynomial equations for predicting the weld-bead geometry were developed for bead width and depth of penetration. The results indicate that the effect of arc current (AC) on bead width was greater than on depth of penetration. Hence, the proposed models predict the responses adequately within the limits of the welding parameters used. Index Terms: Hybrid Welding, ANSYS Parametric Design Language (APDL), FEA, AISI 304 Austenitic Stainless Steel, Central Composite Rotatable Design
Implementation of cyclic convolution based on FNT (eSAT Journals)
This document describes an implementation of cyclic convolution based on the Fermat Number Transform (FNT). It proposes using a Code Convolution method Without Addition (CCWA) and a Butterfly Operation method Without Addition (BOWA) to perform the FNT and inverse FNT without carry propagation additions, except for the final stages. This reduces delay compared to other cyclic convolution architectures. A parallel architecture is presented using these techniques along with Modulo 2n+1 Partial Product Multipliers to perform the point-wise multiplications with less hardware complexity. Synthesis results demonstrate this architecture has better throughput and less hardware complexity than reported solutions.
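The arithmetic behind an FNT-based cyclic convolution fits in a short script. The sketch below works modulo the Fermat prime 257 with a naive O(n^2) transform, just to show that the transform is exact (no rounding, unlike an FFT); the hardware techniques in the paper (CCWA, BOWA, modulo 2^n+1 multipliers) are about implementing exactly this arithmetic efficiently:

```python
P = 257                              # Fermat prime F3 = 2**(2**3) + 1

def fnt(a, root):
    """Naive Fermat number transform: a DFT with a root of unity in Z_257,
    so all arithmetic is exact integer arithmetic."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, P) for j in range(n)) % P
            for i in range(n)]

def cyclic_convolution(a, b):
    """Cyclic convolution via transform, point-wise product, and inverse
    transform. Valid when len(a) divides 16 (the order of 2 mod 257) and
    the true convolution values stay below 257."""
    n = len(a)
    root = pow(2, 16 // n, P)        # primitive n-th root of unity mod 257
    prod = [x * y % P for x, y in zip(fnt(a, root), fnt(b, root))]
    inv_root = pow(root, P - 2, P)
    inv_n = pow(n, P - 2, P)
    return [v * inv_n % P for v in fnt(prod, inv_root)]
```

Because the root of unity is a power of 2, the point-wise multiplications and modular reductions map to shifts and additions in hardware, which is what makes Fermat moduli attractive for convolution architectures.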
Oscillatory motion control of hinged body using controller (eSAT Journals)
Abstract: Due to the technological revolution, there has been a change in the instruments and equipment used in daily life, whether for leisure or as necessities. In the past a person often needed another person's help, but today's fast pace of life has restricted this helpful side of human nature. This project helps eliminate such dependence in certain cases. Oscillatory motion is very common everywhere, but its control has not yet been devised tactfully, so we attempt to automate it while keeping in mind constraints such as cost, power consumption, safety, portability, and ease of operation. A proper amalgamation of hardware and software makes the project flexible and robust. The repetitive, monotonous, and continuous operation is made simple by the use of a PIC microcontroller. No existing prototype or research paper on this subject was found; it is probably the first of its kind.
Study and comparative analysis of resonant frequency for microstrip fractal an... (eSAT Journals)
Abstract: A new compact fractal patch antenna is designed based on fractal geometry. Based on the simulation results, the proposed antenna shows an excellent size-reduction possibility with good radiation performance for wireless communication applications. The change in resonant frequency with respect to the dielectric constant of the substrate is shown and discussed in this paper. The resonating frequencies of the designed antenna are 8.59 GHz, 9.2 GHz, and 11.36 GHz for RT Duroid Rogers 6010, FR-4, and RT Duroid Rogers 5880 respectively. The S-parameter (S11) at the resonating frequencies is well below -10 dB. The far-field pattern and S11 of the proposed antenna are simulated and analyzed using CST Microwave Studio 2011.
11.signal strength based congestion control in manetAlexander Decker
This document discusses several algorithms and approaches for improving TCP performance in mobile ad hoc networks (MANETs) using signal strength measurements:
1. TCP Venoplus is a cross-layer approach that uses signal strength information from the MAC layer to better distinguish between packet losses due to congestion versus random errors.
2. Other algorithms proposed include Link-RED and adaptive pacing to reduce the impact of wireless interference on TCP, as well as techniques to predict and avoid link failures before they occur such as gray zone prediction and multi-agent adaptive DSR.
3. Most of the algorithms aim to minimize unnecessary packet retransmissions when losses are due to non-congestion factors like mobility, thereby improving throughput
The document describes an opportunistic packet scheduling and media access control (OSMA) protocol for wireless LANs and multi-hop ad hoc networks. The OSMA protocol aims to alleviate the head-of-line blocking problem and exploit multiuser diversity by allowing a node to schedule transmissions to receivers with good channel conditions. The key mechanisms of OSMA are multicast RTS frames containing a list of candidate receivers, and priority-based CTS frames where the receiver with the best channel and highest priority replies first to avoid collisions. Simulation results show the OSMA protocol can significantly improve network throughput while maintaining fairness between links.
AN EXPLICIT LOSS AND HANDOFF NOTIFICATION SCHEME IN TCP FOR CELLULAR MOBILE S...IJCNCJournal
With the proliferation of mobile and wireless computing devices, the demand for continuous network connectivity exits for various wired-and-wireless integrated networks. Since Transmission Control Protocol (TCP) is the standard network protocol for communication on the Interne, any wireless network with Internet service need to be compatible with TCP. TCP is tuned to perform well in traditional wired
networks, where packet losses occur mostly because of congestion. However cellular wireless network
suffers from significant losses due to high bit errors and mobile handoff. TCP responds to all losses by
invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance. This
paper presents an improved Explicit Loss Notification algorithm to distinguish between packet loss due to congestion and packet loss due to wireless errors and handoffs. Simulation results show that the proposed protocol significantly improves the performance of TCP over cellular wireless network in terms of throughput and congestion window dynamics.
This document discusses various approaches to improving TCP performance over mobile networks. It describes Indirect TCP, Snooping TCP, Mobile TCP, optimizations like fast retransmit/recovery and transmission freezing, and transaction-oriented TCP. Each approach is summarized in terms of its key mechanisms, advantages, and disadvantages. Overall, the document evaluates different ways TCP has been adapted to better support mobility and address challenges like frequent disconnections, packet losses during handovers, and high bit error rates over wireless links.
This document analyzes the performance of LTE networks using random linear network coding (RLNC) at the application layer. The study uses a simulation setup with a remote host, eNodeB, and user equipment to send coded and uncoded packets. The results show that applying RLNC at the application layer improves throughput but increases packet delay compared to not using network coding. MIMO antenna techniques are also found to provide higher throughput than SISO.
The document discusses the application layer in computer networks. It describes several key application layer protocols including HTTP, FTP, email, telnet, SSH, DNS, and SNMP. It explains the client-server and peer-to-peer paradigms used in application layer communication. It also provides details about the World Wide Web including components like browsers, servers, documents and protocols like HTTP, HTML, and URLs.
To enhance the network efficiency of Mobile Ad hoc Networks (MANETs), a Power Unbiased Cooperative Media Access Control (PUC-MAC) protocol is proposed in this paper. It adopts an optimal partner-selection criterion to choose the cooperative node with the better channel condition, higher delivery rate and more balanced power consumption. Simulation results show that PUC-MAC outperforms EC-MAC, Cooperative MAC (CoopMAC) and the IEEE 802.11 Distributed Coordination Function (DCF) in terms of packet delivery ratio, network throughput and network lifetime under two distinct channel noise levels, especially under the worst channel condition.
IV B.Tech I Sem CSE & IT JNTUK R10 regulation students have a Mobile Computing paper. These slides contain the complete UNIT-5 material required for the end exams.
The transport layer accepts data from the session layer, breaks it into packets, and delivers the packets to the network layer. It provides end-to-end communication and ensures reliable delivery of data. The network interface layer sends and receives TCP/IP packets on the network medium. It encompasses the data link and physical layers of the OSI model. TCP/IP is independent of the specific network technology.
This document describes a proposed system called Enhanced Adaptive Acknowledgement (EAACK) for detecting misbehaving nodes in mobile ad hoc networks (MANETs). The system uses three components - ACK, Secure ACK, and Misbehavior Report Analysis. ACK provides end-to-end acknowledgment, S-ACK provides acknowledgment between three consecutive nodes, and MRA confirms any misbehavior reports. Digital signatures are also used to validate acknowledgments. The system is simulated using the NS-2 network simulator and results show it can effectively detect misbehaving nodes while maintaining good network performance.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details, or to submit your article, please visit www.ijera.com
This document discusses several approaches to improving TCP performance over mobile networks:
Traditional TCP uses slow start and congestion control mechanisms that reduce efficiency over mobile networks. Indirect TCP segments connections and uses a proxy to improve performance. Snooping TCP buffers packets to enable fast retransmissions without changing endpoints. Mobile TCP freezes the sender's window on disconnection to avoid unnecessary retransmissions. These methods aim to isolate wireless losses from congestion responses and enable fast recovery from errors and handovers.
This chapter discusses end-to-end transport protocols like UDP and TCP. UDP provides a simple demultiplexing service but does not guarantee delivery. TCP provides a reliable byte-stream service using a sliding window algorithm to ensure reliable, in-order delivery along with flow and congestion control. It establishes connections using a three-way handshake and terminates them gracefully. The chapter covers TCP and UDP headers, connection management, sliding window mechanics, and differences between flow and congestion control.
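The interplay between flow control (the receiver's advertised window) and congestion control (the congestion window) mentioned above reduces to a simple cap on in-flight data; a minimal sketch with made-up byte counts:

```python
def usable_window(last_byte_sent, last_byte_acked, rwnd, cwnd):
    """Bytes a TCP sender may still push: the advertised (flow-control)
    window rwnd and the congestion window cwnd both cap in-flight data."""
    in_flight = last_byte_sent - last_byte_acked
    return max(0, min(rwnd, cwnd) - in_flight)

print(usable_window(5000, 3000, 4000, 6000))  # -> 2000
```

The distinction the chapter draws is visible here: rwnd protects the receiver's buffer, while cwnd protects the network, and the sender obeys whichever is smaller.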
The three-fold need for higher data rates, good quality of service and ubiquity in a converged all-IP communication cloud drives research in wireless communication. Wireless access networks are envisaged as candidates for the next generation of wireless networks. The various access networks will be integrated with other technologies, including the wired backbone. The major issues in all-IP converged networks are quality of service, seamless handover and network capacity. Emerging research seeks to address these open research issues; for example, the implementation of multi-channel and multi-radio MAC protocols in WMN. In this paper we analyze and evaluate the effectiveness of multi-channel and multi-radio techniques in WMN. The shortcomings of these schemes are highlighted and possible solutions are suggested. The signalling delay metric is used for evaluation purposes. The focus is on the performance of the control channel, identified as the critical performance metric of multi-channel MAC protocols.
The document summarizes key concepts about the data link layer. It discusses the goals and services of the data link layer, including error detection and correction, sharing broadcast channels through multiple access protocols, link layer addressing, and reliable data transfer and flow control. It also provides an overview of common link layer technologies and implementation through instantiation of various data link layer protocols.
A New Data Link Layer Protocol for Satellite IP Networks (Niraj Solanki)
NSLP is a new satellite link protocol proposed for satellite IP networks. It simplifies the data link layer function and uses a variable length frame format coupled to IP packet data. This leads to higher transmission efficiency, lower IP packet loss rates, and better compatibility with TCP/IP compared to other protocols like CCSDS and HDLC. Simulation results show NSLP can improve performance for satellite IP networks by reducing overhead and improving utilization of limited satellite resources.
The document discusses the data link layer and outlines its key functions and design considerations. It covers three main services provided by the data link layer to the network layer: unacknowledged connectionless, acknowledged connectionless, and acknowledged connection-oriented. It also discusses framing of data, error detection and correction techniques, and elementary and sliding window protocols used at the data link layer. The goal is to study how the data link layer provides reliable and efficient communication between network layers on different machines.
System identification of a steam distillation pilot scale using ARX and NARX ... (eSAT Journals)
Abstract This paper presents steam temperature models for a steam distillation pilot-scale (SDPS) system by comparing Pseudo Random Binary Sequence (PRBS) versus Multi-Sine (M-Sine) perturbation signals. Both perturbation signals were applied to the nonlinear steam distillation system to study their capability to excite the nonlinearity of the system dynamics. In this work, both linear and nonlinear ARX model structures have been investigated. Five statistical measures were used to evaluate the developed steam temperature models, namely the coefficient of determination, R2; the auto-correlation function, ACF; the cross-correlation function, CCF; the root mean square error, RMSE; and the residual histogram. The results showed that the nonlinear ARX models are superior to the linear models when M-Sine perturbation is applied to the steam distillation system, while PRBS perturbation proved insufficient to model the nonlinear system dynamics. Keywords: SDPS, PRBS, M-Sine, ARX, R2, ACF, CCF, RMSE
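Two of the evaluation measures named above, RMSE and the coefficient of determination R2, have standard closed forms; a minimal sketch:

```python
import math

def rmse(y, yhat):
    """Root mean square error between measured y and model output yhat."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot
```

A perfect model gives RMSE 0 and R2 1; comparing these two figures across the linear and nonlinear ARX fits is exactly the ranking exercise the abstract describes.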
Finite element simulation of hybrid welding process for welding 304 austeniti... (eSAT Journals)
Abstract Although autogenous laser welding has many advantages over traditional welding methods in many applications, the process still has the main disadvantage of poor gap-bridging capability, which limits its applicability for wider industrial use. Owing to this limiting factor, a great deal of research work has been carried out to overcome this disadvantage by combining an arc source with laser welding. The combination of laser and arc (MIG/TIG) welding processes in the same process zone is known as hybrid welding. The process involves a very high peak temperature and rapid changes in thermal cycle, both of which are difficult to measure in real time. In this dissertation work, a 3-dimensional finite element model was developed for the analysis of the hybrid welding process, and Ansys Parametric Design Language (APDL) code was developed for it. The FEA results were validated against experimental results, showing good agreement. Hybrid welding simulations were carried out for an AISI 304 austenitic stainless steel plate. The effects of laser beam power, arc welding and torch angle on the weld-bead geometry, i.e. depth of penetration (DP) and welded zone width (BW), were investigated. The experimental plan was based on a three-factor, five-level central composite rotatable design. Second-order polynomial equations for predicting the weld-bead geometry were developed for bead width and depth of penetration. The results indicate that the effect of arc current (AC) on bead width was greater than on depth of penetration. The proposed models predict the responses adequately within the limits of the welding parameters used. Index Terms: Hybrid Welding, Ansys Parametric Design Language (APDL), FEA, AISI 304 Austenitic Stainless Steel, Central composite rotatable design
Implementation of cyclic convolution based on FNT (eSAT Journals)
This document describes an implementation of cyclic convolution based on the Fermat Number Transform (FNT). It proposes using a Code Convolution method Without Addition (CCWA) and a Butterfly Operation method Without Addition (BOWA) to perform the FNT and inverse FNT without carry propagation additions, except for the final stages. This reduces delay compared to other cyclic convolution architectures. A parallel architecture is presented using these techniques along with Modulo 2n+1 Partial Product Multipliers to perform the point-wise multiplications with less hardware complexity. Synthesis results demonstrate this architecture has better throughput and less hardware complexity than reported solutions.
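A direct (non-transform) cyclic convolution reduced modulo a Fermat number gives the reference result that an FNT-based architecture must reproduce; this sketch is a correctness baseline, not the CCWA/BOWA hardware scheme itself, and the modulus 257 = 2^8 + 1 is just one illustrative Fermat number.

```python
def cyclic_convolution_mod(x, h, q):
    """Length-N cyclic convolution of x and h, reduced modulo q.
    An FNT-based design computes the same result via transform-domain
    pointwise products, with q a Fermat number 2**n + 1."""
    n = len(x)
    return [sum(x[j] * h[(i - j) % n] for j in range(n)) % q
            for i in range(n)]

# convolving with a unit impulse returns the input unchanged
print(cyclic_convolution_mod([1, 2, 3, 4], [1, 0, 0, 0], 257))  # -> [1, 2, 3, 4]
```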
Oscillatory motion control of hinged body using controller (eSAT Journals)
Abstract Due to the technological revolution, there has been a change in the daily use of instruments and equipment, whether for leisure or as necessities of life. In the past a person needed the help of another person, but today's fast-paced life has restricted this helpful human nature. This project helps eliminate such dependence in certain cases. Oscillatory motion is very common everywhere, but its control has not yet been devised tactfully, so an attempt is made to automate it keeping in mind constraints such as cost, power consumption, safety, portability and ease of operation. A proper amalgamation of hardware and software makes the project flexible and robust. The repetitive, monotonous and continuous operation is made simple by the use of a PIC microcontroller. No existing prototype or research paper on this subject was found; this is probably the first of its type.
Study and comparative analysis of resonant frequency for microstrip fractal an... (eSAT Journals)
Abstract A new compact fractal patch antenna is designed based on fractal geometry. Based on the simulation results, the proposed antenna shows excellent size-reduction possibilities with good radiation performance for wireless communication applications. The change in resonant frequency with respect to the dielectric constant of the substrate is shown and discussed in this paper. The resonant frequencies for the designed antenna are 8.59 GHz, 9.2 GHz and 11.36 GHz for RT Duroid Rogers 6010, FR 4 and RT Duroid Rogers 5880 respectively. The S-parameter (S11) at the resonant frequencies is well below -10 dB. The far-field pattern and S11 of the proposed antenna are simulated and analyzed using CST Microwave Studio 2011.
Water quality index with missing parameters (eSAT Journals)
Abstract This paper presents efficient modifications to the formula for calculating the water quality index. The water quality index provides a single number which expresses overall water quality at a certain location and time based on several quality parameters. The objective of an index is to turn complex water quality data into information that is understandable and usable by the public. In this paper a formula is derived to calculate the water quality index when the numerical values of some of its quality parameters are missing. The standard formula to calculate the water quality index has nine water quality parameters: biochemical oxygen demand, dissolved oxygen, pH, nitrate, phosphate, faecal coliform, turbidity, total dissolved solids and temperature. Sometimes it becomes very difficult to find the values of all these parameters because of lack of time or failure in testing. In that case the formula with missing parameters helps us calculate the water quality index. Index Terms: Water quality index, q-values, weight factors, weighted mean.
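In the weighted-mean formulation, the missing-parameter idea reduces to dropping absent q-values and renormalizing the remaining weight factors; a minimal sketch (the q-values and weights here are illustrative, not the standard NSF parameter weights):

```python
def wqi(q_values, weights):
    """Weighted-mean water quality index. A missing parameter is passed
    as None; it is dropped and the remaining weights are renormalized,
    which is the spirit of the modification described above."""
    pairs = [(q, w) for q, w in zip(q_values, weights) if q is not None]
    total_w = sum(w for _, w in pairs)
    return sum(q * w for q, w in pairs) / total_w
```

Renormalizing keeps the index on the same 0-100 scale, so a reading with a failed test remains comparable to a complete one.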
A mathematical formulation of urban zoning selection criteria in a distribute... (eSAT Journals)
This document presents a mathematical formulation for zoning or partitioning a distributed service network into subnetworks. It describes various zoning criteria like demand equity, contiguity, compactness and avoiding enclaves that must be considered. An objective function is defined to minimize the maximum relative demand deviation between zones, subject to constraints. A pictorial example is provided to demonstrate the zoning process. The formulation allows a network to be optimally partitioned into zones based on defined criteria.
Seismic performance assessment of the torsional effect in asymmetric structur... (eSAT Journals)
Abstract In recent times we come across many structures which are irregular in shape; this type of structure cannot be avoided due to functional and architectural requirements. Such structures have an irregular distribution of centre of mass and centre of rigidity, which causes a torsional effect on the structure, one of the most important factors influencing seismic damage. Structures with an asymmetric distribution of mass and rigidity undergo torsional motions during earthquakes. To assess the torsional effect, in the present study four types of structures with varying eccentricity are subjected to Pushover Analysis and Non-Linear Time History Analysis. The performance of the structures is assessed as per the procedures prescribed in ATC-40 and FEMA-356. The analysis of the structural models is done in ETABS. The results show that structures with less eccentricity, and in the direction of column orientation, perform well; ductility, drift, and lateral displacement also depend on the eccentricity of the structures. Key Words: Symmetric Structure, Asymmetric Structure, Pushover Analysis, Non-Linear Time History Analysis.
Biodiesel as a blended fuel in compression ignition engines (eSAT Journals)
Abstract Fast depletion of fossil fuels, a rapid increase in the prices of petroleum products and harmful exhaust emissions from engines have jointly created renewed interest among researchers in finding suitable alternative fuels. The literature survey shows that the yield of Hibiscus Cannabinus seeds per hectare is about 800 kg and the oil yield is about 120 litres (15-18% yield). Its by-products, fiber and cake, find widespread applications. It is found that the physical and chemical properties of Hibiscus Cannabinus oil biodiesel are very close to those of diesel. The authors have conducted experimental tests on a single cylinder diesel engine using Hibiscus Cannabinus oil biodiesel and diesel blended fuel. Performance parameters such as thermal efficiency, brake specific fuel consumption, fuel-air ratio and smoke are determined through experimentation. The authors conclude that Hibiscus Cannabinus oil biodiesel could be used as an alternative fuel in blended form.
Keywords: Blended fuel, Diesel engine, Exhaust emissions, Hibiscus Cannabinus oil biodiesel
Implementation of low power divider techniques using radix (eSAT Journals)
This document summarizes a research paper on implementing low power divider techniques using radix-8 division. It describes the design of a radix-8 divider that aims to reduce energy dissipation without penalizing latency. The algorithm and implementation of the divider are explained. Several low power techniques are then applied, including switching off inactive blocks, retiming the recurrence, reducing transitions in multiplexers, changing the redundant representation, partitioning the selection function, and modifying the conversion and rounding process. Applying these techniques can reduce energy dissipation by up to 70% compared to a standard implementation. The performance of the radix-8 divider is also compared to a radix-4 divider and one using overlapped radix-2 stages.
A survey on hiding user privacy in location based services through clustering (eSAT Journals)
Abstract Smartphones are becoming more and more popular as technology evolves. Smartphones are capable of providing location-aware services like GPS, and they share all location information with a central location server. When a user submits a query, the query also carries some of the user's personal information. This query and information are then submitted to the LBS server, where the information is not kept confidential; someone could use it to alarm the user. To overcome this, we propose a new collaborative approach to hide the user's personal data from the LBS server. Our approach does not require changes to the architecture of the LBS server, nor does it use a third-party server. Instead, other users' devices are used to forward queries, so that the querying user remains hidden from the LBS server. Keywords: Mobile networks, location-based services, location privacy, Bayesian inference attacks, epidemic models
Reusing of glass powder and industrial waste materials in concrete (eSAT Journals)
Abstract A huge amount of concrete is consumed in construction work. A good quality concrete is a mix of cement, fine and coarse aggregates, water and admixtures as needed to obtain optimum quality and economy. In this study, investigations were carried out on the compressive strength, split tensile strength and water absorption of M-40 grade concrete mixes with a constant 20% replacement of cement by waste glass powder and partial replacement of fine aggregate by waste foundry sand. The test results show that strength gains are very low on the 7th and 14th days but increase by the 28th day. The highest strength values were found at the 40% replacement level. Keywords: waste glass powder, waste foundry sand, eco-friendly, concrete mix.
An understanding of graphical perception (eSAT Journals)
Abstract Graphics was a two-dimensional notion till the nineteenth century. It was only in the twentieth century, during the modern movement, that graphics emerged as a three-dimensional tool. During the same period architecture also had a change in its definition and there was a breakthrough in the design industry. Colour, which was earlier used as a superficial element for decoration, now started being used as a design element of the third dimension in architecture. When designing with colour, therefore, it must be considered in all its aspects, which are related to one another, so that in virtually any work in which colour plays a role all of them are taken into account. Colours have a profound effect on mind and emotion and influence our mood and feelings. Colour can reinforce ideas of form and material and express its divisions and proportions. Colour can underline an architectural statement; it can punctuate, direct and focus attention and compensate in a way unlike any other abstract element. This paper discusses how the graphical use of colour can create a statement in architecture.
Effect of differential settlement on frame forces a parametric study (eSAT Journals)
Abstract It has been well established that packed bed solar collectors perform better than conventional collectors. Results of performance studies on packed bed solar collectors are available in the literature, but different operating conditions have been considered, which makes it difficult to compare their performance accurately. Considering this, a comparative study of the performance of a solar collector with different packing elements has been made in the present work. Experimental investigations on a solar collector packed with iron chips, wire mesh, gravels and glass balls, for the same set of operating parameters, have been done on a single setup to study the effect of packing material and its geometry on the thermal efficiency of the packed bed collector. Iron chips are identified as the best packing material among those selected, leading to a thermal efficiency of 76.21% for a mass flow rate of 0.035 kg/s and a porosity of 0.945, which is 69.58% higher than the smooth collector. The thermal efficiency of the wire-mesh packed collector under similar operating conditions is found to be 74.26%, which is 65.24% higher than the smooth collector. In the low porosity range, the gravel packed collector performs better than glass ball packing. The effect of mass flow rate on effective efficiency has also been studied for the various packing elements. Based on the experimental results, plots of efficiency against temperature-rise parameters have been drawn for the different packing elements, which designers can use to choose the correct mass flow rate for a specific temperature-rise application. Key Words: Solar Collector, Iron Chips, Wire Mesh, Gravels, Glass Balls, Packed Bed.
Critical analysis of radar data signal de-noising by implementation of haar w... (eSAT Journals)
Abstract Data signal de-noising is a crucial part of every transaction which involves capturing waves and processing them into understandable format. Most essentially, this process of data signal de-noising is important in the field of military and defence. Because of the rapid development in military communication systems, a limitation arises in the systems. The limitation being - quantity of signals profoundly increases along with noise and other disturbances’ presence in the signal. In order to overcome this drawback, batch de-noising techniques are implemented on the systems to produce clean signals. In this paper, the critical analysis case of RADAR [RAdio Detection And Ranging] data de-noising by implementation of Haar Wavelet Transformation is considered. A signal set simulation containing four signals procured from RADAR data is taken into consideration. The signals are then subjected to de-noising in MATLAB and indigenously developed python tool using Haar Wavelet Transformations. The ensuing results pertaining to accuracy and efficiency of output signals are then compared. Keywords: RADAR data, MATLAB tool, Python, De-noising techniques, Haar Wavelet Transformation, Signal Simulation.
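One level of the Haar transform and a hard-threshold denoise can be sketched in a few lines; the scaling (pairwise averages and half-differences) and the threshold value are illustrative choices, not the exact pipeline of the paper's MATLAB and Python tools.

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages (approximation)
    and half-differences (detail), chosen so inversion is trivial.
    Assumes an even-length signal."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_denoise(signal, threshold):
    """Zero small detail coefficients (assumed noise), then invert:
    each pair is reconstructed as (a + d, a - d)."""
    approx, detail = haar_step(signal)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out
```

Large details (genuine edges and pulses in the radar return) survive the threshold, while small pairwise fluctuations are smoothed away.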
Survey on cloud computing security techniques (eSAT Journals)
This document summarizes techniques for ensuring security in cloud computing. It discusses different cloud deployment models and types of threats to cloud security. It then analyzes several intrusion detection and prevention system techniques for providing data security in cloud computing, including proof of retrievability models and cryptographic storage services. It compares the advantages and disadvantages of techniques proposed by Juels, Shacham, Bowers, and Kamara regarding their ability to verify data integrity and availability while tolerating security breaches.
Harmonized scheme for data mining technique to progress decision support syst... (eSAT Journals)
Abstract Decision Support System (DSS) is often treated as a synonym for management information systems (MIS). Decision support systems also include decisions made upon individual data from external sources, management judgment, and various other data sources not included in business intelligence. They serve as an integrated repository for internal and external data intelligence critical to understanding and evaluating the business within its environmental context. Data mining has emerged to meet this need. With the addition of models, analytic tools, and user interfaces, such systems have the potential to provide actionable information that supports effective problem and opportunity identification, critical decision-making, and strategy formulation, implementation, and evaluation. The proposed system will support top-level management in making good decisions at any time under any uncertain environment. Keywords: DSS, DM, MIS, Clustering, Classification, Association Rule, K-Means, OLAP, Matlab
Performance evaluation and emission analysis of 4-S I.C. engine using ethan... (eSAT Journals)
This document evaluates the performance and emissions of a single cylinder diesel engine running on ethanol-biodiesel-diesel blends. The key findings are:
1) Engine tests were conducted at varying loads using blends of D80B15E5, D70B20E10, and D70B25E5 and results were compared to neat diesel.
2) The D70B20E10 blend showed lower CO and HC emissions than other blends and slightly higher thermal efficiency than diesel.
3) Biodiesel's higher density and lower calorific value led to slightly higher fuel consumption for blends compared to diesel.
A study on modelling and simulation of photovoltaic cells (eSAT Journals)
Abstract This paper presents a detailed study of the types of PV panel models used for simulation studies. The main concern of this study is to analyze the results and compare them under standard test conditions. PV systems are generally integrated with specific control algorithms in order to extract the maximum possible power. Hence it is highly imperative that the Maximum Power Point (MPP) is reached effectively, and thus we need a model from which the MPPT algorithm can be realized efficiently. Other parameters should also be taken into account when finding the best model for use in simulation, and it is very important to choose the appropriate model based on the application. The models studied in this paper include the single diode model, the two diode model and Simscape modelling. MATLAB/Simulink presents a powerful tool to study such systems. The work tests the accuracy of the models under different temperature and irradiance conditions. The two diode model is known to have better accuracy at low irradiance levels, which allows a more accurate prediction of PV system performance. Simscape, part of the Simulink environment, has a solar cell block that makes building a PV model straightforward, with much easier programming and full demonstration of all system details. On the basis of the study, the best model for simulation purposes can be selected. It is envisaged that the work will be very useful for professionals who require simple and accurate PV simulators for their designs. All the systems here are modeled and simulated in the MATLAB/Simulink environment. Keywords: PV cell, STC, MATLAB Simulink, Ideality Factor
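The single diode model named above reduces, when series and shunt resistances are neglected for brevity, to one explicit equation; all parameter values below are illustrative placeholders, not fitted panel data from the paper.

```python
import math

def pv_current(v, i_ph=5.0, i_0=1e-9, n=1.3, t=298.15, n_s=36):
    """Simplified single-diode PV model (Rs and Rsh neglected):
    I = Iph - I0 * (exp(V / (n * Ns * Vt)) - 1), with Vt = kT/q.
    i_ph: photocurrent [A], i_0: diode saturation current [A],
    n: ideality factor, n_s: series-connected cells (illustrative)."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = k * t / q  # thermal voltage, about 25.7 mV at 25 C
    return i_ph - i_0 * (math.exp(v / (n * n_s * vt)) - 1.0)
```

At V = 0 the model returns the photocurrent (short-circuit current), and the current falls off exponentially as the voltage approaches open circuit, giving the familiar I-V knee that MPPT algorithms track.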
This document summarizes a research paper that proposes using parallel concatenated turbo codes in wireless sensor networks in an adaptive way. The key points are:
1) Turbo codes can achieve near-Shannon limit performance but decoding is complex, making them difficult to implement on energy-constrained sensor nodes.
2) The proposed approach shifts the complex turbo decoding to the base station while sensor nodes implement encoding and basic error correction.
3) At sensor nodes, a parallel concatenated convolutional code (PCCC) circuit encodes data and detects/corrects errors in forwarded packets. This improves energy efficiency and reliability over the wireless sensor network.
A Case Study on IP Based CDMA RAN by Controlling Router (IJERA Editor)
As communication plays an important role in day-to-day life, effective and efficient data transmission must be maintained. This paper implements a congestion control mechanism using a router control method for IP-RAN on a CDMA cellular network. The router control mechanism uses the features of CDMA networks with an active queue management technique to reduce delay and minimize correlated losses. When utilizing these new personal tools and services to enrich our lives while being mobile, we are using mobile multimedia applications. As new handsets, new technologies and new business models are introduced in the marketplace, new attractive multimedia services can and will be launched, fulfilling the demands. Because the number of multimedia services, and even more so the contexts in which the services are used, is large, the following model is introduced in order to simplify and clarify how different services will evolve, enrich our lives and fulfill our desires. The proposed work is realized using the Matlab platform.
The document discusses channel estimation techniques for wireless communication systems. It describes that channel estimation is important for mobile wireless networks as the channel changes over time due to movement. It summarizes that channel estimation algorithms allow the receiver to approximate the time-varying channel impulse response. The document then compares two popular channel estimation algorithms - LMS and RLS. LMS uses estimates of the gradient vector to iteratively minimize the mean square error, while RLS recursively finds filter coefficients to minimize a weighted linear least squares cost function.
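The LMS update described above, a stochastic-gradient step on the instantaneous squared error, can be sketched as a short channel-identification loop; the tap count, step size, and training signal are illustrative assumptions, not values from the document.

```python
def lms_identify(x, d, taps=2, mu=0.05, epochs=200):
    """Adapt 'taps' FIR weights so the filter output tracks the desired
    signal d, using the classic LMS rule w <- w + mu * e * x."""
    w = [0.0] * taps
    for _ in range(epochs):
        for i in range(taps - 1, len(x)):
            window = x[i - taps + 1:i + 1][::-1]  # most recent sample first
            e = d[i] - sum(wi * xi for wi, xi in zip(w, window))
            w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w

# identify a hypothetical 2-tap channel h = [0.8, 0.3] from noiseless data
x = [1.0, -1.0, 2.0, -2.0, 0.5, 1.5, -0.5, 2.0, -1.0, 1.0]
d = [0.8 * x[i] + (0.3 * x[i - 1] if i > 0 else 0.0) for i in range(len(x))]
w = lms_identify(x, d)
```

RLS would converge in far fewer updates by minimizing the weighted least-squares cost recursively, at the price of a per-step matrix update; the gradient-only LMS loop above is the cheaper of the two.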
Abstract Data can get lost, reordered or duplicated due to the presence of routers and limited buffer space over the unreliable channel in conventional networks. The data link layer deals with frame formation, flow control, error control, addressing and link management; all such functions are performed by data link protocols. The sliding window protocol detects and corrects errors if the received data have enough redundant bits, or otherwise requests a retransmission of the data. The paper shows the working of this duplex data link protocol. Keywords: ACK, GOBACK, ARQ, NACK.
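The retransmission side of such a sliding window protocol, the Go-Back-N behaviour where everything after a lost frame is resent, can be sketched as follows; the event model (a set of frame indices, each lost exactly once) is a simplifying assumption for illustration.

```python
def go_back_n(frames, window, lost):
    """Go-Back-N sender sketch: on a lost frame the receiver discards
    everything after it, so the sender backs up and resends from the
    first unacknowledged frame. Returns the transmission log.
    lost: set of frame indices whose first transmission fails."""
    log, base = [], 0
    while base < len(frames):
        sent = frames[base:base + window]
        for k, f in enumerate(sent):
            log.append(f)
            if base + k in lost:
                lost.discard(base + k)  # assume the retry succeeds
                base = base + k         # go back: resend from the lost frame
                break
        else:
            base += len(sent)           # whole window acknowledged
    return log
```

With frames 0..3, a window of 2, and frame 1 lost, the log is [0, 1, 1, 2, 3]: frame 1 is transmitted twice while everything acknowledged before it is not.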
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document summarizes a research paper that proposes a new routing algorithm called Quadrant-Based Directional Routing (Q-DIR) for multihop wireless networks. Q-DIR is implemented as a cross-layer with Contention Window Adaptation Mechanism (CWAM) to reduce network power consumption and increase throughput. Q-DIR limits flooding to the quadrant containing the source and destination nodes. CWAM adapts the contention window size based on node traffic to improve throughput. Simulation results show that Q-DIR with CWAM outperforms standard flooding protocols by utilizing fewer nodes and increasing throughput while reducing power consumption.
Analysis of MAC protocol for Cognitive Radio Wireless Sensor Network (CR-WSN) (IRJET Journal)
This document analyzes MAC protocols for cognitive radio wireless sensor networks (CR-WSNs). It begins with an introduction to cognitive radio and how combining it with wireless sensor networks can yield new networking capabilities. It then discusses various MAC protocol layers and focuses on the COGMAC protocol. COGMAC is a decentralized cognitive MAC protocol based on the multichannel preamble reservation scheme. The document outlines COGMAC's advantages over conventional MAC protocols and introduces an upgraded version called COGMAC+. COGMAC+ uses adaptive energy detection and random backoff schemes to improve performance. Finally, the document summarizes that cognitive radio can improve spectrum utilization and quality in sensor networks by exploiting multiple channel availability and overcoming issues from dense deployments.
Study on Performance of Simulation Analysis on Multimedia Network - IRJET Journal
This document summarizes a study that simulated voice communication over wired networks using the NS-2 network simulator. The study modeled VoIP traffic between nodes using the SCTP protocol and added background traffic to evaluate its effects. Key findings from the simulation included:
1) Average latency was 0.98 seconds and 98 packets were dropped, indicating degraded performance when background traffic was added.
2) Average jitter (packet delay variation) was calculated to be 0.006 seconds, showing instability in the network with changing traffic patterns.
3) A graph of latency over time demonstrated increased delays and bottlenecks as background traffic overloaded network links.
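Latency and jitter figures like those above are typically derived from per-packet timestamps. A minimal sketch, assuming jitter is computed as the mean absolute variation between consecutive one-way delays (the study's exact formula is not stated):

```python
def delay_metrics(send_times, recv_times):
    """Average one-way latency and mean packet delay variation (jitter)."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    avg_latency = sum(delays) / len(delays)
    # jitter: mean absolute difference between consecutive delays
    variations = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter = sum(variations) / len(variations)
    return avg_latency, jitter
```

For example, four packets sent at 0, 1, 2, 3 s and received at 0.10, 1.12, 2.10, 3.14 s give an average latency of 0.115 s and a jitter of about 0.027 s.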
Quadrant Based DIR in CWin Adaptation Mechanism for Multihop Wireless Network - IJCI JOURNAL
In Multihop Wireless Networks, the traffic forwarding capability of each node varies according to its level of contention. Each node can yield its channel access opportunity to its neighbouring nodes, so that all the nodes can evenly share the channel and have similar forwarding capability. In this manner the wireless channel is utilized effectively, which is achieved using the Contention Window Adaptation Mechanism (CWAM). This mechanism achieves a higher end-to-end throughput but consumes network power to a higher degree. So, a newly proposed algorithm, the Quadrant-Based Directional Routing Protocol (Q-DIR), is implemented as a cross-layer with CWAM to reduce total network power consumption through limited flooding and also to reduce routing overheads, which eventually increases overall network throughput. This algorithm limits the broadcast region to the quadrant where the source and destination nodes are located. Implementation of the algorithm is done in the Linux-based NS-2 simulator.
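The quadrant-limited flooding idea can be sketched geometrically. This hypothetical helper (the names and the rebroadcast rule are my own reading of the abstract, not code from the paper) decides whether a neighbour should rebroadcast:

```python
def quadrant(origin, point):
    """Quadrant (1-4) of `point` relative to `origin` in the x-y plane."""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    if dx >= 0 and dy >= 0:
        return 1
    if dx < 0 and dy >= 0:
        return 2
    if dx < 0 and dy < 0:
        return 3
    return 4

def should_rebroadcast(source, destination, node):
    """Flood only within the quadrant (relative to the source) that
    contains the destination, as in Q-DIR's limited flooding."""
    return quadrant(source, node) == quadrant(source, destination)
```

Nodes outside the source-destination quadrant simply stay silent, which is how the broadcast region, and hence power consumption, is reduced.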
Performance Evaluation of a Layered WSN Using AODV and MCF Protocols in NS-2 - csandit
This document summarizes a study that compares the performance of two routing protocols, AODV and MCF, in a layered wireless sensor network (WSN) using the network simulator NS-2. It first provides background on AODV, describing how it establishes and maintains routes. It then describes the MCF protocol, which formulates lightpath routing as an integer linear program to minimize the impact of fiber failures. The document outlines how both protocols were implemented in NS-2 and compares their performance based on metrics like throughput, packet loss, and end-to-end delay. The simulation results show that MCF generally has better throughput and reliability than AODV in the scenario of an 80-node WSN.
IRJET- Performance Improvement of Wireless Network using Modern Simulation Tools - IRJET Journal
This document summarizes a research study that used the ns-3 network simulator to analyze the performance of two routing protocols - Optimized Link State Routing (OLSR) and Adhoc On-demand Distance Vector (AODV) - in a wireless ad hoc network under different conditions. The study varied parameters like packet size, number of nodes, and hello interval (the frequency at which routing information is broadcast) and measured metrics like throughput, delay, jitter, packet delivery ratio, packet loss, and congestion window. The results showed how the performance of the two protocols was impacted by changes to these parameters. The goal was to better understand congestion control and avoidance in wireless ad hoc networks through simulation.
Modified PREQ in HWMP for Congestion Avoidance in Wireless Mesh Network - IRJET Journal
This document summarizes a proposed technique called Modified PREQ in HWMP for Congestion Avoidance in Wireless Mesh Networks. The technique aims to determine congested paths using CCNF frames and provide rerouting to less congested paths before congestion occurs to reduce burden on congested nodes. It allows continued packet transmission on congested paths until a rerouting path is found during congestion scenarios. When a packet is transmitted on the new path, the previous path is deleted to avoid further delay. Sequence numbers are used to avoid flooding the network. The paper compares this Modified PREQ technique to other congestion avoidance techniques to improve throughput and average delay using an NS-3 simulator.
EFFICIENT MULTI-PATH PROTOCOL FOR WIRELESS SENSOR NETWORKS - ijwmn
Wireless sensor networks are useful for streaming multimedia in infrastructure-free and hazardous environments. However, these networks are quite different from their wired counterpart and are composed of nodes with constrained bandwidth and energy. Multiple-path transmission is one of the methods for ensuring QoS routing in both wired and wireless environments. Directed diffusion, a well-known wireless sensor network protocol, only routes packets through a single path, which barely meets the throughput requirement of multimedia data. Instead, we propose a multipath algorithm based on directed diffusion that reinforces multiple routes with high link quality and low latency. This algorithm retains the merits of the original directed diffusion algorithm, including its energy efficiency and scalability. A hybrid metric of link quality and latency is used as the criterion for path selection. In order to select disjoint paths, we propose a scheme for reinforced nodes to respond negatively to multiple reinforcement messages. We use the NS-2 simulation tool with video traces generated by Multiple Description Coding (MDC) to evaluate the performance. The results show that our algorithm gives better throughput and delay performance, i.e., higher video quality, than standard directed diffusion transmitting over a single path, with low overheads and energy consumption.
Turbo codes are error-correcting codes with performance close to the Shannon theoretical limit. The motivation for using turbo codes is that they are an appealing mix of a random appearance on the channel and a physically realizable decoding structure. Communication systems face the problems of latency, fast switching, and reliable data transfer. The objective of the research paper is to design a turbo encoder and decoder hardware chip and analyze its performance. In the turbo encoder, two convolutional codes are concatenated in parallel and separated by an interleaver (permuter). The data received from the channel is decoded iteratively using the two constituent decoders: in each cycle, soft (probabilistic) information about each bit of the decoded sequence is passed from one elementary decoder to the other, and this information is updated regularly. The performance of the chip is also verified using the maximum a posteriori (MAP) method in the decoder chip. The performance of the field-programmable gate array (FPGA) hardware is evaluated using hardware and timing parameters extracted from Xilinx ISE 14.7. Parallel concatenation offers a better overall rate for the same component code performance, along with reduced delay, low hardware complexity, and support for higher frequencies.
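The parallel concatenation described above can be sketched as follows. This toy uses a running-parity accumulator in place of a real recursive systematic convolutional constituent code, so it illustrates only the structure (systematic bits plus two parity streams separated by an interleaver), not real turbo performance; all names are illustrative:

```python
import random

def accumulator_parity(bits):
    """Toy rate-1 recursive 'encoder': running parity of the input bits."""
    state, out = 0, []
    for b in bits:
        state ^= b
        out.append(state)
    return out

def turbo_encode(bits, seed=42):
    """Parallel concatenation: the same information bits feed one
    constituent encoder directly and a second one through an interleaver."""
    order = list(range(len(bits)))
    random.Random(seed).shuffle(order)         # pseudo-random interleaver
    interleaved = [bits[i] for i in order]
    parity1 = accumulator_parity(bits)         # first constituent encoder
    parity2 = accumulator_parity(interleaved)  # second, on permuted bits
    # systematic output: info bits plus two parity streams (rate 1/3)
    return bits, parity1, parity2
```

Note the rate-1/3 shape: for every information bit, the encoder emits the bit itself and one parity bit from each constituent encoder, which is why the paper highlights the "better global rate" of parallel over serial concatenation.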
This document summarizes a research paper that proposes a novel cross-layer routing technique for mobile ad hoc networks. The technique calculates both signal strength and node mobility to select the most efficient and stable path for data transmission. It aims to improve on traditional ad hoc routing protocols like AODV by considering both link quality metrics from the physical layer (signal strength) and node mobility. The proposed method selects routes based on signal strength if mobility is high, and on traditional hop count if mobility is low, in order to find paths that reduce link failure and improve throughput.
The improvement of end to end delays in network management system using netwo... - IJCNCJournal
The document summarizes research on improving end-to-end delays in a network management system using network coding. Specifically, it applies network coding to manage radio and television broadcast stations in a wireless network. The study shows that a proposed "Fast Forwarding Strategy" using network coding outperforms a classical routing strategy in reducing end-to-end delays from source to destination. It analyzes end-to-end delays theoretically using network calculus and conducts a practical study on a network of broadcast stations, finding the proposed strategy reduces delays compared to the classical strategy.
FREQUENCY AND TIME DOMAIN PACKET SCHEDULING BASED ON CHANNEL PREDICTION WITH ... - ijwmn
1) The document discusses packet scheduling algorithms for LTE downlink systems that must operate under imperfect channel quality information (CQI) due to errors, delays, and other issues.
2) It proposes a new packet scheduling algorithm that uses a Kalman filter-based channel predictor in the frequency domain to estimate the true CQI from erroneous feedback, combined with a time domain grouping technique using proportional fair and modified largest weighted delay first algorithms.
3) Simulation results showed this approach achieves better performance than existing algorithms in terms of system throughput and packet loss ratio under imperfect CQI conditions.
Improving quality of service using ofdm technique for 4th generation network - eSAT Journals
This document summarizes a research paper that compares the performance of 32QAM and 64QAM digital modulation techniques when used with OFDM for 4G networks. It finds that 32QAM has better performance with lower bit and packet loss over 64QAM. Specifically, when transmitting 1920 bits over an AWGN channel, 32QAM had 65 bit losses and 0 packet losses, while 64QAM had 80 bit losses and 0.04167 packet losses. Therefore, the document concludes 32QAM can be more efficiently used than 64QAM for digital transmission in 4G networks when combined with OFDM modulation.
Similar to "Performance Analysis of AL-FEC Raptor Code over 3GPP eMBMS Network"
Mechanical properties of hybrid fiber reinforced concrete for pavements - eSAT Journals
Abstract
The effect of addition of mono fibers and hybrid fibers on the mechanical properties of concrete mixture is studied in the present
investigation. Steel fibers of 1% and polypropylene fibers 0.036% were added individually to the concrete mixture as mono fibers and
then they were added together to form a hybrid fiber reinforced concrete. Mechanical properties such as compressive, split tensile and
flexural strength were determined. The results show that hybrid fibers improve the compressive strength marginally as compared to
mono fibers, whereas hybridization improves split tensile strength and flexural strength noticeably.
Keywords: Hybridization, mono fibers, steel fiber, polypropylene fiber, improvement in mechanical properties.
Material management in construction – a case study - eSAT Journals
Abstract
The objective of the present study is to understand about all the problems occurring in the company because of improper application
of material management. In construction project operation, often there is a project cost variance in terms of the material, equipments,
manpower, subcontractor, overhead cost, and general condition. Material is the main component in construction projects. Therefore,
if the material management is not properly managed it will create a project cost variance. Project cost can be controlled by taking
corrective actions towards the cost variance. Therefore, a methodology to diagnose and evaluate the procurement process involved in material management and to launch continuous improvement was developed and applied. A thorough study was carried out along with case studies, surveys, and interviews with professionals involved in this area. As a result, a methodology for diagnosis and improvement was proposed and tested in selected projects. The results obtained show that the main problems of procurement are related to schedule delays and lack of specified quality for the project. To prevent this situation it is often necessary to dedicate important resources, such as money, personnel, and time, to monitoring and controlling the process. A great potential for improvement was detected when state-of-the-art technologies such as electronic mail, electronic data interchange (EDI), and analysis tools were applied to the procurement process. These helped to eliminate the root causes of many types of problems that were detected.
Managing drought short term strategies in semi arid regions a case study - eSAT Journals
Abstract
Drought management needs multidisciplinary action. Interdisciplinary efforts among experts in various fields in the drought-prone areas are helpful to achieve a tangible and permanent solution to this recurring problem. The Gulbarga district has a total area of around 16,240 sq. km and accounts for 8.45 per cent of the Karnataka state area. The district is situated at latitude 17º 19' 60" North and longitude 76º 49' 60" East, entirely on the Deccan plateau, at a height of 300 to 750 m above MSL. With its sub-tropical, semi-arid climate, it is one among the drought-prone districts of Karnataka State. Drought management is very important for a district like Gulbarga. In this paper various short term strategies are discussed to mitigate the drought condition in the district.
Keywords: Drought, South-West monsoon, Semi-Arid, Rainfall, Strategies etc.
Life cycle cost analysis of overlay for an urban road in bangalore - eSAT Journals
Abstract
Pavements are subjected to severe condition of stresses and weathering effects from the day they are constructed and opened to traffic
mainly due to its fatigue behavior and environmental effects. Therefore, pavement rehabilitation is one of the most important
components of entire road systems. This paper highlights the design of concrete pavement with added mono fibers like polypropylene,
steel and hybrid fibres for a widened portion of existing concrete pavement and various overlay alternatives for an existing
bituminous pavement in an urban road in Bangalore. Along with this, Life cycle cost analyses at these sections are done by Net
Present Value (NPV) method to identify the most feasible option. The results show that though the initial cost of construction of the
concrete overlay is high, over a period of time it proves to be better than the bituminous overlay considering the whole life cycle cost.
The economic analysis also indicates that, out of the three fibre options, hybrid reinforced concrete would be economical without
compromising the performance of the pavement.
Keywords: - Fatigue, Life cycle cost analysis, Net Present Value method, Overlay, Rehabilitation
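The NPV comparison above works by discounting each alternative's cost stream to the present and picking the cheaper one over the full analysis period. A minimal sketch, with made-up cost figures and discount rate (the paper's actual unit costs are not reproduced here):

```python
def npv(cash_flows, rate):
    """Present value of a cost stream; cash_flows[t] occurs at end of year t
    (t = 0 is the initial construction cost, undiscounted)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# hypothetical example: concrete overlay costs more up front but far less
# in maintenance, mirroring the paper's qualitative conclusion
bituminous = [100] + [20] * 10   # initial cost + yearly maintenance
concrete = [180] + [5] * 10
```

With a discount rate of 8% the concrete stream has the lower NPV despite its higher initial cost, which is the kind of whole-life comparison the study performs.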
Laboratory studies of dense bituminous mixes ii with reclaimed asphalt materials - eSAT Journals
Abstract
The growing demand on our nation's roadways over the past couple of decades, decreasing budgetary funds, and the need to provide a safe, efficient, and cost-effective roadway system have led to a dramatic increase in the need to rehabilitate our existing pavements and to build sustainable road infrastructure in India. These needs are among today's burning issues and have become the purpose of the study.
In the present study, samples of existing bituminous layer materials were collected from the NH-48 (Devahalli to Hassan) site. The mixtures were designed by the Marshall Method as per the Asphalt Institute (MS-II) at 20% and 30% Reclaimed Asphalt Pavement (RAP). RAP material was blended with virgin aggregate such that all specimens were tested for the Dense Bituminous Macadam-II (DBM-II) gradation as per the Ministry of Roads, Transport, and Highways (MoRT&H), and cost analysis was carried out to assess the economics. Laboratory results and analysis showed that the use of recycled materials led to significant variability in Marshall stability, and the variability increased with the increase in RAP content. Savings can be realized from the utilization of recycled materials as per the methodology: the reduction in total cost is 19% and 30%, respectively, compared with the virgin mixes.
Keywords: Reclaimed Asphalt Pavement, Marshall Stability, MS-II, Dense Bituminous Macadam-II
Laboratory investigation of expansive soil stabilized with natural inorganic ... - eSAT Journals
This document summarizes a study on stabilizing expansive black cotton soil with the natural inorganic stabilizer RBI-81. Laboratory tests were conducted to evaluate the effect of RBI-81 on the soil's engineering properties. The tests showed that with 2% RBI-81 and 28 days of curing, the unconfined compressive strength increased by around 250% and the CBR value improved by approximately 400% compared to the untreated soil. Overall, the study found that RBI-81 effectively improved the strength properties of the black cotton soil and its suitability as a soil stabilizer was supported.
Influence of reinforcement on the behavior of hollow concrete block masonry p... - eSAT Journals
Abstract
Reinforced masonry was developed to exploit the strength potential of masonry and to solve its lack of tensile strength. Experimental
and analytical studies have been carried out to investigate the effect of reinforcement on the behavior of hollow concrete block
masonry prisms under compression and to predict ultimate failure compressive strength. In the numerical program, a three-dimensional non-linear finite element (FE) model based on the micro-modeling approach is developed for both unreinforced and reinforced
masonry prisms using ANSYS (14.5). The proposed FE model uses multi-linear stress-strain relationships to model the non-linear
behavior of hollow concrete block, mortar, and grout. Willam-Warnke’s five parameter failure theory has been adopted to model the
failure of masonry materials. The comparison of the numerical and experimental results indicates that the FE models can successfully
capture the highly nonlinear behavior of the physical specimens and accurately predict their strength and failure mechanisms.
Keywords: Structural masonry, Hollow concrete block prism, grout, Compression failure, Finite element method,
Numerical modeling.
Influence of compaction energy on soil stabilized with chemical stabilizer - eSAT Journals
This document summarizes a study on the influence of compaction energy on soil stabilized with a chemical stabilizer. Laboratory tests were conducted on locally available loamy soil treated with a patented polymer liquid stabilizer and compacted at four different energy levels. The study found that increasing the compaction effort increased the density of both untreated and treated soil, but the rate of increase was lower for stabilized soil. Treating the soil with the stabilizer improved its unconfined compressive strength and resilient modulus, and reduced accumulated plastic strain, with these properties further improved by higher compaction efforts. The stabilized soil exhibited strength and performance benefits compared to the untreated soil.
Geographical information system (gis) for water resources management - eSAT Journals
This document describes a hydrological framework developed in the form of a Hydrologic Information System (HIS) to meet the information needs of various government departments related to water management in a state. The HIS consists of a hydrological database coupled with tools for collecting and analyzing spatial and non-spatial water resources data. It also incorporates a hydrological model to indirectly assess water balance components over space and time. A web-based GIS portal was created to allow users to access and visualize the hydrological data, as well as outputs from the SWAT hydrological model. The framework is intended to facilitate integrated water resources planning and management across different administrative levels.
Forest type mapping of bidar forest division, karnataka using geoinformatics ... - eSAT Journals
Abstract
The study demonstrates the potential of satellite remote sensing techniques for generating baseline information on forest types, including tree plantation details, in the Bidar forest division, Karnataka, covering an area of 5814.60 sq. km. Analysis of the satellite data in the study area reveals that about 84% of the total area is covered by crop land, 1.778% by dry deciduous forest, and 1.38% by mixed plantation, which is very threatening to the environmental stability of the forest; future plantation sites have been mapped. With the use of the latest geo-informatics technology, the proper and exact condition of the trees can be observed and necessary precautions can be taken for future plantation works in an appropriate manner.
Keywords: RS, GIS, GPS, Forest Type, Tree Plantation
Factors influencing compressive strength of geopolymer concrete - eSAT Journals
Abstract
The effects of several factors on the properties of fly ash based geopolymer concrete, in particular its compressive strength, were studied, along with a cost comparison with normal concrete. The test variables were the molarity of sodium hydroxide (NaOH) at 8M, 14M and 16M; the ratio of NaOH to sodium silicate (Na2SiO3) at 1, 1.5, 2 and 2.5; the alkaline liquid to fly ash ratio at 0.35 and 0.40; and replacement of water in the Na2SiO3 solution by 10%, 20% and 30%. The test results indicated that the highest compressive strength of 54 MPa was observed for 16M NaOH, a NaOH to Na2SiO3 ratio of 2.5, and an alkaline liquid to fly ash ratio of 0.35. The lowest compressive strength of 27 MPa was observed for 8M NaOH, a NaOH to Na2SiO3 ratio of 1, and an alkaline liquid to fly ash ratio of 0.40. At an alkaline liquid to fly ash ratio of 0.35, water replacement of 10% and 30% for 8M and 16M NaOH resulted in compressive strengths of 36 MPa and 20 MPa respectively. A superplasticiser dosage of 2% by weight of fly ash gave higher strength in all cases.
Keywords: compressive strength, alkaline liquid, fly ash
Experimental investigation on circular hollow steel columns in filled with li... - eSAT Journals
Abstract
Composite circular hollow steel tubes, with and without GFRP infill, for three different grades of lightweight concrete were tested for ultimate load capacity and axial shortening under cyclic loading. Steel tubes were compared for different lengths, cross sections and thicknesses. Specimens were tested separately after adopting Taguchi's L9 (Latin Squares) orthogonal array in order to save the initial experimental cost on the number of specimens and the experimental duration. Analysis was carried out using the ANN (Artificial Neural Network) technique with the assistance of Minitab, a statistical software tool. A comparison of predicted, experimental and ANN output is obtained from linear regression plots. From this research study, it can be concluded that the cross-sectional area of the steel tube has the most significant effect on ultimate load carrying capacity; that load carrying capacity decreased as the length of the steel tube increased; and that ANN modeling predicted acceptable results. Thus the ANN tool can be utilized for predicting the ultimate load carrying capacity of composite columns.
Keywords: Light weight concrete, GFRP, Artificial Neural Network, Linear Regression, Back propagation, orthogonal
Array, Latin Squares
Experimental behavior of circular hsscfrc filled steel tubular columns under ... - eSAT Journals
This document summarizes an experimental study that tested circular concrete-filled steel tube columns with varying parameters. 45 specimens were tested with different fiber percentages (0-2%), tube diameter-to-wall-thickness ratios (D/t from 15-25), and length-to-diameter (L/d) ratios (from 2.97-7.04). The results found that columns filled with fiber-reinforced concrete exhibited higher stiffness, equal ductility, and enhanced energy absorption compared to those filled with plain concrete. The load carrying capacity increased with fiber content up to 1.5% but not at 2.0%. The analytical predictions of failure load closely matched the experimental values.
Evaluation of punching shear in flat slabs - eSAT Journals
Abstract
Flat-slab construction has been widely used in construction today because of many advantages that it offers. The basic philosophy in
the design of flat slab is to consider only gravity forces; this method ignores the effect of punching shear due to unbalanced moments
at the slab column junction, which is critical. An attempt has been made to generate generalized design sheets which account for both
punching shear due to gravity loads and unbalanced moments for cases (a) interior column; (b) edge column (bending perpendicular
to shorter edge); (c) edge column (bending parallel to shorter edge); (d) corner column. These design sheets are prepared as per
codal provisions of IS 456-2000. These design sheets will be helpful in calculating the shear reinforcement to be provided at the
critical section which is ignored in many design offices. Apart from its usefulness in evaluating punching shear and the necessary
shear reinforcement, the design sheets developed will enable the designer to fix the depth of flat slab during the initial phase of the
design.
Keywords: Flat slabs, punching shear, unbalanced moment.
Evaluation of performance of intake tower dam for recent earthquake in india - eSAT Journals
Abstract
Intake towers are typically tall, hollow, reinforced concrete structures that form the entrance to reservoir outlet works. A parametric study on the dynamic behavior of circular cylindrical towers can be carried out to study the effect of depth of submergence, wall thickness and slenderness ratio, as well as the effect on the tower of dynamic analysis using time history functions for different soil conditions and of the Goyal and Chopra approach, which accounts for the interaction effects of the added hydrodynamic mass of the surrounding and inside water in the intake tower of a dam.
Keywords: Hydrodynamic mass, Depth of submergence, Reservoir, Time history analysis
Evaluation of operational efficiency of urban road network using travel time ... - eSAT Journals
This document evaluates the operational efficiency of an urban road network in Tiruchirappalli, India using travel time reliability measures. Traffic volume and travel times were collected using video data from 8-10 AM on various roads. Average travel times, 95th percentile travel times, and buffer time indexes were calculated to assess reliability. Non-motorized vehicles were found to most impact reliability on one road. A relationship between buffer time index and traffic volume was developed. Finally, a travel time model was created and validated based on length, speed, and volume.
Estimation of surface runoff in nallur amanikere watershed using scs cn method - eSAT Journals
Abstract
The development of a watershed aims at productive utilization of all the available natural resources in the entire area extending from
ridge line to stream outlet. The per capita availability of land for cultivation has been decreasing over the years. Therefore, water and
the related land resources must be developed, utilized and managed in an integrated and comprehensive manner. Remote sensing and
GIS techniques are being increasingly used for planning, management and development of natural resources. The study area, Nallur
Amanikere watershed, geographically lies between 11º 38' and 11º 52' N latitude and 76º 30' and 76º 50' E longitude, with an area of 415.68 sq. km. The thematic layers, such as land use/land cover and soil maps, were derived from remotely sensed data and overlaid through ArcGIS software to assign curve numbers polygon-wise. The daily rainfall data of six rain gauge stations in and around
the watershed (2001-2011) was used to estimate the daily runoff from the watershed using Soil Conservation Service - Curve Number
(SCS-CN) method. The runoff estimated from the SCS-CN model was then used to know the variation of runoff potential with different
land use/land cover and with different soil conditions.
Keywords: Watershed, Nallur watershed, Surface runoff, Rainfall-Runoff, SCS-CN, Remote Sensing, GIS.
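The SCS-CN relation applied in the study has a standard closed form (depths in mm, with initial abstraction Ia = 0.2S); a direct sketch, where the CN value in the test is illustrative rather than taken from the paper:

```python
def scs_cn_runoff(p_mm, cn):
    """Daily runoff depth Q (mm) from rainfall P (mm) by the SCS-CN method:
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0."""
    s = 25400.0 / cn - 254.0     # potential maximum retention S (mm)
    ia = 0.2 * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0               # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

Applied day by day with polygon-wise curve numbers from the land use and soil layers, this is the computation that produced the watershed's daily runoff series.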
Estimation of morphometric parameters and runoff using rs & gis techniques - eSAT Journals
This document summarizes a study that used remote sensing and GIS techniques to estimate morphometric parameters and runoff for the Yagachi catchment area in India over a 10-year period. Morphometric analysis was conducted to understand the hydrological response at the micro-watershed level. Daily runoff was estimated using the SCS curve number model. The results showed a positive correlation between rainfall and runoff. Land use/land cover changes between 2001-2010 were found to impact estimated runoff amounts. Remote sensing approaches provided an effective means to model runoff for this large, ungauged area.
Effect of variation of plastic hinge length on the results of non linear anal... - eSAT Journals
Abstract: The nonlinear static procedure, also well known as pushover analysis, is a method wherein monotonically increasing loads are applied to the structure until it is unable to resist any further load. It is a popular tool for seismic performance evaluation of existing and new structures. In the literature much research has been carried out on conventional pushover analysis, and after its deficiencies became known, efforts have been made to improve it. But actual test results to verify analytically obtained pushover results are rarely available. It has been found that some amount of variation is always expected in the seismic demand prediction of pushover analysis. An initial study is carried out by considering user-defined hinge properties and the default hinge length. An attempt is made to assess the variation of pushover analysis results by considering user-defined hinge properties and the various hinge length formulations available in the literature, and the results are compared with experimentally obtained results based on a test carried out on a G+2 storied RCC framed structure. For the present study two geometric models, viz. a bare frame and a rigid frame model, are considered, and it is found that the results of pushover analysis are very sensitive to the geometric model and hinge length adopted. Keywords: Pushover analysis, Base shear, Displacement, hinge length, moment curvature analysis
Effect of use of recycled materials on indirect tensile strength of asphalt c... - eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for road construction makes it a serious problem to procure materials; hence recycling or reuse of material is beneficial. With the present era's emphasis on sustainable construction, recycling of asphalt pavements is one of the most effective and proven rehabilitation processes. For the laboratory investigations, reclaimed asphalt pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) were used. Foundry waste was used as a replacement for conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP. These test results were compared with conventional mixes and with asphalt concrete mixes containing fully binder-extracted RAP aggregates. Mix design was carried out by the Marshall Method. The Marshall tests indicated the highest stability values for asphalt concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with increased RAP content in AC mixes. The Indirect Tensile Strength (ITS) of AC mixes with RAP was also found to be higher than that of conventional AC mixes at 30ºC.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...
Performance Analysis of AL-FEC Raptor Code over 3GPP eMBMS Network
IJRET: International Journal of Research in Engineering and Technology    ISSN: 2319-1163
Volume: 02 Issue: 04 | Apr-2013, Available @ http://www.ijret.org

PERFORMANCE ANALYSIS OF AL-FEC RAPTOR CODE OVER 3GPP EMBMS NETWORK

Avani U. Pandya¹, Sameer D. Trapasiya², Santhi S. Chinnam³
¹ M.E. Research Scholar, ² Assistant Professor, Department of Electronics & Communication Engineering, G. H. Patel College of Engineering & Technology, Gujarat, India
³ Telecom Engineer, 4G RAN & Device Validation, Rancore Technologies Pvt Ltd, Mumbai, India
avani.pandya03@gmail.com, sameertrapasiya@gmail.com, swaroop419@gmail.com
Abstract
Long Term Evolution (LTE) is the current standard for mobile networks defined by the Third Generation Partnership Project (3GPP). LTE includes enhanced Multimedia Broadcast and Multicast Services (MBMS), also called evolved MBMS (eMBMS), in which the same content is transmitted to multiple users in a specific area. eMBMS is a function defined in the 3GPP Release 8 specification that supports content download and streaming delivery to groups of users in LTE mobile networks. For demanding multimedia services over LTE, an important goal is to improve robustness against packet losses. In this direction, in order to support effective point-to-multipoint download and streaming delivery, 3GPP has included an Application Layer Forward Error Correction (AL-FEC) scheme in the eMBMS standard. The standard AL-FEC scheme is based on systematic fountain Raptor codes. Raptor coding is very useful in the presence of packet loss during transmission, since the receiver can recover all of the source data from any sufficiently large subset of the received symbols. In our work, in response to the emergence of an enhanced AL-FEC scheme, a Raptor code has been implemented, its performance evaluated, and simulation results obtained.
Index Terms: long term evolution; multimedia broadcast multicast services; forward error correction; raptor codes
-----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
Nowadays there is significant demand for multimedia services over wireless networks, due to the explosive growth of multimedia internet applications and the dramatic increase in mobile wireless access. It is therefore foreseen that wireless systems will have to support applications with increased complexity and tighter performance requirements, such as real-time video streaming. Furthermore, it is expected that popular content is streamed not just to a single user, but to multiple users attempting to access the same content at the same time. Standardization bodies address this through the introduction of a point-to-multipoint service, enhanced Multimedia Broadcast Multicast Service (eMBMS): a resource-efficient transmission scheme targeting simultaneous distribution of multimedia content to many user devices within a serving area, over a single set of core network and radio resources. Multimedia Broadcast and Multicast Service (MBMS) has thus been standardized as a key feature in Third Generation Partnership Project (3GPP) systems to broadcast and multicast multimedia content to multiple mobile subscribers via the MBMS radio bearer service. MBMS is a point-to-multipoint (PTM) standard whose further evolution and enrichment attract widespread interest today.
Long Term Evolution (LTE) has been designed to support only packet-switched services. LTE provides two transmission modes: single-cell MBMS, in which MBMS services are transmitted in a single cell, and multi-cell evolved MBMS, which provides synchronized MBMS transmission from multiple cells and is also known as Multicast/Broadcast Single Frequency Network (MBSFN) transmission. Transmitting the same data to multiple recipients allows network resources to be shared. MBMS extends the existing architecture with the introduction of the 3GPP MBMS bearer service and MBMS user services; MBMS user services are built on top of the MBMS bearer service. For the delivery of MBMS-based services, 3GPP defines three functional layers. The first layer, called Bearers, provides the mechanism for data transmission over IP. Bearers are based on point-to-multipoint data transmission (MBMS bearers), which can be used in conjunction with point-to-point transmission. The second layer, called Delivery Method, offers two modes of content delivery: the download method for discrete objects and the streaming method for continuous media. The delivery method also provides reliability with FEC. The third layer (User Service/Application) exposes applications to the end user and allows the user to activate or deactivate the service.
Generally, an MBMS session includes the following three phases.
1. User Service Discovery phase: MBMS services are advertised to the end user using 2-way point-to-point TCP/IP-based communication or 1-way point-to-multipoint UDP/IP-based transmission.
2. Delivery phase: Multimedia content is delivered (in either streaming or download mode) using 1-way point-to-multipoint UDP/IP transmission.
3. Post-delivery phase: A user can report on the quality of the received content or request a file-repair service (for the download delivery method) using 2-way point-to-point TCP/IP communication.
During delivery, a UDP packet can be discarded by the physical layer if bit errors cannot be corrected, and can be lost due to, for example, network congestion or hardware failure. As there is no feedback channel in the delivery phase of eMBMS, ARQ-based protocols cannot be used, since ARQ receivers rely on a feedback channel to the sender for requesting retransmission of lost packets.
Another approach is to use Forward Error Correction (FEC) codes. In order to analyze the performance of FEC algorithms, we need to define a channel model. The Binary Erasure Channel (BEC) was first introduced by Peter Elias of MIT in 1955 [2]. It was long considered too theoretical to draw attention, until recent years when the Internet became popular. The binary erasure channel is defined as one in which "a symbol either arrives at the destination without any error, or is erased and never received". Therefore, with a certain probability p the receiver has no idea about the transmitted bit, and with probability 1 − p it knows the transmitted bit exactly. According to Shannon, the BEC capacity is 1 − p, meaning that for an alphabet of size 2^k, where k is the number of bits per symbol, no more than (1 − p)·k bits per symbol can be communicated reliably over the binary erasure channel.
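The erasure behavior just described is easy to simulate. The following Python sketch is illustrative only; the bit values, seed, and erasure probability are arbitrary assumptions, not part of any standard:

```python
import random

def bec_transmit(symbols, p, rng):
    """Pass symbols through a binary erasure channel: each symbol is
    erased (None) with probability p, and delivered intact otherwise."""
    return [None if rng.random() < p else s for s in symbols]

source = [1, 0, 1, 1, 0, 0, 1, 0]
received = bec_transmit(source, p=0.5, rng=random.Random(0))
# Surviving symbols are always error-free; on average a fraction p is
# erased, so roughly (1 - p) useful bits per channel use get through,
# matching the BEC capacity of 1 - p.
erased = received.count(None)
```

Note that a symbol is never corrupted, only missing; this is what makes erasure-correcting codes, rather than general error-correcting codes, the right tool here.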
Automatic Repeat Request (ARQ) schemes have long been used as the classical approach to the reliable communication problem [3]. Moreover, feedback from the receiver to the sender does not increase the channel capacity, so reliable communication must be possible at this rate even without it. ARQ works well for point-to-point transmission and has also been an effective tool for reliable point-to-multipoint transmission. However, when the number of receivers increases, ARQ reveals its limits. A major limitation is the problem of feedback implosion, which occurs when too many receivers transmit feedback to the sender. Another problem is that, for a given packet loss rate and a set of receivers experiencing losses, the probability that any given data packet must be retransmitted rapidly approaches unity as the number of receivers becomes large; in other words, a high average number of transmissions is required per packet. When an excessive number of feedback messages is generated in response to erasures, bandwidth is wasted, and network overload and intolerable delays occur.
Fixed-rate Forward Error Correction (FEC) codes exist, such as Reed-Solomon codes, which can recover the K source symbols from any K of the N coded symbols transmitted in total. However, the rate R = K/N must be fixed in accordance with the erasure probability p before transmission. If p changes, or is smaller or larger than expected, this causes problems at the decoder side or results in a lower achievable transmission rate. Another disadvantage of fixed-rate powerful FEC codes is the high cost of encoding and decoding; for example, Reed-Solomon codes have an encoding cost of K(N − K)·log₂N [4].
Next are Low-Density Parity-Check (LDPC) codes, which were invented by Gallager in the 1960s. For each check node, the sum of the values of its neighboring variable nodes must be zero. LDPC codes have sparse parity-check matrices, which reduces decoding cost when decoding is performed with Belief Propagation (BP). Some of these codes come very close to the Shannon limit. However, they do not have a fast encoding algorithm, even though some approaches, often running in linear time, have been proposed. Being block codes, their use requires estimating the erasure probability p of the BEC and configuring the encoder with the corresponding amount of redundancy.
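The check-node constraint above (every check node's neighboring variable nodes sum to zero mod 2) can be made concrete with a toy example. The matrix H and the codeword below are made-up illustrations, not any standardized code:

```python
# Toy parity-check matrix H: 3 check nodes (rows) x 6 variable nodes
# (columns). Each row is sparse; a word x is a valid codeword iff, for
# every check node, its neighboring variable nodes sum to zero mod 2.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
]

def satisfies_checks(x, H):
    """True iff H . x = 0 over GF(2), i.e. every check node is satisfied."""
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)

codeword = [0, 1, 1, 1, 0, 0]    # satisfies all three checks
corrupted = [1, 1, 1, 1, 0, 0]   # flipping one bit violates a check
```

A real LDPC code uses much larger, still-sparse matrices; the sparsity is what keeps message passing on the Tanner graph cheap.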
Tornado codes were developed for this purpose as an extension of LDPC codes with reduced encoding complexity [5]. They can manage rate adjustment, but are still not effective, especially when the channel is subject to frequent changes.
Fountain codes are a newer class of codes designed for, and ideally suited to, reliable transmission of data over an erasure channel with unknown erasure probability [6]. A fountain code has properties similar to a water fountain, which can be thought of as an infinite supply of water drops: anyone who wants to collect water holds a bucket under the fountain, and when enough water is collected, the bucket is removed. Similarly, with a digital source, a client collects encoded packets from one or more servers, and once enough packets are obtained the client can reconstruct the original file; which particular packets were obtained does not matter. Fountain codes are rateless in the sense that, for a given message, the encoder can produce a potentially infinite number of output symbols. Output symbols can be bits or, more generally, bit sequences. However, random linear fountain codes have encoding complexity of O(N²) and decoding complexity of O(N³), which makes them impractical for today's applications.
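A minimal sketch of a random linear fountain code over GF(2) makes the complexity point concrete: encoding XORs a random subset of the source symbols into each output symbol, and decoding requires Gaussian elimination, the cubic cost noted above. The bit values, seed, and overhead below are illustrative assumptions:

```python
import random

def fountain_encode(source, num_output, rng):
    """Random linear fountain over GF(2): each output symbol is the XOR
    of a uniformly random subset of the k source symbols."""
    k = len(source)
    out = []
    for _ in range(num_output):
        mask = [rng.randint(0, 1) for _ in range(k)]
        value = 0
        for m, s in zip(mask, source):
            if m:
                value ^= s
        out.append((mask, value))
    return out

def fountain_decode(received, k):
    """Gauss-Jordan elimination over GF(2), the O(k^3) cost noted above.
    Returns the k source symbols, or None if the received equations do
    not yet have full rank (a few more symbols are needed)."""
    rows = [(list(m), v) for m, v in received]
    for col in range(k):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

source = [1, 0, 1, 1, 0, 0, 1, 1]           # k = 8 source bits
encoded = fountain_encode(source, 12, random.Random(7))  # 50% overhead
recovered = fountain_decode(encoded, len(source))        # None if rank-deficient
```

With slightly more than k received symbols, the equations are full rank with high probability, which is exactly the fountain property; the bottleneck is the matrix inversion, which is what LT and Raptor codes go on to avoid.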
Luby Transform (LT) codes were proposed by Michael Luby [7] to reduce the encoding and decoding complexity of random linear fountain codes while maintaining a small overhead. With a good choice of degree distribution, i.e. the distribution of edge degrees in the Tanner graph, LT codes can come arbitrarily close to channel capacity with a given decoder reliability and logarithmically growing encoding and decoding costs. To reduce the complexity even further, we can lower the reliability of the decoder: a reduced degree distribution yields linear-time encoding and decoding. However, with this lower degree distribution the decoder cannot decode all the input symbols under the same overhead constraint. An erasure-correcting pre-code can then be used to correct the erasures left by the weakened decoder. If the pre-code is a linear-time block code, such as an LDPC code, the resulting Raptor codes provide remarkable encoding and decoding speeds while delivering near-optimal performance on the BEC [8].
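The peeling (belief-propagation) decoder that gives LT codes their low complexity can be sketched as follows. The degree distribution here is a toy stand-in for the robust soliton distribution of [7], and all parameters are illustrative assumptions:

```python
import random

def lt_encode(source, num_output, degree_dist, rng):
    """LT encoding: draw a degree d from the degree distribution, pick d
    distinct source symbols uniformly, and XOR them into one output symbol."""
    k = len(source)
    out = []
    for _ in range(num_output):
        d = rng.choices(range(1, len(degree_dist) + 1), weights=degree_dist)[0]
        nbrs = rng.sample(range(k), d)
        value = 0
        for j in nbrs:
            value ^= source[j]
        out.append((set(nbrs), value))
    return out

def peel_decode(encoded, k):
    """Peeling decoder: repeatedly take an encoded symbol whose remaining
    degree is 1, recover that source symbol, and subtract it from every
    other symbol referencing it. Near-linear time, but it can stall before
    recovering everything; that residual is what the Raptor pre-code fixes."""
    decoded = [None] * k
    pending = [(set(nbrs), v) for nbrs, v in encoded]
    progress = True
    while progress:
        progress = False
        for nbrs, v in pending:
            if len(nbrs) == 1:
                j = next(iter(nbrs))
                if decoded[j] is None:
                    decoded[j] = v
                progress = True
        if progress:
            new_pending = []
            for nbrs, v in pending:
                remaining = set(nbrs)
                for j in list(remaining):
                    if decoded[j] is not None:
                        v ^= decoded[j]
                        remaining.discard(j)
                if remaining:
                    new_pending.append((remaining, v))
            pending = new_pending
    return decoded  # entries still None were not recovered

encoded = lt_encode([1, 0, 1, 1, 0, 1], 12, [0.2, 0.3, 0.3, 0.2],
                    random.Random(3))
decoded = peel_decode(encoded, 6)   # None entries where peeling stalled
```

The cascade of degree-1 "ripples" replaces Gaussian elimination entirely, which is where the linear-time claim comes from.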
Raptor codes are thus an extension of LT codes combined with a pre-coding stage; the design of the pre-code and of the degree distribution is the heart of Raptor codes. In eMBMS, the media data is protected using application-layer FEC with Raptor codes, and such an application-layer FEC code can be used to systematically increase the reliability of the transmission.
The rest of this paper is organized as follows: Section II provides an overview of the 3GPP AL-FEC eMBMS delivery framework, and Section III presents a detailed description of the examined AL-FEC scheme. In Section IV we present the simulation environment and the conducted experimental results. Finally, in Section V we draw our conclusions and describe some possible future steps.
2. EMBMS PROTOCOL STACK
2.1 3GPP AL-FEC eMBMS DELIVERY
The 3GPP multicast service standard, MBMS [1], is a point-to-multipoint service in which data is transmitted from a single source to a group of several mobile terminals in a specific service area. 3GPP defines two delivery methods, namely download and streaming. The eMBMS user-plane stack for these delivery methods is illustrated in Fig. 1.
1) MBMS Streaming Delivery Protocol Stack: The purpose of the MBMS streaming delivery method is to deliver continuous multimedia data (i.e. speech, audio, video) over an MBMS bearer. MBMS makes use of advanced multimedia codecs such as H.264 for video applications and Enhanced Advanced Audio Coding (AAC) for audio applications. The Real-time Transport Protocol (RTP) is the transport protocol for MBMS streaming delivery. RTP provides means for sending real-time or streaming data over the User Datagram Protocol (UDP); the resulting UDP flows are mapped to MBMS IP multicast bearers. IP packets are then processed in the Packet Data Convergence Protocol (PDCP) layer, where, for example, header compression may be applied. In the Radio Link Control (RLC) layer, the resulting PDCP Protocol Data Units (PDUs), generally of arbitrary length, are mapped to fixed-length RLC PDUs. The RLC layer operates in unacknowledged mode, as feedback links on the radio access network are not available for point-to-multipoint bearers. The RLC layer is responsible for mapping IP packets to RLC SDUs; functions provided at the RLC layer include segmentation and reassembly, concatenation, padding, sequence numbering, reordering, out-of-sequence detection, and duplication detection. The Medium Access Control (MAC) layer maps and multiplexes the RLC PDUs onto the transport channel and selects the transport format depending on the instantaneous source rate. The MAC and physical layers adapt the RLC PDUs to the expected transmission conditions by applying, among others, channel coding, power and resource assignment, and modulation [1].
2) MBMS Download (File) Delivery Protocol Stack: The MBMS download delivery method aims to distribute discrete objects (e.g. files) by means of an MBMS download session. The download method uses the File deLivery over Unidirectional Transport (FLUTE) protocol when delivering content over MBMS bearers. FLUTE is built on top of the Asynchronous Layered Coding (ALC) protocol instantiation. ALC combines the Layered Coding Transport (LCT) building block and the FEC building block to provide reliable asynchronous delivery of content to an unlimited number of concurrent receivers from a single sender. A detailed description of the FLUTE building-block structure can be found in [1]. FLUTE is carried over UDP/IP, is independent of the IP version, and is forwarded to the Packet Data Convergence Protocol (PDCP) layer. The packets are then sent to the Radio Link Control (RLC) layer, which operates in unacknowledged mode and is responsible for mapping IP packets to RLC SDUs. The Medium Access Control (MAC) layer adds a 16-bit header to form a PDU, which is then sent in a transport block on the physical layer.
Figure 1: eMBMS protocol stack. Streaming delivery (video, audio, etc.) uses RTP payload formats over SRTP/RTP/RTCP with the FEC streaming framework (AL-FEC); download delivery (files, images, etc.) uses FLUTE (AL-FEC). Both paths run over UDP, IP multicast, PDCP, RLC (segmentation), and MAC, down to the physical layer with PHY-FEC and resource/power allocation.
3. AL-FEC SCHEME
In this section we provide a detailed description of the 3GPP standardized Raptor code.
3.1 RAPTOR CODES
Raptor codes were introduced in [9]. The use of Raptor codes in the application layer of eMBMS was introduced to 3GPP aiming to improve service robustness against packet losses.
Raptor codes are fountain codes, meaning that as many
encoding symbols as desired can be generated by the encoder
on-the-fly from the source symbols of a source block of data.
Raptor codes are one of the first known classes of fountain
codes with linear encoding and decoding time [9]. The
systematic Raptor Encoder is used to generate repair symbols
from a source block that consists of K source symbols [10].In
preparation of the encoding; a certain amount of data is
collected within a FEC source block. The data of a source
block are further divided into k source symbols of a fixed
symbol size. The decoder is able to recover the whole source
block from any set of encoding symbols only slightly more in
number than the source symbols. The Raptor code specified
for MBMS is a systematic code producing n encoding symbols
E from k < n source symbols C, so as the original source
symbols are within the stream of the transmitted symbols. This
code can be viewed as the concatenation of several codes. The
most inner code is a non-systematic Luby-Transform (LT)
code [7] with l input symbols F, which provides the fountain
property of the Raptor codes. This non-systematic Raptor code
is not constructed by encoding the source symbols with the LT
code, but by encoding the intermediate symbols generated by
some outer high-rate block code. This means that the outer
high rate block code generates the F intermediate symbols
using k input symbols D. Finally, a systematic realization of
the code is obtained by applying some pre-processing to the k
source symbols C such that the input symbols D to the non-
systematic Raptor code are obtained [11]. Considering the
performance of Raptor codes the most typical comparison is
that to an ideal fountain code. An ideal fountain code can
produce from any number k of source symbols any number m
of repair symbols with the property that any combination of k
of the k+m encoding symbols is sufficient for the recovery of
the k source symbols. This is the most important point of
differentiation between an ideal fountain code and the
standardized Raptor code. While an ideal code has zero
reception overhead, i.e., the number of received symbols
needed to decode the source symbols is exactly the number of
source symbols, the Raptor code has a performance close to
that property. The performance of an AL-FEC code can be
described by the decoding failure probability of the code. The
study presented in [12] describes the decoding failure
probability of Raptor code as a function of the source block
size and the received symbols. In fact, the inefficiency of the
Raptor code can accurately be modeled by (1) [12]
pfR(n, k) = 1,                      if n < k
pfR(n, k) = 0.85 · 0.567^(n−k),     if n ≥ k        (1)
In (1), pfR(n,k) denotes the decoding failure probability of the
Raptor code if the source block size is k symbols and n
encoding symbols have been received. Failure probability
decreases exponentially when the number of received
encoding symbols increases. Moreover, a crucial point for the
robustness of an AL-FEC protected delivery session is the
transmission overhead. The transmission overhead is defined
as the amount of redundant information divided by the amount
of source data, equal to the fraction (N − K)/K expressed as a
percentage. In this fraction, N denotes the number of
transmitted packets and K denotes the number of the source
packets.
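The failure model of (1) and the transmission-overhead definition above can be sketched in a few lines of Python; the function names here are illustrative, not part of the standard.

```python
def raptor_failure_prob(n, k):
    """Decoding failure probability model of the Raptor code from [12],
    i.e. equation (1): certain failure when fewer than k symbols arrive,
    otherwise an exponential decay in the number of extra symbols."""
    if n < k:
        return 1.0
    return 0.85 * 0.567 ** (n - k)

def transmission_overhead_pct(n_transmitted, k):
    """Transmission overhead (N - K) / K expressed as a percentage."""
    return 100.0 * (n_transmitted - k) / k

print(raptor_failure_prob(254, 254))        # 0.85: no extra symbols received
print(raptor_failure_prob(257, 254))        # ~0.155: three extra symbols
print(transmission_overhead_pct(508, 254))  # 100.0: N = 2K
```

Note how quickly the model decays: each additional received symbol multiplies the failure probability by 0.567.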
1) 3GPP Raptor Encoding Process: Raptor codes [10] are
serially concatenated codes with a pre-code as the outer code
and the LT code [7] as the inner code. The pre-code itself is also a
serially concatenated code, which uses an LDPC code and a
code with a dense parity-check matrix (based on Gray
sequences) as the outer code and the inner code, respectively. For the
Raptor encoder, the source object is divided into Z >= 1 source blocks.
Each source block has K source symbols of size T bytes each.
Each source block is encoded independently of the other
blocks. The source block construction is specified in 3GPP
26.346 [1]. The systematic Raptor codes generate encoding
symbols which contain K source symbols plus repair symbols.
In this systematic Raptor code, K original input symbols are
first encoded to L = K + S + H intermediate symbols by the pre-code.
Here, S and H are the amounts of redundancy added by the
outer and the inner codes of the pre-code, respectively. Any
number of encoding symbols can then be generated as
needed with those intermediate symbols. The intermediate
symbols are related to the source symbols by a set of source
symbol triples. The encoding process is to produce repair
symbols by bit-wise XORing specific source symbols. The
whole process is divided in following steps.
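For reference, the pre-code dimensions S and H can be derived from K following the parameter derivation of RFC 5053 [10]; the sketch below is a plain transcription for illustration, not the normative procedure.

```python
from math import ceil, comb

def _is_prime(p):
    return p >= 2 and all(p % d for d in range(2, int(p ** 0.5) + 1))

def precode_params(K):
    """Return (S, H, L) for a source block of K symbols, following the
    derivation in RFC 5053 [10]: S LDPC symbols, H Half symbols, and
    L = K + S + H intermediate symbols."""
    X = 1                          # smallest X with X*(X-1) >= 2*K
    while X * (X - 1) < 2 * K:
        X += 1
    S = ceil(0.01 * K) + X         # smallest prime >= ceil(K/100) + X
    while not _is_prime(S):
        S += 1
    H = 1                          # smallest H with C(H, ceil(H/2)) >= K + S
    while comb(H, ceil(H / 2)) < K + S:
        H += 1
    return S, H, K + S + H

print(precode_params(254))  # (29, 11, 294)
```

For the K = 254 case used in the evaluation below, this yields S = 29 LDPC symbols, H = 11 Half symbols, and L = 294 intermediate symbols.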
Step 1: The first step in encoding is to generate an L x L
encoding matrix A and use it to calculate the L intermediate
symbols as [10]
Figure 2: Block diagram of the Raptor encoder and decoder (systematic GF(2) Raptor encoder, channel, and decoder; the pre-code matrix is composed of the GLDPC, GHalf, I_S, I_H, 0_SxH, and GLT submatrices)
IJRET: International Journal of Research in Engineering and Technology ISSN: 2319-1163
C = A^(-1) · D        (2)

where C is the column vector of the L intermediate
symbols and D is the column vector of S + H zero symbols
followed by the K source symbols.
The precode matrix A consists of several submatrices as
shown in figure 2. In this matrix A, the top S+H rows
represent the constraints of the pre-code on C, and the bottom
K rows, each corresponding to a source symbol of D,
represent the generator matrix of the LT code. (GLDPC)SxK
and (GHalf)Hx(S+K) correspond to an LDPC check matrix
and a dense (Half) parity-check matrix, respectively. I_S is the
S x S identity matrix, I_H is the H x H identity matrix, 0_SxH
is the S x H zero matrix, and (GLT)KxL is the LT encoding
matrix. (GLDPC)SxK and (GHalf)Hx(S+K) are known
at both the encoder and the decoder side.
Step 2:-The second encoding step is to generate the repair
symbols from L intermediate symbols using LT encoding
process as LTEnc[K, (C[0], C[1], …, C[L-1]), (d, a, b)] [10], where
(d, a, b) is the triple associated with each encoding symbol.
At the end of step 2 the final transmitted streams are generated.
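A repair symbol is simply the bitwise XOR of the intermediate symbols selected by its triple. The toy sketch below illustrates this with a plain random selection standing in for the (d, a, b) triple generator of LTEnc [10]; the symbol size and function names are illustrative only.

```python
import os
import random

def xor_symbols(symbols):
    """Bitwise XOR of equal-length byte strings."""
    out = bytearray(len(symbols[0]))
    for s in symbols:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

def toy_repair_symbol(intermediate, rng):
    """XOR a randomly chosen subset of the intermediate symbols. In the
    3GPP code the degree and the indices come from the triple (d, a, b);
    a uniform random choice stands in for that here."""
    degree = rng.randint(1, len(intermediate))
    chosen = rng.sample(range(len(intermediate)), degree)
    return xor_symbols([intermediate[j] for j in chosen])

T = 16                                      # symbol size in bytes (illustrative)
intermediate = [os.urandom(T) for _ in range(10)]
repair = toy_repair_symbol(intermediate, random.Random(7))
print(len(repair))  # 16: repair symbols have the same size as source symbols
```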
2) 3GPP Raptor Decoding Process: The N received
symbols are input to the decoder where
N = K + R − Ls        (3)

where K is the number of source symbols, R the number of
repair symbols, and Ls the number of lost symbols.
An (M x L) matrix A can be formed, similar to the encoding
process, from the N received symbols [10], where

M = S + H + N        (4)

with S the number of LDPC symbols, H the number of Half
symbols, and N the number of received symbols. A is a bit matrix that
satisfies A · C = D under matrix multiplication in GF(2).
The intermediate symbols C can then be decoded if the bit matrix
A is square (L x L) and invertible. Since the number of
received encoding symbols is usually N > K, the following
steps are taken for decoding.
Step 1: The first step in decoding is to convert the (M x L)
matrix A into an (L x L) matrix using Gaussian elimination [10].
Improved Gaussian elimination, consisting of row/column
exchange and row Ex-OR, is used in the 3GPP Raptor
decoding algorithm. In the decoding process, the original
matrix A will be converted into an identity matrix. Besides,
vector C and D change concurrently. Let (N ≥ K) be the
number of received encoding symbols and M=S+H+N. The
vector D=(D[0],… ,D[M-1]) is the column vector of M
symbols with values known to the receiver, where D[0],…
,D[S+H-1] are zero-valued symbols that correspond to LDPC
and Half symbols, D[S+H], … ,D[M-1] are the received
encoding symbols for the source symbols. When the original
matrix A is converted into identity matrix successfully, we can
get the intermediate symbols from D.
Before Gaussian elimination, the index arrays are initialized
as c[0]=0, c[1]=1, …, c[L-1]=L-1 and d[0]=0, d[1]=1, …,
d[M-1]=M-1. During Gaussian elimination, the vectors C and D
change concurrently with the changes of matrix A. The process
abides by the following rules:
If row i of A is exclusive-ORed into row i', then
symbol D[d[i]] is exclusive-ORed into symbol
D[d[i']];
If row i of A is exchanged with row i', then the
value d[i] is exchanged with the value d[i'];
If column j of A is exchanged with column j',
then the value c[j] is exchanged with the value c[j'].
At the end of successful decoding, C[c[j]] = D[d[j]] for j = 0, …, L−1.
The process of converting A into identity matrix consists of
four phases:
Phase I: In the first phase of the Gaussian elimination, the
matrix A is partitioned into submatrices: I is an i × i identity
matrix, O is an all-zero matrix, U is the submatrix formed by the
last u columns, and V is the remaining submatrix. An efficient
algorithm, suggested in the 3GPP
MBMS standard, is helpful for choosing an appropriate row to
do Gaussian elimination. The algorithm defines a graph
structure of matrix V. Let the columns of V be the set of nodes
and let the rows that have exactly two 1's in V be the set of
edges that connect the associated nodes. We define component
as follows: component in the graph is a set of nodes and edges
such that there is a path between all pairs of nodes, the size of
a component is the number of nodes in the component.
When i + u = L, the first phase ends successfully and the matrix
V disappears. While non-zero rows remain in V, a row can be
chosen according to the efficient 3GPP standard algorithm as
follows:
Let r be the minimum integer such that at least one
row of V has exactly r 1's. When r ≠ 2, choose a row
with exactly r 1's in V with minimum original degree
among all such rows;
When r=2, choose any row contained in the
maximum size component [13] of the graph defined
by V.
Then the first row of V is exchanged with the chosen row, and a
column containing a 1 in the chosen row is exchanged
with the first column of V. In addition, the columns
containing the remaining (r − 1) 1's are exchanged with the last
columns of V. After that, the chosen row is exclusive-ORed
into the rows of A below it that have a 1 in the
first column of V. Through this process, i is
increased by 1 and u is increased by (r − 1). The
first phase is repeated until it ends successfully.
Phase II: In the second phase, the submatrix U is partitioned
into the first i rows, U_upper, and the remaining (M − i) rows,
U_lower. If the rank of U_lower is u, Gaussian elimination is
performed on it to convert it into a matrix whose first u
rows form the identity matrix. The last (M − L) rows are then
discarded.
Phase III & IV: The task of the third and the fourth phases is
to convert U_upper into an all-zero matrix.
Once the Gaussian elimination is complete, the bit matrix A
becomes (L x L) and invertible. The intermediate symbols C
can then be obtained as

C = A^(-1) · D        (5)
where D is the column vector of S + H zero symbols
followed by the N received symbols. The intermediate
symbols C are then passed to LT decoding to regenerate the K
source symbols [10].
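The row/column bookkeeping of the decoding process reduces to solving A · C = D over GF(2). The sketch below uses plain Gaussian elimination with row exchanges and row XORs mirrored on D; it omits the four-phase, component-based pivot selection of the 3GPP algorithm, so it is an illustration rather than the standardized decoder.

```python
def gf2_solve(A, D):
    """Solve A x = D over GF(2), where A is an M x L bit matrix
    (list of 0/1 rows, M >= L) and D holds M symbols represented as
    ints combined by XOR. Row operations on A are mirrored on D,
    as in the rules above. Returns the L solution symbols, or None
    if A has rank < L (decoding failure)."""
    A = [row[:] for row in A]
    D = list(D)
    M, L = len(A), len(A[0])
    for j in range(L):
        pivot = next((i for i in range(j, M) if A[i][j]), None)
        if pivot is None:
            return None                     # rank deficient
        A[j], A[pivot] = A[pivot], A[j]     # row exchange on A ...
        D[j], D[pivot] = D[pivot], D[j]     # ... mirrored on D
        for i in range(M):
            if i != j and A[i][j]:
                A[i] = [a ^ b for a, b in zip(A[i], A[j])]  # row XOR on A ...
                D[i] ^= D[j]                                # ... mirrored on D
    return D[:L]

# Toy check: 4 received equations over 3 unknown symbols C = [1, 0, 1].
A = [[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
D = [1, 1, 0, 0]          # D[i] = XOR of the C[j] selected by row i
print(gf2_solve(A, D))    # [1, 0, 1]
```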
4. PERFORMANCE EVALUATION OF RAPTOR
CODES:
It is clear that the key points featuring the performance of an
AL-FEC scheme are the decoding failure performance with
respect to the number of additional symbols received and
further, as a direct consequence of this aspect, the amount of
the transmission redundancy required to confront different
packet losses patterns.
In this direction, we first investigate the decoding
performance in terms of the reception overhead such a FEC
code requires to successfully recover the protected data.
Figure 3 presents the probability that the FEC decoding process
fails as a function of the number of additional symbols received,
i.e., the reception overhead. For each reception overhead value, the
results are acquired from 10,000 experimental trials with
symbols erased at random. We compare the performance of the
standardized Raptor FEC code for K = 254 with the cases
N1 = 254 (K), N2 = 127 (K/2), and N3 = 64 (K/4).
Table 1: Comparison table for RFC 5053, K = 254, latency 3

Source        Repair        Total          PbE      Encoding   Supported
Symbols (K)   Symbols (N)   Symbols (K+N)           Cost       Packet Loss
254           254 (K)       508            0.1637   50%        49.4%
254           127 (K/2)     381            0.1420   33.33%     32.54%
254           64 (K/4)      318            0.1213   20.12%     19.18%
Here Encoding Cost = Repair Symbols / Total Symbols. For
supported packet loss we record all values for reception
overhead 3 (which corresponds to the latency), so the receiver
must collect 3 extra symbols, i.e., 254 + 3 = 257 symbols. For
the N = K case, 508 symbols in total are transmitted, of which
257 must be received; hence the channel can drop at most
508 − 257 = 251 symbols, and the allowable channel loss is
(maximum dropped packets) / (total transmitted symbols) =
251/508 = 49.4%. For the N = K/4 case, 318 symbols in total
are transmitted, of which 257 must be received; hence the
channel can drop at most 318 − 257 = 61 symbols, and the
allowable channel loss is 61/318 = 19.18%.
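This arithmetic generalizes directly. In the sketch below the truncation to two decimals mirrors how the table values appear, and the function names are our own.

```python
from math import floor

def _pct(x):
    """Fraction as a percentage, truncated to two decimals as in Table 1."""
    return floor(10000.0 * x) / 100.0

def loss_budget(K, n_repair, latency=3):
    """Encoding cost and maximum tolerable channel loss when the decoder
    needs K + latency symbols (reception overhead equal to the latency)."""
    total = K + n_repair          # transmitted symbols
    needed = K + latency          # e.g. 254 + 3 = 257 must be received
    return _pct(n_repair / total), _pct((total - needed) / total)

print(loss_budget(254, 254))  # (50.0, 49.4)    N = K
print(loss_budget(254, 127))  # (33.33, 32.54)  N = K/2
print(loss_budget(254, 64))   # (20.12, 19.18)  N = K/4
```

The three outputs reproduce the Encoding Cost and Supported Packet Loss columns of Table 1.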
For the N = K/4 case, 64 repair symbols were used, which shows
that the decoding failure probability can also be kept low with less
bandwidth utilization.
From Table 1 it is clear that the N = K case is suited to the worst
channel conditions, as the supported packet loss is 49.4%, while the
N = K/4 case is suited to good channel conditions, as its encoding
cost and PbE are lower than those of N = K.
Figure 3: FEC decoding failure probability versus reception overhead for RFC 5053, K = 254 (curves: N1 = K, N2 = K/2, N3 = K/4; x-axis: reception overhead 0-7, y-axis: probability of failure)
Comparing the behavior of the three plotted curves in Fig. 3 and
Fig. 4, we immediately observe that the Raptor failure probability
decreases exponentially as the number of additional FEC symbols
grows.
Figure 4: FEC decoding failure probability versus reception overhead for RFC 5053, K = 370 (curves: N1 = K, N2 = K/2, N3 = K/4; x-axis: reception overhead 0-7, y-axis: probability of failure)
Figure 5: FEC decoding failure probability versus reception overhead, theoretical limit vs. practical results for K = 254 and K = 370
Also, as per equation (1), we have plotted in Fig. 5 the theoretical
curve against the RFC 5053 (practical) results, which shows that
the practical results approach the theoretical limit and at some
points even achieve a lower decoding failure probability than the
theoretical model.
CONCLUSION AND FUTURE WORK
In this work we have implemented, and then provided an
extensive performance evaluation of, AL-FEC based on the
Raptor codes (IETF RFC 5053) provided by Digital Fountain
to provide reliability against packet losses in 3GPP LTE
eMBMS services. We have examined how the Raptor
overhead varies under different network conditions and what
the optimal overhead is that a multicast sender should
introduce into the transmission to achieve successful delivery of
the multimedia content in a PTM manner.
Several future directions follow from this work. One is the
design and evaluation of an adaptive algorithm that calculates
the optimum FEC encoding parameters. Such a mechanism could
be driven by feedback reports on network conditions, which are
stored and used to select the coding parameters. Finally, the
recently introduced RaptorQ AL-FEC scheme could further
advance the research field of reliable multicast in mobile
networks.
ACKNOWLEDGEMENTS
We would like to thank Yashesh Buch for providing such an
interesting topic. We are also grateful to Harish Padi for
valuable discussions and friendly support.
REFERENCES:
[1] 3GPP TS 26.346 V10.4.0, "Technical Specification
Group Services and System Aspects: MBMS, Protocols
and Codecs" (Release 10), 2012.
[2] P. Elias. Coding for two noisy channels. in Proc. 3rd
London Symposium on Information Theory, pages 61–76,
1955.
[3] Han Wang, “Hardware Design for LT Coding”, MSc
Thesis,University of Delft, Holland, 2006.
[4] Berlekamp, E.R.: “Algebraic coding theory”, McGraw-
Hill, New York, 1968.
[5] M. Luby, M. Mitzenmacher, A. Shokrollahi, and D.
Spielman, “Efficient erasure correcting codes”, IEEE
Trans. Inf. Theory, vol. 47, no. 2, pp. 569–584, Feb. 2001
[6] David J. C. MacKay, “Fountain Codes”,
Communications, IEEE proceedings vol. 152, no. 6, Dec.
2005, pp. 1062 – 1068.
[7] M. Luby, “LT codes”, Proc. 43rd Ann. IEEE Symp. On
Foundations of Computer Science, Nov. 2002, pp. 271–
282.
[8] A. Shokrollahi, “Raptor Codes”, in Proc. IEEE Int. Symp.
Information Theory, Chicago, IL, Jun. /Jul. 2004, p. 36.
[9] A. Shokrollahi, “Raptor codes,” Information Theory,
IEEE Transactions on, vol. 52, no. 6, pp. 2551–2567,
June 2006.
[10] M. Luby, A. Shokrollahi, M. Watson, and T.
Stockhammer, “Raptor forward error correction scheme
for object delivery,” September 2007, Internet
Engineering Task Force, RFC 5053. Available at
http://tools.ietf.org/html/rfc5053.
[11] Luby M, Watson M, Gasiba T, Stockhammer T, Xu W.”
Raptor codes for reliable download delivery in wireless
broadcast systems.” In Proc. of CCNC, 2006.
[12] T. Stockhammer, A. Shokrollahi, M. Watson, M. Luby,
and T. Gasiba,“Application Layer Forward Error
Correction for mobile multimedia broadcasting,”
Handbook of Mobile Broadcasting: DVB-H, DMB,
ISDBT and Media Flo, CRC Press, pp. 239–280, 2008.
[13] S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani, “
Algorithms ,” McGraw-Hill, July 18, 2006
BIOGRAPHIES:
Avani U Pandya received the B.E. (Electronics &
Communication Engineering) degree from North Gujarat
University (Government Engineering College, Modasa) in
2007. She is currently an M.E. (Communication) student (4th
semester) at G. H. Patel College of Engineering &
Technology, Gujarat, under Gujarat Technological University,
India. Her current research interests include 4G LTE eMBMS
networks and wireless communication.
Prof. Sameer D Trapasiya is presently working in the
Electronics & Communication Engineering Department, G. H.
Patel College of Engineering & Technology, Gujarat. He has
more than 6 years of teaching experience. His current areas of
interest are cognitive radio, MIMO communication, cross-layer
optimization, and wireless sensor networks.
Santhi S Chinnam is presently a Telecom Engineer in 4G RAN
& Device Validation at Rancore Technologies Pvt Ltd, Navi
Mumbai. He received his M.Tech in Communications from
IIT Roorkee.