This document describes a new cache consistency maintenance scheme called Scalable Distributed Cache Indexing for Cache Consistency (SDCI) for mobile environments. SDCI uses three key features: 1) standard bits at the server and the mobile node's cache to maintain consistency, 2) a local cache standard for invalidated entries to maximize broadcast efficiency, and 3) marking all valid cache entries as uncertain when the mobile node wakes up. Simulation results show that SDCI outperforms existing algorithms, supporting more mobile nodes with lower average access delays. By design, the algorithm aims to be scalable with minimal database overhead.
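The invalidation and wake-up behaviour described in the summary can be sketched roughly as follows (a toy illustration of our reading of the scheme; the class, method, and state names are ours, not the paper's):

```python
from enum import Enum

class State(Enum):
    VALID = "valid"
    UNCERTAIN = "uncertain"   # must be re-validated with the server before use
    INVALID = "invalid"

class MobileCache:
    """Toy sketch of SDCI-like client-side behaviour: each cached entry
    carries a consistency state, and on wake-up every VALID entry is
    downgraded to UNCERTAIN until the server confirms it."""

    def __init__(self):
        self.entries = {}          # key -> (value, State)

    def put(self, key, value):
        self.entries[key] = (value, State.VALID)

    def invalidate(self, key):
        # A server broadcast invalidated this item; keep the slot so later
        # broadcasts about it remain useful (the "local cache standard").
        if key in self.entries:
            value, _ = self.entries[key]
            self.entries[key] = (value, State.INVALID)

    def wake_up(self):
        # After a disconnection no cached value can be trusted blindly.
        for key, (value, state) in self.entries.items():
            if state is State.VALID:
                self.entries[key] = (value, State.UNCERTAIN)

cache = MobileCache()
cache.put("x", 1)
cache.put("y", 2)
cache.invalidate("y")
cache.wake_up()
states = {k: s.name for k, (v, s) in cache.entries.items()}
```

After wake-up, the still-valid entry becomes uncertain while the invalidated one stays invalid.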
A Survey on Neural Network Based Minimization of Data Center in Power Consump... (IJSTA)
This document summarizes a survey on using neural networks to minimize data center power consumption. It discusses how load prediction using neural networks can balance server loads and optimize energy usage by selectively powering down unused servers. The document reviews several related works applying neural network prediction to schedule virtual machines and transition servers to low-power sleep states. The survey concludes neural network models trained on historical load data can accurately predict future loads to enable reliable and optimized load balancing across servers.
This document summarizes a research paper on developing an improved LEACH (Low-Energy Adaptive Clustering Hierarchy) communication protocol for energy efficient data mining in multi-feature sensor networks. It begins with background on wireless sensor networks and issues like energy efficiency. It then discusses the existing LEACH protocol and its drawbacks. The proposed improved LEACH protocol includes cluster heads, sub-cluster heads, and cluster nodes to address LEACH's limitations. This new version aims to minimize energy consumption during cluster formation and data aggregation in multi-feature sensor networks.
Multilink Routing in Wireless Sensor Networks with Integrated Approach for Hi... (rahulmonikasharma)
Nowadays people use sensors to communicate at a distance without intervention and without wires, so that communication is mobile, but sensors suffer from battery drainage. Various energy-efficient protocols for WSNs aim to successfully deliver data packets from the source sensor node to the base station. These protocols use parameters such as distance to identify the route, and spend a considerable amount of energy finding the minimum-distance path. Our aim is to formulate a protocol that calculates an efficient path while saving sensor energy, in order to enhance the lifetime of the network. In our project we propose the Optimum Path and Energy Aware Sensor Routing Protocol (OPEASRP), which uses load as a parameter for calculating the optimal path and LEACH for conserving node energy. At the same time we provide strong security to protect the network from different attacks: the protocol serves authorized multiple network users, and with the help of different security parameters the system gives the wireless sensor network a high level of security. A new energy-efficient algorithm is also used because it is difficult to crack.
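The idea of using load as the path-selection parameter can be illustrated with a small sketch (our own assumptions: edge cost = link distance plus a tunable multiple of the next hop's queued load; OPEASRP's actual cost function, LEACH integration, and security machinery are not specified in the abstract):

```python
import heapq

def load_aware_shortest_path(graph, load, src, dst, alpha=1.0):
    """Dijkstra search where each step costs the link distance plus
    alpha * (queued load at the next hop).  `graph` maps node -> {neighbor:
    distance}; `load` maps node -> queued packets.  Illustrative only."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            cost = d + w + alpha * load[v]
            if v not in dist or cost < dist[v]:
                dist[v] = cost
                prev[v] = u
                heapq.heappush(pq, (cost, v))
    # Reconstruct the chosen route from dst back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

graph = {
    'S':  {'A': 1.0, 'B': 1.0},
    'A':  {'S': 1.0, 'BS': 1.0},
    'B':  {'S': 1.0, 'BS': 1.0},
    'BS': {'A': 1.0, 'B': 1.0},
}
load = {'S': 0, 'A': 5, 'B': 0, 'BS': 0}
path, cost = load_aware_shortest_path(graph, load, 'S', 'BS')
```

With node A congested, the search prefers S to B to BS even though both routes have the same hop count.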
COMPRESSIVE DATA GATHERING TECHNIQUE BY AVOIDING CORRELATED DATA IN WSN (pharmaindexing)
This document proposes a technique for compressive data gathering in wireless sensor networks using mobile data collectors. It involves identifying correlated sensor data within clusters near polling points and defining a tour plan for mobile collectors that avoids visiting these correlated sensors. This is done using a spatial correlation method. The goal is to identify new optimal polling points by avoiding correlated sensors, which reduces the tour length of mobile collectors and the number of polling points needed. This extends the lifetime of the wireless sensor network. The document provides background on related work using mobile data collectors and discusses how the proposed approach improves upon prior methods.
The document discusses wireless sensor networks and describes their key characteristics. It notes that wireless sensor networks consist of low-power smart sensor nodes distributed over a large field to enable wireless sensing and data networking. The sensor nodes contain sensors, processors, memory, and radios. Wireless sensor networks can be either unstructured with dense node distribution or structured with few scattered nodes.
A Cooperative Cache Management Scheme for IEEE802.15.4 based Wireless Sensor ... (IJECEIAES)
Wireless Sensor Networks (WSNs) based on the IEEE 802.15.4 MAC and PHY layer standards are a recent trend in the market. They have gained tremendous attention due to their low energy consumption and low data rates. However, for larger networks, minimizing energy consumption is still an issue because of the dissemination of large overheads throughout the network. This energy consumption can be reduced by incorporating a novel cooperative caching scheme that minimizes overheads and serves data with minimal latency. This paper explores the possibilities of enhancing energy efficiency by incorporating a cooperative caching strategy.
QoS group based optimal retransmission medium access protocol for wireless se... (IJCNCJournal)
This paper presents a Group Based Optimal Retransmission Medium Access (GORMA) protocol that combines collision avoidance (CA) and energy management for low-cost, short-range, low-data-rate, low-energy sensor node applications in environment monitoring, agriculture, industrial plants, etc. GORMA focuses on an efficient MAC protocol that provides autonomous Quality of Service (QoS) to sensor nodes in a one-hop QoS retransmission group and two QoS groups in WSNs where the source nodes have no receiver circuits: they can only transmit data to a sink node and cannot receive any control signals from it. GORMA provides QoS to nodes that operate independently on a predefined schedule by allowing them to transmit each packet an optimal number of times within a given period. Simulation results show that GORMA maximizes the delivery probability of both the one-hop QoS group and the two QoS groups while minimizing energy consumption.
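The "optimal number of times" idea can be made concrete with a toy calculation (our own model, not GORMA's analysis): with no receiver circuit there are no ACKs, so a sender blindly repeats each packet, and the smallest repeat count that meets a delivery-probability target minimizes energy:

```python
def optimal_retransmissions(p_success, target, n_max=16):
    """With per-attempt success probability p, delivery probability after n
    blind copies is 1 - (1 - p)^n.  Return the smallest n meeting `target`
    (energy spent is proportional to n).  Illustrative model only."""
    for n in range(1, n_max + 1):
        if 1.0 - (1.0 - p_success) ** n >= target:
            return n
    return n_max  # target unreachable within the budget

# Example: 60% per-attempt success, 99% delivery target.
n = optimal_retransmissions(0.6, 0.99)
```

Here five copies give 1 - 0.4^5 = 0.9898, just under target, so six are needed.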
This document discusses approaches to improve reliability in wireless sensor networks. It proposes a dynamic sectoring scheme where the network area is divided into sectors with a sensor node assigned as sector head for each. When an event occurs, only that sector is activated, reducing congestion and energy use. This is expected to enhance packet delivery ratio and reduce losses. Prior work on using data fusion and opportunistic flooding algorithms to improve reliability is also reviewed. The dynamic sectoring approach aims to reliably transmit data with low congestion and energy usage.
Iaetsd extending sensor networks into the cloud using TPSS and LBSS (Iaetsd Iaetsd)
The document proposes two schemes, TPSS and LBSS, to improve the usefulness of sensory data and reliability of wireless sensor networks (WSNs) integrated with mobile cloud computing (MCC). TPSS uses a time and priority-based approach to selectively transmit sensor data to the cloud based on user requests. LBSS schedules sensor sleep states based on user location histories to optimize energy efficiency while maintaining reliable data collection. The schemes aim to balance useful data collection from WSNs with reliable delivery to mobile cloud users.
Security based Clock Synchronization technique in Wireless Sensor Network for... (iosrjce)
This document proposes a secure clock synchronization technique for wireless sensor networks used in event-driven measurement applications. The technique aims to 1) provide high synchronization accuracy around detected events, 2) ensure long network lifetime, and 3) provide secure packet transmission. It divides nodes into an improved synchronization subset (ISS) with high accuracy around events, and a default synchronization subset (DSS) with lower accuracy elsewhere. When an event is detected, neighboring nodes in the ISS exchange synchronization packets more frequently for better accuracy. Authentication is used to securely transmit packets and identify intercepted messages. Simulation results show the technique accurately records event occurrence times while maintaining network lifetime through efficient energy usage.
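As a rough illustration of what a synchronization-packet exchange buys (the paper's exact protocol is not given in this summary), the classic two-way timestamp exchange estimates a neighbour's clock offset under a symmetric-delay assumption:

```python
def estimate_offset(t1, t2, t3, t4):
    """Two-way timestamp exchange: node A sends at t1 (A's clock); B receives
    at t2 and replies at t3 (B's clock); A receives the reply at t4.
    Assuming symmetric link delay, B's offset relative to A and the one-way
    delay are recovered from the four timestamps.  Generic textbook formula,
    not the paper's specific scheme."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = ((t4 - t1) - (t3 - t2)) / 2.0
    return offset, delay

# Example: B's clock runs 5 units ahead of A's; one-way delay is 2 units.
offset, delay = estimate_offset(t1=100, t2=107, t3=108, t4=105)
```

A corrects its view of B's clock by subtracting the estimated offset; repeating the exchange more often near an event is what buys the ISS nodes their higher accuracy.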
Concealed Data Aggregation with Dynamic Intrusion Detection System to Remove ... (csandit)
Data aggregation is a vital aspect of WSNs (Wireless Sensor Networks) because it reduces the quantity of data to be transmitted over the complex network. Earlier studies used homomorphic encryption properties to conceal communication during aggregation, so that encrypted data can be aggregated algebraically without being decrypted. These schemes are not applicable to multiple applications, which led to the proposal of Concealed Data Aggregation for Multi Applications (CDAMA); it is designed for multiple applications and provides a secure counting ability. In wireless sensor networks, sensor nodes are unarmed and susceptible to attacks. Considering the defence aspect of the wireless environment, we use DYDOG (Dynamic Intrusion Detection Protocol Model) and a customized key-generation procedure that uses digital signatures and the Twofish algorithm, along with CDAMA, to augment security and throughput. To prove the robustness and effectiveness of the proposed scheme, we conducted simulations, inclusive analysis, and comparisons at the end.
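The homomorphic-aggregation idea referenced above can be demonstrated with a deliberately simplified additively homomorphic cipher (a toy modular scheme for illustration only; CDAMA itself uses asymmetric-key constructions not reproduced here):

```python
M = 10**6  # message-space modulus, assumed large enough for the summed readings

def encrypt(reading, key):
    # Additively homomorphic toy cipher: c = m + k (mod M).
    return (reading + key) % M

def aggregate(ciphertexts):
    # The aggregator adds ciphertexts without seeing any plaintext.
    return sum(ciphertexts) % M

def decrypt_sum(agg, keys):
    # The sink knows every node's key, so it strips the summed keys.
    return (agg - sum(keys)) % M

readings = [17, 42, 5]                 # individual sensor readings
keys = [123456, 654321, 111111]        # per-node keys shared with the sink
agg = aggregate(encrypt(m, k) for m, k in zip(readings, keys))
total = decrypt_sum(agg, keys)
```

The intermediate aggregator only ever handles ciphertexts, yet the sink recovers the exact sum of the readings.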
Comparison of Health Care System Architectures (IJEACS)
A body area sensor network is an important technology suitable for monitoring a patient's health and diagnosing diseases in real time. The body area network includes sensors, spread over the body or wearable clothing, and a coordinator node (a mobile phone, tablet, or PDA) that receives the signals from a person's sensors. In the new architecture the coordinator node sends the information to the central data server via the Internet, GPRS, or a MANET. The central data server is responsible for saving, analyzing, and presenting the received data in text and graphical form and for sending an SMS to the patient's family, the nearest ambulance, or a physician; alternatively, the operator can call them. The received information is analyzed by data-mining tools, and the necessary information is sent to the physician's computer. Every patient has a GPS, and encryption is assumed to be used for transferring information. In this paper the new architecture is compared with the traditional one, which includes a base station and relay nodes, and it is shown that the new architecture has less delay than the traditional one.
The document summarizes an enhanced version of the GPSR routing protocol for wireless sensor networks. It begins by describing the GPSR protocol and some of its limitations in wireless sensor network environments, such as asymmetric links and situations where the destination is outside the network boundary. It then proposes modifications to GPSR to address these issues. The enhanced protocol introduces aggregation nodes that are responsible for transmitting messages to distant base stations. It also utilizes a "head set" concept where a set of nodes takes turns transmitting data to save energy compared to always using the same node. The enhanced protocol is claimed to help ensure successful data delivery, reduce packet delay, and optimize energy consumption for wireless sensor networks.
This document summarizes a research paper that analyzes end-to-end delay distribution in wireless sensor networks. It first introduces the importance of average delay and end-to-end delay distribution for real-time quality of service in wireless sensor networks. It then discusses previous work that analyzed average delay but failed to consider single hop delay distribution or bursty traffic. The document proposes a comprehensive cross-layer analysis framework to model average delay and end-to-end delay distribution considering both deterministic and random node deployments. It also compares the performance of CSMA/CA and a cross-layer MAC protocol in terms of throughput, packet loss, and delay.
This document summarizes a survey of clustering algorithms for improving the lifetime of wireless sensor networks. It discusses how clustering partitions sensor networks into groups called clusters, with high-energy nodes acting as cluster heads. Clustering aims to reduce energy utilization and extend network lifetime by having cluster heads aggregate and transmit data to the base station on behalf of nodes in their clusters. The document reviews different types of clustering algorithms and heterogeneity in wireless sensor networks, noting that heterogeneity can further improve network lifetime, response time, and reliable data transmission through clustering.
Performance Analysis of Routing Protocols of Wireless Sensor Networks (Darpan Dekivadiya)
The document summarizes different types of routing protocols that can be used in wireless sensor networks. It categorizes the protocols based on their mode of functioning, participation style of nodes, and network structure. Some key routing protocols discussed include LEACH, which is a proactive clustering protocol, SPIN that uses direct communication, and TEEN which is a reactive clustering protocol. The document also discusses challenges in routing for wireless sensor networks given the constraints of sensor nodes.
This document discusses improving the performance of mobile wireless sensor networks using a modified DBSCAN clustering algorithm. It first provides background on wireless sensor networks and discusses challenges related to mobility. It then reviews several existing works related to clustering, mobility, and extending network lifetime. The paper proposes using a modified DBSCAN algorithm that takes into account mobility, remaining energy, and distance to base station to select cluster heads. It evaluates the performance of this approach based on throughput, delay, and packet delivery ratio, finding improvements over other methods.
Efficient IOT Based Sensor Data Analysis in Wireless Sensor Networks with Cloud (iosrjce)
The document proposes an efficient IoT-based sensor data analysis system in wireless sensor networks using cloud computing. It utilizes the Greedy Perimeter Stateless Routing (GPSR) algorithm to route sensor data to cloud storage. The system is evaluated through simulations analyzing parameters like packet delivery ratio, energy consumption, and delay.
Energy Efficient Data Aggregation in Wireless Sensor Networks: A Survey (ijsrd.com)
The use of Wireless Sensor Networks (WSNs) is anticipated to bring lot of changes in data gathering, processing and dissemination for different environments and applications. However, a WSN is a power constrained system, since nodes run on limited power batteries which shorten its lifespan. Prolonging the network lifetime depends on efficient management of sensing node energy resource. Energy consumption is therefore one of the most crucial design issues in WSN. Hierarchical routing protocols are best known in regard to energy efficiency. By using a clustering technique hierarchical routing protocols greatly minimize energy consumed in collecting and disseminating data. To prolong the lifetime of the sensor nodes, designing efficient routing protocols is critical. In this paper, we have discussed various energy efficient data aggregation protocols for sensor networks.
The document proposes an improved data routing protocol for wireless sensor networks. It aims to address deficiencies in existing chain-based routing protocols like Chiron and PEGASIS that can cause longer transmission delays and redundant paths. The key aspects of the proposed protocol are:
1) The sensing area is divided into fan-shaped groups using beamforming from the base station, instead of concentric clusters. Shorter chains are formed within each group for data transmission.
2) The node with maximum residual energy in each chain is elected as the chain leader, rather than taking turns, to aggregate and transmit data to the base station.
3) Transmission between chain leaders is optimized to avoid longer distances and redundant paths.
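Point 2 above, electing the maximum-residual-energy node as chain leader, reduces to a one-line selection (node IDs and energy values below are illustrative, not from the paper):

```python
def elect_chain_leader(chain):
    """Pick the node with the most residual energy in a chain to aggregate
    and transmit to the base station, instead of rotating leadership.
    `chain` is a list of (node_id, residual_energy_joules) pairs."""
    return max(chain, key=lambda node: node[1])[0]

chain = [('n1', 0.42), ('n2', 0.88), ('n3', 0.55)]
leader = elect_chain_leader(chain)
```

Re-running the election each round naturally shifts leadership away from nodes as they drain.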
Energy efficient routing in wireless sensor networks (Spandan Spandy)
The document summarizes several energy efficient multicast routing protocols for wireless sensor networks. It begins with an introduction to wireless sensor networks and routing challenges. It then summarizes the following protocols: MAODV, TEEN, APTEEN, SPEED, MMSPEED, RPAR, and LEACH. For each protocol, it provides a brief overview of the protocol's design, objectives, components, and how it aims to improve energy efficiency in wireless sensor network routing. The document concludes that providing energy-efficient multicast routing is important for wireless sensor network applications and that the protocols presented aim to achieve lower energy requirements through approaches like clustering, adaptive thresholding, and congestion control.
CODE AWARE DYNAMIC SOURCE ROUTING FOR DISTRIBUTED SENSOR NETWORK (IJNSA Journal)
Sensor networks facilitate monitoring and controlling of physical environments. These wireless networks consist of a dense collection of sensors capable of collecting and disseminating data, with applications in a variety of fields such as military purposes and environment monitoring. A typical deployment of a sensor network assumes a central processing station or gateway to which all other nodes route their data using dynamic source routing (DSR). This causes congestion at the central station and thus reduces the efficiency of the network. In this work we propose a better dynamic source routing technique that uses network coding to reduce the total number of transmissions in sensor networks, resulting in better efficiency.
Routing protocols for wireless sensor networks face several unique challenges compared to other wireless networks. This document discusses routing challenges in wireless sensor networks and provides an overview of different routing protocol approaches, including flat routing, hierarchical routing, location-based routing, and QoS-based routing. It specifically describes two flat routing protocols: directed diffusion, which uses data negotiation and aggregation to reduce energy costs, and SPIN, which employs data description messages to avoid redundant transmissions through negotiation between sensor nodes.
An Efficient Approach for Data Gathering and Sharing with Inter Node Communi... (cscpconf)
In today's era, Wireless Sensor Networks (WSNs) have emerged as a solution for a wide range of applications. Most traditional WSN architectures consist of static nodes densely deployed over a sensing area. Recently, several WSN architectures based on mobile elements (MEs) have been proposed; most of them exploit mobility to address the problem of data collection in WSNs, but their common drawback is data sharing between interconnected nodes. In this paper we propose an Efficient Approach for Data Gathering and Sharing with Inter Node Communication in a mobile sink. Our algorithm is divided into seven parts: Registration Phase, Authentication Phase, Request and Reply Phase, Setup Phase, Setup Phase (NN), Data Gathering, and Forwarding to Sink. Our approach provides an efficient way to handle data between intercommunicating nodes: data from a node that is not in the list can be accessed by sharing it through a node that can reach the desired node. Accessing and sharing require security so that data is exchanged only between authenticated nodes; for this we use a two-way security approach, one for the accessing node and one for the sharing node.
This document proposes an energy efficient three-level model for query optimization in wireless sensor networks (WSNs). At the three levels are: base station, cluster heads, and sensor nodes. The base station maintains metadata about cluster heads and sensor nodes. When a query is received, it first checks if the result is cached. If not, it checks the status of cluster heads and selects a new cluster head if needed. The query is then disseminated to cluster heads using a modified Bellman-Ford algorithm. Cluster heads aggregate data from relevant sensor nodes and send the result to the base station. This model aims to minimize communication costs during query processing in WSNs.
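The dissemination step can be grounded with plain Bellman-Ford, the base algorithm the model's modification builds on (the modification itself is not detailed in this summary; the topology below is illustrative):

```python
def bellman_ford(nodes, edges, source):
    """Compute least-cost routes from the base station to every cluster
    head.  `edges` is a list of (u, v, cost) directed links.  Standard
    relaxation for |nodes| - 1 rounds; no negative cycles assumed."""
    INF = float('inf')
    dist = {n: INF for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

nodes = ['BS', 'CH1', 'CH2', 'CH3']
edges = [('BS', 'CH1', 2), ('BS', 'CH2', 5), ('CH1', 'CH2', 1), ('CH2', 'CH3', 2)]
dist = bellman_ford(nodes, edges, 'BS')
```

The query follows these least-cost routes out to the cluster heads, which then aggregate sensor data and return the result to the base station.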
LOAD BALANCED CLUSTERING WITH MIMO UPLOADING TECHNIQUE FOR MOBILE DATA GATHER... (Munisekhar Gunapati)
A three-layer framework is proposed for mobile data collection in wireless sensor networks, comprising the sensor layer, the cluster head layer, and the mobile collector (called SenCar) layer. The framework employs distributed load balanced clustering and dual data uploading, referred to as LBC-MIMO. The objective is to achieve good scalability, long network lifetime, and low data collection latency. At the sensor layer, a distributed load balanced clustering (LBC) algorithm is proposed for sensors to self-organize into clusters. In contrast to existing clustering methods, the scheme generates multiple cluster heads in each cluster to balance the work load and facilitate dual data uploading. At the cluster head layer, the inter-cluster transmission range is carefully chosen to guarantee connectivity among the clusters. Multiple cluster heads within a cluster cooperate to perform energy-saving inter-cluster communications. Through inter-cluster transmissions, cluster head information is forwarded to SenCar for planning its moving trajectory. At the mobile collector layer, SenCar is equipped with two antennas, which enables two cluster heads to upload data to SenCar simultaneously by utilizing the multi-user multiple-input multiple-output (MU-MIMO) technique. The trajectory planning for SenCar is optimized to fully utilize the dual data uploading capability by properly selecting polling points in each cluster. By visiting each selected polling point, SenCar can efficiently gather data from cluster heads and transport the data to the static data sink. Extensive simulations are conducted to evaluate the effectiveness of the proposed LBC-MIMO scheme.
The results show that when each cluster has at most two cluster heads, LBC-MIMO achieves over 50 percent energy saving per node and 60 percent energy saving on cluster heads compared with data collection through multi-hop relay to the static data sink, and 20 percent shorter data collection time compared with traditional mobile data gathering.
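The multiple-heads-per-cluster idea can be sketched centrally (the paper's LBC algorithm is distributed; this is only an illustrative fragment under our own assumptions about the selection criterion):

```python
def pick_cluster_heads(cluster, k=2):
    """Elect up to k cluster heads per cluster, favouring residual energy,
    so the work load is balanced and two heads can later upload to SenCar's
    two antennas at once via MU-MIMO.  `cluster` is a list of
    (node_id, residual_energy) pairs; names and values are illustrative."""
    ranked = sorted(cluster, key=lambda node: node[1], reverse=True)
    return [node_id for node_id, energy in ranked[:k]]

cluster = [('s1', 0.31), ('s2', 0.95), ('s3', 0.64), ('s4', 0.88)]
heads = pick_cluster_heads(cluster)
```

With k = 2 the two highest-energy nodes become heads, matching the dual-antenna uploading slot.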
Feasibility study on treatment plan of scouring wastewater in the printing an... (eSAT Journals)
Abstract
With the rapid development of the printing and dyeing industry, the total emissions of printing and dyeing wastewater are growing day by day and their components are more and more complex. At present, a comprehensive treatment method is primarily used for sewage treatment of printing and dyeing wastewater; this not only increases the area required but also increases the processing cost. In this article, scouring wastewater in printing and dyeing wastewater is treated separately. The two kinds of processing methods are compared in terms of economy and technology, the feasibility of treating scouring wastewater separately is demonstrated, and a theoretical basis for diversified treatment of printing and dyeing wastewater is provided.
Keywords: Comprehensive treatment, economic comparison, feasibility study, individual treatment, printing and dyeing wastewater, scouring wastewater.
Abstract In the Software Engineering (SE) process, Requirement Engineering (RE) is considered an important part of the Software Development Life Cycle (SDLC). Requirement prioritization is very useful for making good decisions about the product plan, but most of the time it is ignored. In many cases the product fails to meet its core objectives because of a lack of proper prioritization. Increased emphasis on requirement prioritization and highly changing requirements make management of composite services time consuming and complicated. When a project has a tight schedule, restricted resources, and high customer expectations, it becomes necessary to deploy the most critical and important features as early as possible; the problem can be solved by prioritizing the requirements. A number of requirement prioritization methods already exist in the Software Engineering process. This paper compares some of these techniques and, based on their advantages and disadvantages, proposes a new technique, Adaptive Fuzzy Hierarchical Cumulative Voting. Fuzzy logic is used with an adaptation mechanism to target situations where composite service behaviour can deviate from customer expectations and to deal with uncertainty and ambiguity. The proposed Adaptive Fuzzy Hierarchical Cumulative Voting includes the analysis of different self-adaptive properties such as self-heal, self-configure, self-optimize, and self-protect, in addition to Fuzzy HCV, in order to increase the coverage of events that can occur at runtime. It may be useful to prioritize requirements at run time.
Keywords: Requirement prioritization, Fuzzy system, Fuzzy adaptation, HCV, Fuzzy HCV
Treatability study of cetp wastewater using physico chemical process-a case s...eSAT Journals
Abstract The present study is focused on a Common Effluent Treatment Plant (CETP) located at Umaraya, District Baroda. Waste water from about thirty five small and medium scale industries majorly comprising of chemical manufacturing and pharmaceutical industries are treated in this CETP. The incoming wastewater was collected and mixed to prepare samples. They were then oxidized by Fenton’s reagent (Fe2+/H2O2) reduction in COD and BOD were observed at different H2O2 and FeSO4 doses to determine the optimum values. Thereafter pretreated wastewater was subjected to filtration with ordinary charcoal and COD and BOD reductions were noted.COD and BOD reduction of 64.35% and 68.57% respectively was achieved by Fenton’s reagent and after filtration the values were well within the disposal standards. The results clearly indicate that conventional system should be replaced by physicochemical process like oxidation and filtration. Index Terms: CETP, COD and BOD reduction, Fenton’s Reagent, Charcoal Filtration
Production of plaster of paris using solar energyeSAT Journals
Abstract Plaster of Paris (POP) is an important building material. Most of the units producing POP are in small scale sector. These units use wood, coal to calcine gypsum. The average consumption of wood to produce one ton of POP is 300kg. The electrical energy constitutes only 5% while rest is thermal energy. Most of POP units are situated in western Rajasthan. This region has about 300-320 days of clear sun shine. Since thermal energy has major contribution in energy mix, it makes sense to supplement the same with concentrated solar technology. Experiments were conducted to establish feasibility. A commercial parabolic concentrator of 4 sqm was used to calcine small samples (5kg) and the result show great promise. An industrial method of producing POP using commercially available solar concentrator technologies (CST) has been proposed. The payback period is observed to be of the order of 4 years.
Keywords: Gypsum, Plaster of Paris, Solar energy, Scheffler reflector, parabolic concentrator
Iaetsd extending sensor networks into the cloud using tpss and lbss - Iaetsd Iaetsd
The document proposes two schemes, TPSS and LBSS, to improve the usefulness of sensory data and reliability of wireless sensor networks (WSNs) integrated with mobile cloud computing (MCC). TPSS uses a time and priority-based approach to selectively transmit sensor data to the cloud based on user requests. LBSS schedules sensor sleep states based on user location histories to optimize energy efficiency while maintaining reliable data collection. The schemes aim to balance useful data collection from WSNs with reliable delivery to mobile cloud users.
Security based Clock Synchronization technique in Wireless Sensor Network for... - iosrjce
This document proposes a secure clock synchronization technique for wireless sensor networks used in event-driven measurement applications. The technique aims to 1) provide high synchronization accuracy around detected events, 2) ensure long network lifetime, and 3) provide secure packet transmission. It divides nodes into an improved synchronization subset (ISS) with high accuracy around events, and a default synchronization subset (DSS) with lower accuracy elsewhere. When an event is detected, neighboring nodes in the ISS exchange synchronization packets more frequently for better accuracy. Authentication is used to securely transmit packets and identify intercepted messages. Simulation results show the technique accurately records event occurrence times while maintaining network lifetime through efficient energy usage.
Concealed Data Aggregation with Dynamic Intrusion Detection System to Remove ... - csandit
Data aggregation is a vital aspect of WSNs (Wireless Sensor Networks) because it reduces the quantity of data to be transmitted over the network. Earlier studies used homomorphic encryption properties to conceal data during aggregation, so that encrypted data can be aggregated algebraically without being decrypted. These schemes are not applicable to multi-application settings, which led to the proposal of Concealed Data Aggregation for Multi Applications (CDAMA). It is designed for multiple applications and provides a secure counting ability. In wireless sensor networks, sensor nodes are unattended and susceptible to attacks. Considering the defence aspect of the wireless environment, we have used DYDOG (Dynamic Intrusion Detection Protocol Model) and a customized key-generation procedure that uses digital signatures and the Twofish algorithm, along with CDAMA, to augment security and throughput. To prove the robustness and effectiveness of the proposed scheme, we conducted simulations, an inclusive analysis, and comparisons at the end.
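The core idea of algebraic aggregation over ciphertexts can be illustrated with a toy additively homomorphic scheme: each sensor masks its reading with a keystream value modulo M, masked values are summed without unmasking, and the sink removes the combined mask at the end. This is a minimal sketch for illustration only, not CDAMA or any real WSN encryption scheme; all names and the modulus are assumptions.

```python
# Toy additively homomorphic masking scheme (illustration only, NOT CDAMA):
# ciphertexts can be summed without decryption, mirroring the "aggregate
# algebraically without decrypting" property the abstract describes.
M = 10**6  # modulus large enough to hold the aggregate (assumption)

def encrypt(reading, key):
    # mask the reading with a per-sensor keystream value
    return (reading + key) % M

def aggregate(ciphertexts):
    # algebraic aggregation over encrypted values - no decryption needed
    total = 0
    for c in ciphertexts:
        total = (total + c) % M
    return total

def decrypt_sum(agg, keys):
    # the sink removes the combined mask to recover the plaintext sum
    return (agg - sum(keys)) % M

readings = [23, 17, 42]
keys = [1017, 2931, 555]
cts = [encrypt(r, k) for r, k in zip(readings, keys)]
assert decrypt_sum(aggregate(cts), keys) == sum(readings)
```

The aggregator never sees a plaintext reading, yet the sink recovers the exact sum, which is the behaviour homomorphic concealed aggregation relies on.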
Comparison of Health Care System Architectures - IJEACS
Body area sensor networks are an important technology suitable for monitoring a patient's health and diagnosing diseases in real time. A body area network includes sensors, which can be spread over the body or woven into wearable clothing, and a coordinator node (a mobile phone, tablet, or PDA) which receives the signals from a person's sensors. In the new architecture, the coordinator node sends the information to a central data server via the Internet, GPRS, or a MANET. The central data server is responsible for saving, analyzing, and presenting the received data in text and graphical form, and for sending an SMS to the patient's family, the nearest ambulance, or a physician; alternatively, the operator can call them. The received information is analyzed with data mining tools, and the necessary information is sent to the physician's computer. Every patient has a GPS, and it is assumed that encryption is used when transferring information. In this paper the new architecture is compared with the traditional one, which includes a base station and relay nodes, and it is shown that the new architecture has less delay than the traditional one.
The document summarizes an enhanced version of the GPSR routing protocol for wireless sensor networks. It begins by describing the GPSR protocol and some of its limitations in wireless sensor network environments, such as asymmetric links and situations where the destination is outside the network boundary. It then proposes modifications to GPSR to address these issues. The enhanced protocol introduces aggregation nodes that are responsible for transmitting messages to distant base stations. It also utilizes a "head set" concept where a set of nodes takes turns transmitting data to save energy compared to always using the same node. The enhanced protocol is claimed to help ensure successful data delivery, reduce packet delay, and optimize energy consumption for wireless sensor networks.
This document summarizes a research paper that analyzes end-to-end delay distribution in wireless sensor networks. It first introduces the importance of average delay and end-to-end delay distribution for real-time quality of service in wireless sensor networks. It then discusses previous work that analyzed average delay but failed to consider single hop delay distribution or bursty traffic. The document proposes a comprehensive cross-layer analysis framework to model average delay and end-to-end delay distribution considering both deterministic and random node deployments. It also compares the performance of CSMA/CA and a cross-layer MAC protocol in terms of throughput, packet loss, and delay.
This document summarizes a survey of clustering algorithms for improving the lifetime of wireless sensor networks. It discusses how clustering partitions sensor networks into groups called clusters, with high-energy nodes acting as cluster heads. Clustering aims to reduce energy utilization and extend network lifetime by having cluster heads aggregate and transmit data to the base station on behalf of nodes in their clusters. The document reviews different types of clustering algorithms and heterogeneity in wireless sensor networks, noting that heterogeneity can further improve network lifetime, response time, and reliable data transmission through clustering.
Performance Analysis of Routing Protocols of Wireless Sensor Networks - Darpan Dekivadiya
The document summarizes different types of routing protocols that can be used in wireless sensor networks. It categorizes the protocols based on their mode of functioning, participation style of nodes, and network structure. Some key routing protocols discussed include LEACH, which is a proactive clustering protocol, SPIN that uses direct communication, and TEEN which is a reactive clustering protocol. The document also discusses challenges in routing for wireless sensor networks given the constraints of sensor nodes.
This document discusses improving the performance of mobile wireless sensor networks using a modified DBSCAN clustering algorithm. It first provides background on wireless sensor networks and discusses challenges related to mobility. It then reviews several existing works related to clustering, mobility, and extending network lifetime. The paper proposes using a modified DBSCAN algorithm that takes into account mobility, remaining energy, and distance to base station to select cluster heads. It evaluates the performance of this approach based on throughput, delay, and packet delivery ratio, finding improvements over other methods.
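The cluster-head selection described above, which weighs mobility, remaining energy, and distance to the base station, can be sketched as a weighted score. The weights, field names, and the exact scoring function below are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch of cluster-head election over three factors the paper
# names: remaining energy (good), mobility and distance to the base
# station (both count against a candidate). Weights are assumptions.
def ch_score(node, w_energy=0.5, w_mobility=0.3, w_distance=0.2):
    return (w_energy * node["energy"]
            - w_mobility * node["mobility"]
            - w_distance * node["dist_to_bs"])

def elect_cluster_head(cluster):
    # the highest-scoring node becomes cluster head
    return max(cluster, key=ch_score)

# energy, mobility, and distance are all normalized to [0, 1] here
cluster = [
    {"id": 1, "energy": 0.9, "mobility": 0.1, "dist_to_bs": 0.5},
    {"id": 2, "energy": 0.6, "mobility": 0.0, "dist_to_bs": 0.2},
    {"id": 3, "energy": 0.9, "mobility": 0.8, "dist_to_bs": 0.8},
]
print(elect_cluster_head(cluster)["id"])
```

Normalizing the three factors to a common range before weighting matters; otherwise a raw distance in meters would dominate the unit-range energy term.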
Efficient IOT Based Sensor Data Analysis in Wireless Sensor Networks with Cloud - iosrjce
The document proposes an efficient IoT-based sensor data analysis system in wireless sensor networks using cloud computing. It utilizes the Greedy Perimeter Stateless Routing (GPSR) algorithm to route sensor data to cloud storage. The system is evaluated through simulations analyzing parameters like packet delivery ratio, energy consumption, and delay.
Energy Efficient Data Aggregation in Wireless Sensor Networks: A Survey - ijsrd.com
The use of Wireless Sensor Networks (WSNs) is anticipated to bring many changes to data gathering, processing, and dissemination for different environments and applications. However, a WSN is a power-constrained system, since its nodes run on batteries of limited capacity, which shortens its lifespan. Prolonging the network lifetime depends on efficient management of the sensing nodes' energy resources; energy consumption is therefore one of the most crucial design issues in WSNs. Hierarchical routing protocols are best known with regard to energy efficiency: by using a clustering technique, they greatly minimize the energy consumed in collecting and disseminating data. To prolong the lifetime of the sensor nodes, designing efficient routing protocols is critical. In this paper, we discuss various energy-efficient data aggregation protocols for sensor networks.
The document proposes an improved data routing protocol for wireless sensor networks. It aims to address deficiencies in existing chain-based routing protocols like Chiron and PEGASIS that can cause longer transmission delays and redundant paths. The key aspects of the proposed protocol are:
1) The sensing area is divided into fan-shaped groups using beamforming from the base station, instead of concentric clusters. Shorter chains are formed within each group for data transmission.
2) The node with maximum residual energy in each chain is elected as the chain leader, rather than taking turns, to aggregate and transmit data to the base station.
3) Transmission between chain leaders is optimized to avoid longer distances and redundant paths.
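The first two steps above, fan-shaped grouping around the base station and electing the maximum-residual-energy node as chain leader, can be sketched as follows. The sector count, node fields, and data layout are illustrative assumptions, not the paper's exact construction.

```python
import math

# Sketch of fan-shaped grouping: nodes are binned into angular sectors
# around the base station, and within each sector the node with the most
# residual energy is elected chain leader (rather than taking turns).
def sector_of(node, base, n_sectors=4):
    angle = math.atan2(node["y"] - base[1], node["x"] - base[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_sectors))

def chain_leaders(nodes, base, n_sectors=4):
    groups = {}
    for n in nodes:
        groups.setdefault(sector_of(n, base, n_sectors), []).append(n)
    # maximum-residual-energy node leads each fan-shaped group
    return {s: max(g, key=lambda n: n["energy"])["id"] for s, g in groups.items()}

nodes = [
    {"id": "a", "x": 5, "y": 1, "energy": 0.7},
    {"id": "b", "x": 6, "y": 2, "energy": 0.9},
    {"id": "c", "x": -4, "y": 3, "energy": 0.5},
]
base = (0, 0)
print(chain_leaders(nodes, base))
```

Here nodes a and b fall in the same sector, so the higher-energy b leads their chain while c leads its own.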
Energy efficient routing in wireless sensor networks - Spandan Spandy
The document summarizes several energy efficient multicast routing protocols for wireless sensor networks. It begins with an introduction to wireless sensor networks and routing challenges. It then summarizes the following protocols: MAODV, TEEN, APTEEN, SPEED, MMSPEED, RPAR, and LEACH. For each protocol, it provides a brief overview of the protocol's design, objectives, components, and how it aims to improve energy efficiency in wireless sensor network routing. The document concludes that providing energy-efficient multicast routing is important for wireless sensor network applications and that the protocols presented aim to achieve lower energy requirements through approaches like clustering, adaptive thresholding, and congestion control.
CODE AWARE DYNAMIC SOURCE ROUTING FOR DISTRIBUTED SENSOR NETWORK - IJNSA Journal
Sensor network facilitates monitoring and controlling of physical environments. These wireless networks consist of dense collection of sensors capable of collection and dissemination of data. They have application in variety of fields such as military purposes, environment monitoring etc. Typical deployment of sensor network assumes central processing station or a gateway to which all other nodes route their data using dynamic source routing (DSR). This causes congestion at central station and thus reduces the efficiency of the network. In this work we will propose a better dynamic source routing technique using network coding to reduce total number of transmission in sensor networks resulting in better efficiency.
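The transmission saving that network coding offers can be shown with the classic XOR example: a relay combines two packets into one broadcast, and each receiver that already holds one packet recovers the other. This is a minimal generic sketch of the coding primitive, not the paper's specific protocol.

```python
# Minimal XOR network-coding illustration: the relay broadcasts one coded
# packet instead of forwarding two, so three transmissions replace four.
def xor_packets(p1, p2):
    # assumes equal-length packets
    return bytes(a ^ b for a, b in zip(p1, p2))

pkt_a = b"DATA-A"
pkt_b = b"DATA-B"
coded = xor_packets(pkt_a, pkt_b)   # relay broadcasts this once

# a receiver holding pkt_a recovers pkt_b, and vice versa
assert xor_packets(coded, pkt_a) == pkt_b
assert xor_packets(coded, pkt_b) == pkt_a
```

Fewer transmissions per delivered packet translate directly into less congestion at the central station and lower radio energy, which is the efficiency gain the proposal targets.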
Routing protocols for wireless sensor networks face several unique challenges compared to other wireless networks. This document discusses routing challenges in wireless sensor networks and provides an overview of different routing protocol approaches, including flat routing, hierarchical routing, location-based routing, and QoS-based routing. It specifically describes two flat routing protocols: directed diffusion, which uses data negotiation and aggregation to reduce energy costs, and SPIN, which employs data description messages to avoid redundant transmissions through negotiation between sensor nodes.
An Efficient Approach for Data Gathering and Sharing with Inter Node Communi... - cscpconf
In today's era, wireless sensor networks (WSNs) have emerged as a solution for a wide range of applications. Most traditional WSN architectures consist of static nodes that are densely deployed over a sensing area. Recently, several WSN architectures based on mobile elements (MEs) have been proposed, most of which exploit mobility to address the problem of data collection in WSNs. Their common drawback is data sharing between interconnected nodes. In this paper we propose an Efficient Approach for Data Gathering and Sharing with Inter Node Communication in Mobile-Sink. Our algorithm is divided into seven parts: Registration Phase, Authentication Phase, Request and Reply Phase, Setup Phase, Setup Phase (NN), Data Gathering, and Forwarding to Sink. Our approach provides an efficient way to handle data between intercommunicating nodes: data from a node that is not in the list can be accessed by sharing it through a node that can reach the desired node. Accessing and sharing require some security so that data is shared only between authenticated nodes; for this we use a two-way security approach, one for the accessing node and the other for the sharing node.
This document proposes an energy efficient three-level model for query optimization in wireless sensor networks (WSNs). The three levels are the base station, the cluster heads, and the sensor nodes. The base station maintains metadata about cluster heads and sensor nodes. When a query is received, it first checks if the result is cached. If not, it checks the status of cluster heads and selects a new cluster head if needed. The query is then disseminated to cluster heads using a modified Bellman-Ford algorithm. Cluster heads aggregate data from relevant sensor nodes and send the result to the base station. This model aims to minimize communication costs during query processing in WSNs.
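The cache-then-disseminate flow at the base station can be sketched with a standard Bellman-Ford pass over the cluster-head graph. The graph, edge costs, cache structure, and function names are illustrative assumptions; the paper's modified Bellman-Ford variant is not reproduced here.

```python
# Sketch of the base-station query path: serve from cache if possible,
# otherwise compute least-cost routes to cluster heads with Bellman-Ford
# before disseminating the query.
def bellman_ford(edges, n_nodes, source):
    dist = [float("inf")] * n_nodes
    dist[source] = 0
    for _ in range(n_nodes - 1):       # relax all edges n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

query_cache = {}

def answer_query(query, edges, n_nodes, base_station=0):
    if query in query_cache:            # level 1: cached result
        return query_cache[query]
    # level 2: least-cost routes from the base station to cluster heads
    result = {"routes": bellman_ford(edges, n_nodes, base_station)}
    query_cache[query] = result         # cache for repeated queries
    return result

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 5)]
print(answer_query("avg_temp", edges, 4)["routes"])
```

A repeated query for the same result never touches the routing step, which is exactly how caching cuts communication cost in the three-level model.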
LOAD BALANCED CLUSTERING WITH MIMO UPLOADING TECHNIQUE FOR MOBILE DATA GATHER... - Munisekhar Gunapati
A three-layer framework is proposed for mobile data collection in wireless sensor networks, which includes the sensor layer, cluster head layer, and mobile collector (called SenCar) layer. The framework employs distributed load balanced clustering and dual data uploading, which is referred to as LBC-MIMO. The objective is to achieve good scalability, long network lifetime and low data collection latency. At the sensor layer, a distributed load balanced clustering (LBC) algorithm is proposed for sensors to self-organize themselves into clusters. In contrast to existing clustering methods, our scheme generates multiple cluster heads in each cluster to balance the work load and facilitate dual data uploading. At the cluster head layer, the inter-cluster transmission range is carefully chosen to guarantee the connectivity among the clusters. Multiple cluster heads within a cluster cooperate with each other to perform energy-saving inter-cluster communications. Through inter-cluster transmissions, cluster head information is forwarded to SenCar for its moving trajectory planning. At the mobile collector layer, SenCar is equipped with two antennas, which enables two cluster heads to simultaneously upload data to SenCar at a time by utilizing the multi-user multiple-input and multiple-output (MU-MIMO) technique. The trajectory planning for SenCar is optimized to fully utilize the dual data uploading capability by properly selecting polling points in each cluster. By visiting each selected polling point, SenCar can efficiently gather data from cluster heads and transport the data to the static data sink. Extensive simulations are conducted to evaluate the effectiveness of the proposed LBC-MIMO scheme.
The results show that when each cluster has at most two cluster heads, LBC-MIMO achieves over 50 percent energy saving per node and 60 percent energy saving on cluster heads compared with data collection through multi-hop relay to the static data sink, and 20 percent shorter data collection time compared to traditional mobile data gathering.
Feasibility study on treatment plan of scouring wastewater in the printing an... - eSAT Journals
Abstract
With the rapid development of the printing and dyeing industry, total emissions of printing and dyeing wastewater are growing day by day and its components are becoming more and more complex. At present, comprehensive treatment is the primary method used for printing and dyeing wastewater. This not only increases the plant area required but also increases the processing cost. In this article, the scouring wastewater within printing and dyeing wastewater is treated separately. The two processing methods are compared in terms of economy and technology, the feasibility of treating scouring wastewater separately is demonstrated, and a theoretical basis is provided for the diversified treatment of printing and dyeing wastewater.
Keywords: Comprehensive treatment, economic comparison, feasibility study, individual treatment, printing and dyeing wastewater, scouring wastewater.
Abstract In the Software Engineering (SE) process, Requirement Engineering (RE) is considered an important part of the Software Development Life Cycle (SDLC). Requirement prioritization is very useful for making good decisions about the product plan, but most of the time it is ignored. In many cases the product fails to meet its core objectives because of a lack of proper prioritization. Increased emphasis on requirement prioritization and rapidly changing requirements make the management of composite services a time-consuming and complicated task. When a project has a tight schedule, restricted resources, and high customer expectations, it becomes necessary to deploy the most critical and important features as early as possible. This problem can be solved by prioritizing the requirements. A number of requirement prioritization methods already exist in the Software Engineering process. This paper compares some of these techniques and, based on their advantages and disadvantages, proposes a new technique, 'Adaptive Fuzzy Hierarchical Cumulative Voting'. Fuzzy logic is used with an adaptation mechanism to target situations where composite service behaviour can deviate from customer expectations, and also to deal with uncertainty and ambiguity. The proposed Adaptive Fuzzy Hierarchical Cumulative Voting adds the analysis of different self-adaptive properties, such as self-heal, self-configure, self-optimize, and self-protect, to Fuzzy HCV in order to increase the coverage of events that can occur at runtime. It may be useful for prioritizing requirements at run time. Keywords: Requirement prioritization, Fuzzy system, Fuzzy adaptation, HCV, Fuzzy HCV
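The base mechanism underlying HCV, plain cumulative voting, can be sketched in a few lines: each stakeholder distributes a budget of points across requirements, and priorities are the normalized point totals. The ballot structure and 100-point budget below are illustrative; the fuzzy and adaptive layers the abstract proposes are not modeled here.

```python
# Minimal cumulative-voting sketch (the base of Hierarchical Cumulative
# Voting): sum each requirement's points over all stakeholder ballots,
# then normalize so the priorities sum to 1.
def prioritize(ballots):
    totals = {}
    for ballot in ballots:
        for req, pts in ballot.items():
            totals[req] = totals.get(req, 0) + pts
    grand = sum(totals.values())
    # highest-priority requirement first
    return sorted(((req, pts / grand) for req, pts in totals.items()),
                  key=lambda kv: kv[1], reverse=True)

# each stakeholder distributes 100 points (hypothetical requirements)
ballots = [
    {"login": 50, "search": 30, "export": 20},
    {"login": 20, "search": 60, "export": 20},
]
print(prioritize(ballots))
```

Fuzzy HCV replaces the crisp point values with fuzzy memberships, but the aggregation-and-normalize skeleton stays the same.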
Treatability study of cetp wastewater using physico chemical process-a case s... - eSAT Journals
Abstract The present study focuses on a Common Effluent Treatment Plant (CETP) located at Umaraya, District Baroda. Wastewater from about thirty-five small and medium scale industries, mainly chemical manufacturing and pharmaceutical industries, is treated in this CETP. The incoming wastewater was collected and mixed to prepare samples. The samples were then oxidized by Fenton's reagent (Fe2+/H2O2); the reduction in COD and BOD was observed at different H2O2 and FeSO4 doses to determine the optimum values. Thereafter the pretreated wastewater was subjected to filtration with ordinary charcoal and the COD and BOD reductions were noted. COD and BOD reductions of 64.35% and 68.57% respectively were achieved by Fenton's reagent, and after filtration the values were well within the disposal standards. The results clearly indicate that the conventional system should be replaced by physicochemical processes such as oxidation and filtration. Index Terms: CETP, COD and BOD reduction, Fenton's Reagent, Charcoal Filtration
Production of plaster of paris using solar energy - eSAT Journals
Abstract Plaster of Paris (POP) is an important building material. Most of the units producing POP are in the small-scale sector. These units use wood and coal to calcine gypsum; the average consumption of wood to produce one ton of POP is 300 kg. Electrical energy constitutes only 5% of the energy used, while the rest is thermal energy. Most POP units are situated in western Rajasthan, a region with about 300-320 days of clear sunshine per year. Since thermal energy is the major contributor to the energy mix, it makes sense to supplement it with concentrated solar technology. Experiments were conducted to establish feasibility: a commercial parabolic concentrator of 4 sq m was used to calcine small samples (5 kg), and the results show great promise. An industrial method of producing POP using commercially available concentrated solar technologies (CST) has been proposed. The payback period is observed to be of the order of 4 years.
Keywords: Gypsum, Plaster of Paris, Solar energy, Scheffler reflector, parabolic concentrator
Gesture control algorithm for personal computers - eSAT Journals
Abstract As our dependency on computers increases every day, these intelligent machines are making inroads into our daily life and society. This requires friendlier methods of human-computer interaction (HCI) than the conventional interaction devices (mouse and keyboard), which are unnatural and at times cumbersome to use (for example, by disabled people). Gesture recognition can be useful under such conditions and provides intuitive interaction. Gestures are a natural and intuitive means of communication and mostly originate from the hands or face. This work introduces a hand gesture recognition system that recognizes the user's gestures (finger movements) in real time in unconstrained environments. It is an effort to adapt computers to our natural means of communication: speech and hand movements. All the work has been done using Matlab 2011b in a real-time environment, which provides a robust technique for feature extraction and speech processing. A USB 2.0 camera continuously tracks the movement of the user's finger, which is covered with a red marker, by filtering out the green and blue colors from the RGB color space. Java-based commands are used to implement mouse movement through the moving finger and a GUI keyboard. A microphone is then used for speech processing to instruct the system to click on a particular icon or folder anywhere on the screen. Thus it is possible to take control of the whole computer system. Experimental results show that the proposed method has high accuracy and outperforms sub-gesture-modeling-based methods [5]. Keywords: Hand Gesture Recognition (HGR), Human-Computer Interaction (HCI), Intuitive Interaction
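The red-marker segmentation step, keeping pixels where red dominates green and blue, can be sketched in plain Python on a tiny frame. The threshold, frame layout, and centroid step are assumptions for illustration; the abstract's Matlab pipeline is not reproduced.

```python
# Toy version of the red-marker tracking step: build a binary mask of
# pixels whose red channel exceeds green and blue by a margin, then take
# the centroid of the mask as the finger position.
def red_mask(image, margin=60):
    # image: rows of (R, G, B) tuples -> rows of 0/1 mask values
    return [[1 if r - max(g, b) > margin else 0 for (r, g, b) in row]
            for row in image]

def marker_centroid(mask):
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    if not pts:
        return None  # no marker visible in this frame
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

# 2x2 hypothetical frame: two reddish pixels, one dark, one green
frame = [
    [(10, 10, 10), (200, 30, 20)],
    [(220, 40, 10), (15, 200, 20)],
]
print(marker_centroid(red_mask(frame)))
```

Running this per camera frame and mapping the centroid to screen coordinates is the essence of driving the mouse pointer with the marked finger.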
A survey on various resource allocation policies in cloud computing environment - eSAT Journals
Abstract Cloud computing is bringing a revolution to the computing environment, replacing traditional software installations and licensing issues with complete on-demand services through the Internet. In cloud computing, multiple cloud users can request a number of cloud services simultaneously, so there must be a provision that makes all resources available to the requesting users efficiently enough to satisfy their needs. Resource allocation is based on quality of service and service level agreements. There are several methods for allocating resources to users in a cloud computing environment, but the provider should consider the most efficient way to guarantee that the applications' requirements are attended to correctly and that the users' needs are satisfied. This paper surveys different resource allocation policies used in the cloud computing environment. Keywords: Cloud computing, Resource allocation
Design of vco using current mode logic with low supply sensitivity - eSAT Journals
Abstract The function of an oscillator is to convert DC energy into RF. A Voltage Controlled Oscillator (VCO) is a type of oscillator whose output frequency can be varied by a control voltage. Here the VCO is designed in 0.18 μm CMOS-RF technology using current mode logic (CML). Current mode logic is the fastest logic family and is mainly used in clock and data recovery applications; it is a differential logic and provides noise immunity. With the proposed design, the current mode logic VCO operates with a supply voltage of 1.8 V, and its frequency varies from 2 MHz to 2.5 MHz. Its normalized supply sensitivity is 0.976 and its optimum sensitivity is 0.966. Keywords: VCO, Supply sensitivity, CML, Ring oscillator.
Optimization study on trailer arm chassis by finite element method - eSAT Journals
Abstract: The chassis is an important part of an automobile. It supports the body and different parts of the automobile. The chassis consists of the engine, brakes, power train, steering system, and wheels mounted on a frame. The frame is the main part of the chassis, on which the remaining parts of the chassis are placed. The chassis should be rigid enough to withstand the twist, shock, stresses, vibrations, and bending moments to which it is subjected while the vehicle is moving on the road. The trailer arm chassis frame has to carry and sustain the heavy loads applied to it, so the design and analysis of the trailer arm chassis frame is very important. The design of the trailer arm chassis is carried out by taking the base model structure of the chassis as a standard. An optimization technique to redesign the chassis (frame) of a trailer is carried out here. The trailer has dimensional limits and must reduce its overall size and shape while still lifting the same amount of load. Different load cases with given boundary conditions and loadings are used. To improve the performance of the chassis, finite element analysis is carried out for various alternatives. The analysis is performed by varying the thickness, shape, and material to obtain the best possible design of the chassis. Static analysis is carried out for both the basic and modified designs to determine the high-stress regions, maximum displacement, and normal stress at critical positions of the chassis. Normal modal analysis of the base model is carried out to find the first natural frequency, and normal modal analysis of all modified designs is carried out to improve the first natural frequency of the chassis in order to avoid resonance. The whole task, from pre-processing through analysis to post-processing, is completed using Altair's HyperMesh, Abaqus, and HyperView FE packages. Key Words: Trailer arm chassis, Static analysis and Modal analysis.
Acoustic emission condition monitoring an application for wind turbine fault... - eSAT Journals
Abstract Low-speed rotating machines, which are the most critical components in the drive train of wind turbines, are often threatened by several technical and environmental defects. These factors increase the economic need for health monitoring and condition monitoring of such systems. When a defect occurs in such a system, the resulting energy release rates are small, so condition monitoring techniques that detect energy loss are very difficult, if not impossible, to use. However, the Acoustic Emission (AE) technique partly overcomes this issue and is well suited for detecting very small energy release rates. AE as a technique is more than 50 years old; it detects the sounds associated with the failure of materials. An acoustic wave is a non-stationary signal that can reveal elastic stress waves in a failing component, is capable of online monitoring, and is very sensitive for fault diagnosis. This paper discusses the history and background of the discovery and development of AE, covering its Age of Enlightenment (1950-1967), the Golden Age of AE (1967-1980), and the Period of Transition (1980-present). The next section discusses the application of AE condition monitoring in machinery processes and the various systems that have applied the AE technique to their health monitoring. Finally, an experimental result from a QUT test rig is presented, in which an outer-race bearing fault was simulated to show the sensitivity of AE for detecting incipient faults in a low-speed, high-frequency machine. Index Terms: Low speed rotating machine, Condition Monitoring Systems, Acoustic Emission (AE)
Features of wsn and various routing techniques for wsn a survey - eSAT Journals
Abstract A wireless sensor network is a collection of a large number of sensor nodes that are technically and economically feasible and that measure the ambient conditions in the environment surrounding them. The difference between usual wireless networks and WSNs is that sensors are sensitive to energy consumption. Most attention is given to routing protocols, with respect to energy awareness, since they may differ depending on the application and network architecture. Routing techniques for WSNs are classified into three categories based on network structure: flat, hierarchical, and location-based routing. Furthermore, these protocols can be classified as multipath-based, query-based, negotiation-based, QoS-based, or coherent-based, depending on the protocol operation. This paper presents a survey of routing techniques in WSNs and outlines the design challenges and performance metrics for routing protocols in WSNs. Finally, we highlight the advantages and performance issues of different routing techniques through a comparative analysis, and describe future directions for routing in sensor networks. Index Terms: Wireless sensor network, Routing techniques, Routing challenges and future directions.
Captcha a security measure against spam attacks - eSAT Journals
Abstract A CAPTCHA is a challenge-response test used to ensure that a response is generated by a human. CAPTCHA tests are administered by machines to differentiate between humans and machines; for this reason CAPTCHAs are also known as Reverse Turing Tests, in contrast to the Turing Test, which is administered by humans. A CAPTCHA is used as a simple puzzle that restricts various automated programs (also known as Internet bots) from signing up for e-mail accounts, cracking passwords, sending spam, etc. A common type of CAPTCHA requires the user to recognize the letters in a distorted image: a normal human can easily recognize them, while a bot cannot. In short, a CAPTCHA program challenges automated programs that try to access private data, and so helps prevent unauthorized automated spamming programs from accessing personal mail accounts. Index Terms: CAPTCHA, Security, Spam Attacks, Reverse Turing Test.
Emission characteristics of a diesel engine using soyabean oil and diesel blendseSAT Journals
Abstract Diesel engines have played a vital role in the transportation and power generation sectors since their invention. Despite the better efficiency of the diesel engine, the main concern is its emission of pollutants. There are various methods to reduce pollutant emissions from a diesel engine; the most prominent is the use of biofuels with some modifications to the engine. Diesel engine simulation models can be used to understand combustion performance and to predict emission concentrations, and they can reduce the number of experiments required. In this study, the performance and emission characteristics of a single-cylinder, four-stroke, direct-injection diesel engine operating on diesel and soybean blends have been investigated theoretically. The variations of species concentrations such as CO2, CO and NOx with equivalence ratio have been analysed using diesel engine simulation models. Computer simulation has contributed enormously to new evaluations in the field of internal combustion engines, and mathematical tools have become very popular in recent years owing to continuously increasing computational power. Index terms: Emissions, Bio fuels, Simulation Models
Effect of cobalt chloride on the oxygen consumption and ventilation rate of a...eSAT Journals
Abstract The fish Cirrhinus mrigala (Ham.) exposed to lethal and sublethal concentrations of cobalt chloride at selected periods showed a decrease in ventilation rate of up to 27.91% in the lethal concentration at 240 hr of exposure, while in sublethal concentrations the rate initially increased by up to 23.95% and 27.91% at 96 and 240 hr of exposure, followed by a decline of up to 24.70% and 12.94% at 960 hr of exposure to the 39.45 and 13.10 mg/l concentrations, respectively. The O2 uptake rate initially increased and then declined by up to 54.47% at 240 hr of exposure to the lethal concentration (92.00 mg/l), and by up to 28.80% and 10.65% in the sublethal concentrations at 960 hr of exposure. Keywords: O2 uptake; ventilation rate; Cirrhinus mrigala; cobalt chloride
Dynamic selection of cluster head in in networks for energy managementeSAT Journals
Abstract In this paper, we present the Multipath Region Routing (MRR) protocol for energy conservation in Wireless Sensor Networks (WSNs). Large-scale dense WSNs are used in many types of applications for accurate monitoring, and energy conservation is an important issue in them. To save energy, the Multipath Region Routing protocol balances energy consumption and sustains the network lifespan. With this method, energy dissipation is reduced because the cluster head collects data directly from the other nodes; hence, energy is preserved and the network lifetime is extended to a reasonable time. Keywords: Clustering; Wireless Sensor Networks; Security; Multipath Region Routing
Multistage interconnection networks a transition to opticaleSAT Journals
Abstract Several types of interconnection networks have been proposed for parallel processing. One of them is the Multistage Interconnection Network (MIN). As a way of communicating between processors and memory, MINs are less complex and offer good fault tolerance, high reliability, fast communication and low cost. To achieve reliable, fast and flexible communication, optical interconnects are necessary: through optical networks we can achieve high performance and low latency. For all their advantages over electronic networks, optical MINs (OMINs) also have their own demands and problems. This paper focuses on the qualities of optical MINs, compares them with electronic MINs in terms of design issues, and gives some solution approaches for optical MINs to mitigate crosstalk.
Index Terms— Multistage Interconnection Networks (MIN), Optical networks, Crosstalk, Window methods.
Analytical study and implementation of digital excitation system for diesel g...eSAT Journals
Abstract
This paper presents a study of a digital excitation system (a microcontroller-based automatic voltage regulator) employed for synchronous generators. The main component of the digital excitation system is the Automatic Voltage Regulator (AVR), which maintains the machine terminal voltage at a desired reference that can be varied to meet different power system requirements. The digital excitation system is built around a digital signal processor, which captures variables, performs mathematical operations and control algorithms, and sends control signals to the actuator. The system includes the automatic voltage regulator (automatic/manual control), an over-excitation limiter, an under-excitation limiter and a volts/hertz limiter. The exciter is controlled by a three-phase SCR bridge.
Index Terms: Digital excitation system, Automatic Voltage Regulator, SCR bridge, synchronous generator, dsPIC30F4012.
Effect of process parameters on tensile strength in gas metal arc welded join...eSAT Journals
Abstract Aluminum and its alloys are widely used owing to their light weight, moderate strength and good corrosion resistance. Aluminum alloy 7075-T6 in particular has been researched as a potential aircraft material. This alloy is difficult to weld using conventional welding techniques like GTAW and GMAW. In this paper an attempt has been made to weld 7075-T6 alloy using GMAW with argon as the shielding gas. The welding experiments were carried out at Odor, Chennai, and the tensile strength was tested at the Ministry of Micro, Small and Medium Enterprises Testing Centre, Government of India, Chennai, with the aim of improving tensile strength. To relate the important welding parameters current (I), voltage (V), welding speed (WS) and gas flow (GS) (the predictors) to tensile strength (the response), multiple regression was chosen; the model was validated with SPSS 16 and the transfer function with interactions between the predictors was formulated using Minitab 15. Keywords: AA 7075 aluminum alloy, Gas metal arc welding, Regression model, Response surface model
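The predictor-response fit described in the abstract is ordinary multiple linear regression. A minimal sketch follows; the data and coefficients below are synthetic, not the paper's fitted model, and serve only to show how strength would be regressed on I, V, WS and GS.

```python
import numpy as np

def fit_ols(X, y):
    """Return ordinary-least-squares coefficients for y ~ [1, X] @ beta."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    A = np.column_stack([np.ones(len(X)), X])
    return A @ beta

# Synthetic design matrix: 40 welds with I (A), V (V), WS (mm/min), GS (l/min)
rng = np.random.default_rng(0)
X = rng.uniform([150, 20, 200, 10], [250, 30, 400, 20], size=(40, 4))
true = np.array([50.0, 0.4, 2.0, -0.05, 1.5])  # hypothetical coefficients
y = np.column_stack([np.ones(40), X]) @ true    # noise-free responses
beta = fit_ols(X, y)
```

On noise-free data the fit recovers the generating coefficients exactly; in practice interaction columns (e.g. I*V) would be appended to X, as the paper does via Minitab.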
Study on the fire resistant design of reinforced concrete flexural memberseSAT Journals
Abstract The inherent fire resistance of concrete is one of its benefits. This work focuses on the fire-resistant design of RC flexural members, viz. beams and slabs, using the finite element software ANSYS 13. Both thermal and thermo-structural analyses were carried out for various parameters. The thermal analysis takes into account several parameters, viz. aggregate type, cover to the reinforcement, concrete thickness, strength of concrete, etc., and the results are compared with the provisions of IS 456:2000. The thermo-structural analysis is conducted for various support conditions and load ratios. Elements were modelled using the SOLID 70 and LINK 33 elements for the thermal analysis; for the thermo-structural analysis the SOLID 65 element was used in place of SOLID 70. The parameters having a paramount influence on fire resistance are the support conditions, member dimensions, action of the members under load, cover to the reinforcement, type of aggregate, etc. The effect of these parameters on fire resistance is determined, techniques to improve fire resistance are identified, and it is shown that by changing some parameters a better fire-resistant design for structural elements can be achieved. Key Words: Thermal analysis, Thermo-structural analysis, SOLID 70, SOLID 65, LINK 33
Plant selection criteria for landscaping in gulbargaeSAT Journals
Abstract
Plants are part and parcel of any urban landscape and serve many functions in urban areas: they are preferred for climatic, environmental, visual, medicinal, functional and religious purposes. Planting design, being part of the landscaping process, needs to be based on site conditions. Gulbarga poses limiting conditions for the growth of plants in terms of its hot dry climate, soil type, water quality and physical environment, so plant selection becomes a major task in the landscape process. Analysis of the effects of these limiting conditions on plants is necessary to make planting successful. Plant selection criteria are worked out based on the limiting factors and on the adaptability of the plants, to make landscaping successful.
Keywords: plants in urban environment, planting design, hot dry climate, black soil, hard water, plant selection, criteria of plant selection
Secured client cache sustain for maintaining consistency in manetseSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Mobile data gathering with load balanced clustering and dual data uploading i...Shakas Technologies
In this paper, a three-layer framework is proposed for mobile data collection in wireless sensor networks, which includes the sensor layer, cluster head layer, and mobile collector (called SenCar) layer.
This document discusses performance analysis and fault tolerance in software environments. It begins by introducing the importance of performance analysis and fault tolerance for software, since faults can lead to losses. It then discusses fault tolerance techniques, which generally involve some form of replication to handle node and network failures; the two main approaches, replication and coordination, rely on modeling the computation as a deterministic state machine. The document then analyzes the performance and fault tolerance of software environments.
A NOVEL CACHE RESOLUTION TECHNIQUE FOR COOPERATIVE CACHING IN WIRELESS MOBILE...cscpconf
Cooperative caching is used in mobile ad hoc networks to reduce the latency perceived by mobile clients while retrieving data and to reduce the traffic load in the network. Caching also increases the availability of data in the face of server disconnections. The implementation of a cooperative caching technique essentially involves four major design considerations: (i) cache placement and resolution, which decides where to place and how to locate the cached data; (ii) cache admission control, which decides which data to cache; (iii) cache replacement, which makes the replacement decision when the cache is full; and (iv) consistency maintenance, i.e. maintaining consistency between the data at the server and in the cache. In this paper we propose an effective cache resolution technique which reduces the number of messages flooded into the network to find the requested data. The experimental results are promising with respect to the metrics studied.
A novel cache resolution technique for cooperative caching in wireless mobile...csandit
Cooperative caching is used in mobile ad hoc networks to reduce the latency perceived by mobile clients while retrieving data and to reduce the traffic load in the network. Caching also increases the availability of data in the face of server disconnections. The implementation of a cooperative caching technique essentially involves four major design considerations: (i) cache placement and resolution, which decides where to place and how to locate the cached data; (ii) cache admission control, which decides which data to cache; (iii) cache replacement, which makes the replacement decision when the cache is full; and (iv) consistency maintenance, i.e. maintaining consistency between the data at the server and in the cache. In this paper we propose an effective cache resolution technique which reduces the number of messages flooded into the network to find the requested data. The experimental results are promising with respect to the metrics studied.
Energy Efficient Data Mining in Multi-Feature Sensor Networks Using Improved...IOSR Journals
This document proposes an improved LEACH (Low-Energy Adaptive Clustering Hierarchy) communication protocol for energy-efficient data mining in multi-feature sensor networks. The original LEACH protocol has drawbacks such as random cluster head selection and uneven energy consumption. The improved protocol designates both a cluster head and a sub-cluster head that takes over if the head dies, so the cluster does not become useless when its head fails. The improved LEACH protocol clusters sensor nodes in multi-feature networks to enhance energy efficiency and the reliability of data transfer compared to the original LEACH protocol.
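The head/sub-head failover idea in the summary can be sketched in a few lines. The class and names below are illustrative, not the paper's implementation; the point is only the takeover rule when the cluster head dies.

```python
class Cluster:
    """Toy model of an improved-LEACH cluster with a designated
    sub-cluster head that takes over if the cluster head fails."""

    def __init__(self, head, sub_head, members):
        self.head = head
        self.sub_head = sub_head
        self.members = set(members)
        self.dead = set()

    def node_failed(self, node):
        self.dead.add(node)

    def active_head(self):
        """The sub-cluster head takes over when the cluster head dies,
        so the cluster keeps aggregating instead of becoming useless."""
        if self.head not in self.dead:
            return self.head
        if self.sub_head not in self.dead:
            return self.sub_head
        return None  # both heads lost: the cluster must re-elect next round
```

The failover is local: member nodes simply redirect their reports to `active_head()`, with no network-wide re-clustering round needed while the sub-head survives.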
In this paper we explore the issue of cache resolution in a mobile peer-to-peer ad hoc network. In our vision, cache resolution should satisfy the following requirements: (i) it should result in low message overhead, and (ii) the information should be retrieved with minimum delay. We show that these goals can be achieved by splitting the one-hop neighbours into two sets based on the transmission range. The proposed approach reduces the number of messages flooded into the network to find the requested data. The scheme is fully distributed and comes at very low cost in terms of cache overhead. The experimental results are promising with respect to the metrics studied.
Mobile Data Gathering with Load Balanced Clustering and Dual Data Uploading i...1crore projects
Mobile data gathering with load balanced clustering and dual data uploading i...shanofa sanu
Maintain load balancing in wireless sensor networks using virtual grid based ...zaidinvisible
This document summarizes a routing protocol called the Virtual Grid Based Routing Protocol (VGRP) that aims to maximize the lifetime of wireless sensor networks by balancing the data traffic load evenly among sensor nodes. The VGRP splits the sensor field into a grid of equal sized sub-cells and uses clustering and chain techniques to collect sensed data. It is compared to the CFDASC routing protocol through simulation, and results show VGRP outperforms CFDASC in terms of network stability and load balancing. The document provides background on wireless sensor networks and reviews several related grid-based and load balancing routing protocols.
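The grid construction VGRP starts from can be sketched directly: the field is split into equal-sized sub-cells and each node is assigned to the cell containing it. Cell size and node placements below are illustrative, not taken from the paper.

```python
def cell_of(x, y, cell_size):
    """Map a node position to its virtual grid cell (column, row)."""
    return (int(x // cell_size), int(y // cell_size))

def build_grid(nodes, cell_size):
    """Split the sensor field into equal-sized sub-cells and group
    node ids by the cell that contains them."""
    grid = {}
    for node_id, (x, y) in nodes.items():
        grid.setdefault(cell_of(x, y, cell_size), []).append(node_id)
    return grid
```

Once nodes are grouped per cell, a cluster head can be chosen inside each cell and the per-cell loads compared, which is the basis for the load balancing the protocol evaluates.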
CONCEALED DATA AGGREGATION WITH DYNAMIC INTRUSION DETECTION SYSTEM TO REMOVE ...cscpconf
Data aggregation is a vital aspect of WSNs (Wireless Sensor Networks) because it reduces the quantity of data to be transmitted over the network. In earlier studies, authors used homomorphic encryption properties to conceal data during aggregation, so that encrypted data can be aggregated algebraically without being decrypted. These schemes are not applicable to multiple applications, which led to the proposal of Concealed Data Aggregation for Multi Applications (CDAMA); it is designed for multiple applications and provides a secure counting ability. In wireless sensor networks, sensor nodes are unattended and susceptible to attacks. Considering the defence aspect of the wireless environment, we use DYDOG (Dynamic Intrusion Detection Protocol Model) and a customized key generation procedure that uses digital signatures and the Twofish algorithm along with CDAMA to augment security and throughput. To demonstrate the robustness and effectiveness of the proposed scheme, we conclude with simulations, inclusive analysis and comparisons.
Peer to peer cache resolution mechanism for mobile ad hoc networksijwmn
In this paper we investigate the problem of cache resolution in a mobile peer-to-peer ad hoc network. In our vision, cache resolution should satisfy the following requirements: (i) it should result in low message overhead, and (ii) the information should be retrieved with minimum delay. We show that these goals can be achieved by splitting the one-hop neighbours into two sets based on the transmission range. The proposed approach reduces the number of messages flooded into the network to find the requested data. The scheme is fully distributed and comes at very low cost in terms of cache overhead. The experimental results are promising with respect to the metrics studied.
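The neighbour-splitting idea above can be sketched as follows. The thresholds, distances and two-tier query order are illustrative assumptions: one-hop neighbours are divided into an inner and an outer set by distance, and the query floods the outer set only when the inner set misses, which is where the message savings come from.

```python
def split_neighbours(neighbours, range_threshold):
    """Split one-hop neighbours (id -> distance) into inner/outer sets
    based on the transmission-range threshold."""
    inner = {n for n, d in neighbours.items() if d <= range_threshold}
    outer = set(neighbours) - inner
    return inner, outer

def resolve(neighbours, range_threshold, holders):
    """Return (responder, messages_sent) for a cached-item lookup:
    query the inner set first, then the outer set only on a miss."""
    inner, outer = split_neighbours(neighbours, range_threshold)
    for tier in (inner, outer):
        hit = tier & holders
        if hit:
            messages = len(inner) if tier is inner else len(inner) + len(outer)
            return sorted(hit)[0], messages
    return None, len(inner) + len(outer)
```

With a hit in the inner set, only the inner neighbours are messaged; a plain flood would always message every one-hop neighbour.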
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
A TIME INDEX BASED APPROACH FOR CACHE SHARING IN MOBILE ADHOC NETWORKS cscpconf
Initially, wireless networks were fully infrastructure-based and hence imposed the necessity of installing base stations. A base station is a single point of failure and causes scalability problems. With the advent of mobile ad hoc networks these problems are mitigated, by allowing mobile nodes to form a dynamic and temporary communication network without any preexisting infrastructure. Caching is an important technique to enhance performance in any network. Particularly in MANETs, it is important to cache frequently accessed data, not only to reduce average latency and wireless bandwidth usage but also to avoid heavy traffic near the data centre. With data being cached by mobile nodes, a request to the data centre can easily be serviced by a nearby mobile node instead of the data centre alone. In this paper we propose a Time Index Based Approach that focuses on providing recent data on an on-demand basis. In this system, the data comes along with a time stamp. We propose three policies, namely Item Discovery, Item Admission and Item Replacement, to provide data availability even with limited resources. Data consistency is ensured since, if the mobile client receives the same data item with an updated time, the previous content along with its time is replaced, so only recent data is provided. Data availability is promised by mobile nodes instead of the data server alone, and the space available at a node is enhanced by deploying an automated replacement policy.
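The time-indexed consistency rule above admits a compact sketch: every item carries a timestamp, a cached copy is replaced only when a newer-stamped copy arrives, and when space runs out the oldest-stamped entry is evicted. Class and method names are illustrative, not the paper's; only the Item Admission/Replacement behaviour is modelled.

```python
class TimeIndexedCache:
    """Toy cache where each item carries a timestamp; newer copies
    replace older ones, and eviction drops the oldest-stamped entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # item_id -> (timestamp, value)

    def admit(self, item_id, timestamp, value):
        """Item Admission + consistency: keep only the newest copy."""
        current = self.items.get(item_id)
        if current and current[0] >= timestamp:
            return False  # stale or duplicate copy: discard it
        if item_id not in self.items and len(self.items) >= self.capacity:
            # Item Replacement: evict the entry with the oldest timestamp
            oldest = min(self.items, key=lambda k: self.items[k][0])
            del self.items[oldest]
        self.items[item_id] = (timestamp, value)
        return True

    def get(self, item_id):
        entry = self.items.get(item_id)
        return entry[1] if entry else None
```

Because `admit` rejects copies with an older or equal timestamp, a client answering from its cache can never serve data older than the freshest copy it has seen.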
Minimum Process Coordinated Checkpointing Scheme For Ad Hoc Networks pijans
The wireless mobile ad hoc network (MANET) architecture consists of a set of mobile hosts capable of communicating with each other without the assistance of base stations. This has made it possible to create a mobile distributed computing environment, and has also brought several new challenges in distributed protocol design. In this paper, we study a very fundamental problem, the fault tolerance problem, in a MANET environment and propose a minimum-process coordinated checkpointing scheme. Since potential problems of this environment are insufficient power and limited storage capacity, the proposed scheme tries to reduce the amount of information saved for recovery. The MANET structure used in our algorithm is hierarchical, based on the Cluster Based Routing Protocol (CBRP), which belongs to the class of hierarchical reactive routing protocols. The proposed protocol is a non-blocking coordinated checkpointing algorithm suitable for ad hoc environments. It produces a consistent set of checkpoints; the algorithm ensures that only a minimum number of nodes in a cluster are required to take checkpoints; and it uses very few control messages. Performance analysis shows that our algorithm outperforms the existing related works. We first describe the organization of a cluster, and then propose the minimum-process coordinated checkpointing scheme for cluster-based ad hoc routing protocols.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A DDS-Based Scalable and Reconfigurable Framework for Cyber-Physical Systemsijseajournal
Cyber-Physical Systems (CPSs) involve the interconnection of heterogeneous computing devices which are
closely integrated with the physical processes under control. Often, these systems are resource-constrained
and require specific features such as the ability to adapt in a timely and efficient fashion to dynamic
environments. Also, they must support fault tolerance and avoid single points of failure. This paper
describes a scalable framework for CPSs based on the OMG DDS standard. The proposed solution allows
reconfiguring this kind of systems at run-time and managing efficiently their resources.
This document summarizes an article from the International Journal of Electronics and Communication Engineering & Technology about implementing a scalable queue manager on an FPGA. It discusses how traditional per-flow queue managers require a large number of queues that scales with the number of flows, consuming significant memory. The proposed architecture uses dynamic queue sharing to allocate a limited number of physical queues for active flows only. It can operate in per-flow or per-class modes depending on traffic conditions to reduce queue exhaustion. The algorithms were implemented on an FPGA and showed reductions in required memory and device utilization compared to only using dynamic queue sharing.
Similar to SDCI: Scalable Distributed Cache Indexing for Cache Consistency for Mobile Environments (20)
Mechanical properties of hybrid fiber reinforced concrete for pavementseSAT Journals
Abstract
The effect of the addition of mono fibers and hybrid fibers on the mechanical properties of a concrete mixture is studied in the present investigation. Steel fibers at 1% and polypropylene fibers at 0.036% were added individually to the concrete mixture as mono fibers, and then together to form a hybrid fiber reinforced concrete. Mechanical properties such as compressive, split tensile and flexural strength were determined. The results show that hybrid fibers improve the compressive strength only marginally compared to mono fibers, whereas hybridization improves the split tensile strength and flexural strength noticeably.
Keywords: Hybridization, mono fibers, steel fiber, polypropylene fiber, improvement in mechanical properties.
Material management in construction – a case studyeSAT Journals
Abstract
The objective of the present study is to understand the problems occurring in a company because of the improper application of material management. In construction project operations there is often a project cost variance in terms of material, equipment, manpower, subcontractors, overhead cost and general conditions. Material is the main component of construction projects; therefore, if materials are not properly managed, a project cost variance results. Project cost can be controlled by taking corrective actions against the cost variance. Therefore, a methodology to diagnose and evaluate the procurement process involved in material management and to launch continuous improvement was developed and applied. A thorough study was carried out, including case studies, surveys and interviews with professionals involved in this area. As a result, a methodology for diagnosis and improvement was proposed and tested in selected projects. The results obtained show that the main problems of procurement are related to schedule delays and a lack of the specified quality for the project. To prevent this situation it is often necessary to dedicate significant resources (money, personnel, time, etc.) to monitoring and controlling the process. A great potential for improvement was detected when state-of-the-art technologies such as electronic mail and electronic data interchange (EDI), together with analysis, were applied to the procurement process. These helped to eliminate the root causes of many of the problems that were detected.
Managing drought short term strategies in semi arid regions a case studyeSAT Journals
Abstract
Drought management needs multidisciplinary action; interdisciplinary efforts among experts in the various fields of the drought-prone areas help to achieve a tangible and permanent solution to this recurring problem. The Gulbarga district has a total area of around 16,240 sq. km and accounts for 8.45 per cent of the area of Karnataka state. The district is situated at latitude 17º 19' 60" North and longitude 76º 49' 60" East, entirely on the Deccan plateau at a height of 300 to 750 m above MSL. With its sub-tropical, semi-arid climate, it is one of the drought-prone districts of Karnataka State, so drought management is very important for a district like Gulbarga. In this paper, various short-term strategies to mitigate the drought condition in the district are discussed.
Keywords: Drought, South-West monsoon, Semi-Arid, Rainfall, Strategies
Life cycle cost analysis of overlay for an urban road in bangaloreeSAT Journals
Abstract
Pavements are subjected to severe stresses and weathering effects from the day they are constructed and opened to traffic, mainly due to fatigue behavior and environmental effects. Pavement rehabilitation is therefore one of the most important components of an entire road system. This paper presents the design of a concrete pavement with added mono fibers (polypropylene, steel) and hybrid fibres for a widened portion of an existing concrete pavement, along with various overlay alternatives for an existing bituminous pavement on an urban road in Bangalore. Life cycle cost analyses of these sections are carried out by the Net Present Value (NPV) method to identify the most feasible option. The results show that although the initial cost of constructing a concrete overlay is high, over a period of time it proves to be better than the bituminous overlay when the whole life cycle cost is considered. The economic analysis also indicates that, of the three fibre options, hybrid reinforced concrete would be economical without compromising the performance of the pavement.
Keywords: Fatigue, Life cycle cost analysis, Net Present Value method, Overlay, Rehabilitation
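The NPV comparison the abstract relies on is a straightforward discounted-cost sum. The sketch below uses entirely synthetic cash flows and an assumed 8% discount rate, purely to illustrate the mechanics of comparing a high-initial-cost, low-maintenance overlay against a cheap-up-front alternative with periodic renewals; the figures are not from the paper.

```python
def npv(rate, cashflows):
    """Discount year-indexed costs to present value: sum of c_t / (1+r)^t,
    where index 0 is the initial (undiscounted) cost."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

# Hypothetical costs over a 20-year horizon (arbitrary currency units):
# concrete overlay: high initial cost, small annual maintenance
concrete = [100.0] + [1.0] * 20
# bituminous overlay: low initial cost, annual maintenance plus a
# heavier renewal every 5th year (t % 5 == 0 picks years 5, 10, 15, 20)
bituminous = [40.0] + [3.0 if t % 5 else 35.0 for t in range(1, 21)]

concrete_npv = npv(0.08, concrete)
bituminous_npv = npv(0.08, bituminous)
```

With these assumed cash flows the concrete alternative has the lower life-cycle cost despite its higher initial outlay, mirroring the qualitative conclusion above; the ranking naturally depends on the actual costs and discount rate used.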
Laboratory studies of dense bituminous mixes ii with reclaimed asphalt materialseSAT Journals
Abstract
The growing demand on our nation's roadways over the past couple of decades, decreasing budgetary funds, and the need to provide a safe, efficient and cost-effective roadway system have led to a dramatic increase in the need to rehabilitate existing pavements and to build sustainable road infrastructure in India. These needs are today's burning issue and form the purpose of this study.
In the present study, samples of existing bituminous layer materials were collected from the NH-48 (Devahalli to Hassan) site. The mixtures were designed by the Marshall method as per the Asphalt Institute (MS-II) at 20% and 30% Reclaimed Asphalt Pavement (RAP). The RAP material was blended with virgin aggregate such that all specimens met the Dense Bituminous Macadam-II (DBM-II) gradation as per the Ministry of Road Transport and Highways (MoRT&H), and a cost analysis was carried out to establish the economics. Laboratory results and analysis showed that the use of recycled materials introduced significant variability in Marshall stability, and that the variability increased with the RAP content. Savings can be realized from the utilization of recycled materials: following the methodology, the reduction in total cost is 19% and 30% compared with the virgin mixes.
Keywords: Reclaimed Asphalt Pavement, Marshall Stability, MS-II, Dense Bituminous Macadam-II
Laboratory investigation of expansive soil stabilized with natural inorganic ...eSAT Journals
This document summarizes a study on stabilizing expansive black cotton soil with the natural inorganic stabilizer RBI-81. Laboratory tests were conducted to evaluate the effect of RBI-81 on the soil's engineering properties. The tests showed that with 2% RBI-81 and 28 days of curing, the unconfined compressive strength increased by around 250% and the CBR value improved by approximately 400% compared to the untreated soil. Overall, the study found that RBI-81 effectively improved the strength properties of the black cotton soil and its suitability as a soil stabilizer was supported.
Influence of reinforcement on the behavior of hollow concrete block masonry p...eSAT Journals
Abstract
Reinforced masonry was developed to exploit the strength potential of masonry and to solve its lack of tensile strength. Experimental
and analytical studies have been carried out to investigate the effect of reinforcement on the behavior of hollow concrete block
masonry prisms under compression and to predict ultimate failure compressive strength. In the numerical program, three dimensional
non-linear finite elements (FE) model based on the micro-modeling approach is developed for both unreinforced and reinforced
masonry prisms using ANSYS (14.5). The proposed FE model uses multi-linear stress-strain relationships to model the non-linear
behavior of hollow concrete block, mortar, and grout. Willam-Warnke’s five parameter failure theory has been adopted to model the
failure of masonry materials. The comparison of the numerical and experimental results indicates that the FE models can successfully
capture the highly nonlinear behavior of the physical specimens and accurately predict their strength and failure mechanisms.
Keywords: Structural masonry, Hollow concrete block prism, grout, Compression failure, Finite element method,
Numerical modeling.
Influence of compaction energy on soil stabilized with chemical stabilizereSAT Journals
This document summarizes a study on the influence of compaction energy on soil stabilized with a chemical stabilizer. Laboratory tests were conducted on locally available loamy soil treated with a patented polymer liquid stabilizer and compacted at four different energy levels. The study found that increasing the compaction effort increased the density of both untreated and treated soil, but the rate of increase was lower for stabilized soil. Treating the soil with the stabilizer improved its unconfined compressive strength and resilient modulus, and reduced accumulated plastic strain, with these properties further improved by higher compaction efforts. The stabilized soil exhibited strength and performance benefits compared to the untreated soil.
Geographical information system (gis) for water resources managementeSAT Journals
This document describes a hydrological framework developed in the form of a Hydrologic Information System (HIS) to meet the information needs of various government departments related to water management in a state. The HIS consists of a hydrological database coupled with tools for collecting and analyzing spatial and non-spatial water resources data. It also incorporates a hydrological model to indirectly assess water balance components over space and time. A web-based GIS portal was created to allow users to access and visualize the hydrological data, as well as outputs from the SWAT hydrological model. The framework is intended to facilitate integrated water resources planning and management across different administrative levels.
Forest type mapping of bidar forest division, karnataka using geoinformatics ...eSAT Journals
Abstract
The study demonstrates the potential of satellite remote sensing techniques for generating baseline information on forest types,
including tree plantation details, in the Bidar forest division, Karnataka, covering an area of 5814.60 sq. km. Analysis of the
satellite data in the study area reveals that about 84% of the total area is covered by
crop land, 1.778% by dry deciduous forest, and 1.38% by mixed plantation, which is very threatening to the
environmental stability of the forest; future plantation sites have been mapped. With the use of the latest geoinformatics technology, the
exact condition of the trees can be observed and necessary precautions taken for future plantation work in an appropriate
manner.
Keywords: RS, GIS, GPS, Forest Type, Tree Plantation
Factors influencing compressive strength of geopolymer concreteeSAT Journals
Abstract
This study examines the effects of several factors on the compressive strength of fly ash based geopolymer concrete, along with a
cost comparison with normal concrete. The test variables were the molarity of sodium hydroxide (NaOH) at 8M, 14M and 16M; the ratio of
NaOH to sodium silicate (Na2SiO3) at 1, 1.5, 2 and 2.5; alkaline-liquid-to-fly-ash ratios of 0.35 and 0.40; and replacement of water in the
Na2SiO3 solution by 10%, 20% and 30%. The test results indicated that the highest compressive
strength, 54 MPa, was observed for 16M NaOH, a NaOH-to-Na2SiO3 ratio of 2.5, and an alkaline-liquid-to-fly-ash ratio of 0.35. The lowest
compressive strength, 27 MPa, was observed for 8M NaOH, a NaOH-to-Na2SiO3 ratio of 1, and an alkaline-liquid-to-fly-ash ratio of
0.40. At an alkaline-liquid-to-fly-ash ratio of 0.35, water replacement of 10% and 30% for 8M and 16M NaOH resulted in
compressive strengths of 36 MPa and 20 MPa respectively. A superplasticiser dosage of 2% by weight of fly ash gave higher
strength in all cases.
Keywords: compressive strength, alkaline liquid, fly ash
Experimental investigation on circular hollow steel columns in filled with li...eSAT Journals
Abstract
Composite circular hollow steel tubes, with and without GFRP infill, for three different grades of lightweight concrete were tested for
ultimate load capacity and axial shortening under cyclic loading. Steel tubes were compared for different lengths, cross sections and
thicknesses. Specimens were tested after adopting Taguchi's L9 (Latin Squares) orthogonal array in order to save on the number
of specimens and the experimental duration. Analysis was carried out using the Artificial Neural
Network (ANN) technique with the assistance of Minitab, a statistical software tool. Comparisons of the predicted, experimental and ANN outputs were
obtained from linear regression plots. From this study, it can be concluded that (a) the cross-sectional area of the steel tube has the most
significant effect on ultimate load carrying capacity, (b) as the length of the steel tube increased, the load carrying capacity decreased, and (c) the ANN
model predicted acceptable results. Thus the ANN tool can be used to predict the ultimate load carrying capacity of composite
columns.
Keywords: Light weight concrete, GFRP, Artificial Neural Network, Linear Regression, Back propagation, orthogonal
Array, Latin Squares
Experimental behavior of circular hsscfrc filled steel tubular columns under ...eSAT Journals
This document summarizes an experimental study that tested circular concrete-filled steel tube columns with varying parameters. 45 specimens were tested with different fiber percentages (0-2%), tube diameter-to-wall-thickness ratios (D/t from 15-25), and length-to-diameter (L/d) ratios (from 2.97-7.04). The results found that columns filled with fiber-reinforced concrete exhibited higher stiffness, equal ductility, and enhanced energy absorption compared to those filled with plain concrete. The load carrying capacity increased with fiber content up to 1.5% but not at 2.0%. The analytical predictions of failure load closely matched the experimental values.
Evaluation of punching shear in flat slabseSAT Journals
Abstract
Flat-slab construction has been widely used in construction today because of many advantages that it offers. The basic philosophy in
the design of flat slab is to consider only gravity forces; this method ignores the effect of punching shear due to unbalanced moments
at the slab column junction which is critical. An attempt has been made to generate generalized design sheets which accounts both
punching shear due to gravity loads and unbalanced moments for cases (a) interior column; (b) edge column (bending perpendicular
to shorter edge); (c) edge column (bending parallel to shorter edge); (d) corner column. These design sheets are prepared as per
codal provisions of IS 456-2000. These design sheets will be helpful in calculating the shear reinforcement to be provided at the
critical section which is ignored in many design offices. Apart from its usefulness in evaluating punching shear and the necessary
shear reinforcement, the design sheets developed will enable the designer to fix the depth of flat slab during the initial phase of the
design.
Keywords: Flat slabs, punching shear, unbalanced moment.
Evaluation of performance of intake tower dam for recent earthquake in indiaeSAT Journals
Abstract
Intake towers are typically tall, hollow, reinforced concrete structures that form the entrance to reservoir outlet works. A parametric
study on the dynamic behavior of circular cylindrical towers can be carried out to study the effect of depth of submergence, wall thickness
and slenderness ratio, and also the effect on the tower of dynamic time-history analysis for different soil conditions, following
Goyal and Chopra in accounting for the interaction effects of the added hydrodynamic mass of the surrounding and inside water in the intake tower of a
dam.
Key words: Hydrodynamic mass, Depth of submergence, Reservoir, Time history analysis
Evaluation of operational efficiency of urban road network using travel time ...eSAT Journals
This document evaluates the operational efficiency of an urban road network in Tiruchirappalli, India using travel time reliability measures. Traffic volume and travel times were collected using video data from 8-10 AM on various roads. Average travel times, 95th percentile travel times, and buffer time indexes were calculated to assess reliability. Non-motorized vehicles were found to most impact reliability on one road. A relationship between buffer time index and traffic volume was developed. Finally, a travel time model was created and validated based on length, speed, and volume.
Estimation of surface runoff in nallur amanikere watershed using scs cn methodeSAT Journals
Abstract
The development of watershed aims at productive utilization of all the available natural resources in the entire area extending from
ridge line to stream outlet. The per capita availability of land for cultivation has been decreasing over the years. Therefore, water and
the related land resources must be developed, utilized and managed in an integrated and comprehensive manner. Remote sensing and
GIS techniques are being increasingly used for planning, management and development of natural resources. The study area, Nallur
Amanikere watershed, geographically lies between 11°38′ and 11°52′ N latitude and 76°30′ and 76°50′ E longitude, with an area of
415.68 sq. km. Thematic layers such as land use/land cover and soil maps were derived from remotely sensed data and overlaid
through ArcGIS software to assign curve numbers polygon-wise. The daily rainfall data of six rain gauge stations in and around
the watershed (2001-2011) was used to estimate the daily runoff from the watershed using Soil Conservation Service - Curve Number
(SCS-CN) method. The runoff estimated from the SCS-CN model was then used to know the variation of runoff potential with different
land use/land cover and with different soil conditions.
Keywords: Watershed, Nallur watershed, Surface runoff, Rainfall-Runoff, SCS-CN, Remote Sensing, GIS.
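The SCS-CN relation referred to in the abstract above can be sketched in a few lines. The function below (names are ours, not the study's) uses the standard initial abstraction Ia = 0.2S with S in millimetres:

```python
def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Daily surface runoff Q (mm) from daily rainfall P (mm) using the
    Soil Conservation Service Curve Number (SCS-CN) method."""
    s = 25400.0 / cn - 254.0      # potential maximum retention S (mm)
    ia = 0.2 * s                  # initial abstraction Ia = 0.2 * S
    if p_mm <= ia:
        return 0.0                # all rainfall lost to initial abstraction
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

A higher curve number (more impervious land cover) yields more runoff from the same storm, which is how the land use/land cover and soil layers alter the runoff potential estimated in the study.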
Estimation of morphometric parameters and runoff using rs & gis techniqueseSAT Journals
This document summarizes a study that used remote sensing and GIS techniques to estimate morphometric parameters and runoff for the Yagachi catchment area in India over a 10-year period. Morphometric analysis was conducted to understand the hydrological response at the micro-watershed level. Daily runoff was estimated using the SCS curve number model. The results showed a positive correlation between rainfall and runoff. Land use/land cover changes between 2001-2010 were found to impact estimated runoff amounts. Remote sensing approaches provided an effective means to model runoff for this large, ungauged area.
Effect of variation of plastic hinge length on the results of non linear anal...eSAT Journals
Abstract
The nonlinear static procedure, also well known as pushover analysis, is a method wherein monotonically increasing loads are applied to the structure until it can no longer resist any further load. It is a popular tool for seismic performance evaluation of existing and new structures. In the literature a lot of research has been carried out on conventional pushover analysis, and after identifying its deficiencies, efforts have been made to improve it. But actual test results to verify analytically obtained pushover results are rarely available, and some amount of variation is always expected in the seismic demand prediction of pushover analysis. An initial study is carried out considering user-defined hinge properties and default hinge length. An attempt is then made to assess the variation in pushover analysis results considering user-defined hinge properties and the various hinge length formulations available in the literature, with the results compared against experimental results from a test on a G+2 storied RCC framed structure. For the present study two geometric models, viz. a bare frame and a rigid frame model, are considered, and it is found that the results of pushover analysis are very sensitive to the geometric model and hinge length adopted.
Keywords: Pushover analysis, Base shear, Displacement, hinge length, moment curvature analysis
Effect of use of recycled materials on indirect tensile strength of asphalt c...eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for the road construction is a serious problem to procure materials. Hence
recycling or reuse of material is beneficial. On emphasizing development in sustainable construction in the present era, recycling of
asphalt pavements is one of the effective and proven rehabilitation processes. For the laboratory investigations reclaimed asphalt
pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) was used. Foundry waste was used as a replacement to
conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP.
These test results were compared with conventional mixes and asphalt concrete mixes with complete binder extracted RAP
aggregates. Mix design was carried out by Marshall Method. The Marshall Tests indicated highest stability values for asphalt
concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with increased in RAP in AC mixes. The Indirect
Tensile Strength (ITS) for AC mixes with RAP also was found to be higher when compared to conventional AC mixes at 300C.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
Batteries: Introduction – types of batteries – discharging and charging of a battery – characteristics of a battery – battery rating – various tests on a battery – primary battery: silver button cell – secondary battery: Ni-Cd battery – modern battery: lithium-ion battery – maintenance of batteries – choice of batteries for electric vehicle applications.
Fuel Cells: Introduction – importance and classification of fuel cells – description, principle, components and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cell.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
IJRET: International Journal of Research in Engineering and Technology ISSN: 2319-1163
__________________________________________________________________________________________
Volume: 02 Issue: 04 | Apr-2013, Available @ http://www.ijret.org 428
SDCI: SCALABLE DISTRIBUTED CACHE INDEXING FOR CACHE
CONSISTENCY FOR MOBILE ENVIRONMENTS
G. Lingamaiah1
, S. G. Nawaz2
, B. Dhanunjaya3
, G. Peddi Raju4
, K. Somasena Reddy5
Gujjala.lingamaiah@gmail.com, sngnawaz@gmail.com, Bandidhanunjaya700@gmail.com, peddi.raju15@gmail.com,
Somasena12@gmail.com
Abstract
In this paper, we suggest a new cache consistency maintenance scheme, namely Scalable distributed cache indexing for Cache
Consistency (SDCI), for mobile environments. It mainly depends on 3 key features. They are: (1) Utilization of standard bits at server
and Mobile Node's cache for maintaining cache consistency; (2) Utilization of Local Cache Standard (LC-standard) for all entries in
Mobile Node's cache after its invalidation for maximizing the broadcast bandwidth efficiency; (3) Making every valid entry of Mobile
Node's cache to whenever it wakes up. These features make the SDCI a good scalable algorithm with minimum database management
overhead. By observing Comprehensive simulation results we can see that the performance of SDCI is superior to those of existing
algorithms.
Keywords: indexing, cache updating, mobile network, SDCI, SSUM
-----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
The use of wireless communication has been increasing day
by day and has become a significant means for people to
access different kinds of dynamically changing data objects, such
as news, stock prices, and traffic information. However, wireless
mobile computing environments are constrained by
communication bandwidth and battery power [1], and must
cope with Mobile Node disconnections and mobility.
Consequently, data communication in wireless mobile networks is
more difficult than in wired networks.
Caching frequently used data objects in the local buffer of a
Mobile Node is an effective way to decrease query delay, save
bandwidth, and improve system performance. However, in
wireless mobile computing environments, maintaining cache
consistency is difficult because of the frequent disconnection and
roaming of Mobile Nodes. A successful strategy must
be able to handle both disconnection and
mobility efficiently. The advantage of broadcast is that it can
serve an arbitrary number of Mobile Nodes with minimal
bandwidth consumption. Therefore, an efficient mobile data
transmission architecture should carefully design its broadcast
and cache management schemes to maximize bandwidth efficiency and minimize
delay. It should also be scalable, so that it works efficiently for large
database systems and supports a large number of Mobile
Nodes.
In the literature, there are two types of cache consistency
maintenance algorithms for wireless mobile computing environments:
stateless and stateful. In the
stateless approach, the server does not know the clients' cache
contents. The client must verify the validity of cached
entries with the server before every query. Although
stateless approaches allow simple database management,
their scalability and ability to handle disconnection are
poor. Stateful approaches, on the other hand, are scalable,
but they incur significant overhead for server database
management. Hence, there is a need for scalable and
efficient algorithms for maintaining cache consistency in
mobile environments.
Motivated by the need for a scalable and efficient cache
consistency maintenance mechanism, we put forward a new
algorithm, namely Scalable Distributed Cache Indexing for Cache
Consistency (SDCI), that maintains cache consistency between the
Central Data Provider (CDP) and Mobile Nodes' caches. SDCI
is a highly scalable, efficient, low-complexity algorithm
thanks to three key features: (1) use of standard bits at the
server and in the Mobile Node's cache to maintain cache
consistency; (2) use of the Local Cache Standard (LC-standard) for every entry
in a Mobile Node's cache after its invalidation, to maximize
broadcast bandwidth efficiency; (3) marking all valid
entries of a Mobile Node's cache as uncertain when it
wakes up.
Comprehensive simulation results reveal that SDCI delivers
performance superior to that of existing algorithms. For
example, in a system with varied Mobile Node
access patterns and data object update frequencies, SDCI can
support about 44 percent and 270 percent more
Mobile Nodes than existing schemes such as Timestamps (SSUM),
while keeping the average access delay small.
The remainder of the paper is organized as follows.
Section II gives a brief overview of related work. Section
III describes the SDCI algorithm in detail.
Section IV presents comprehensive simulation results and
compares the algorithm with existing approaches.
Section V draws conclusions.
2. RELATED WORK
We summarize existing stateless and stateful schemes in this
section. In the stateless approach [2]-[7], a CDP assumes no
knowledge of the Mobile Nodes' cache contents. The CDP broadcasts
CUAs to its Mobile Nodes periodically. At a Mobile Node, a
data object request cannot be serviced until the next IR from
the CDP is received. In the stateful approach [8], a CDP
maintains object state for each Mobile Node's cache and only
broadcasts CUAs for those objects.
Barbara and Imielinski [2] proposed three stateless algorithms:
Timestamps (SSUM), Amnesic Terminals (AT) and
Signature (SIG). In these algorithms, the CDP broadcasts,
every L seconds, CUAs that contain the IDs of all data objects updated during
the past wL seconds (where w is a positive integer).
The benefit of these algorithms is that a CDP does not
maintain any state information about its Mobile Nodes'
caches, which makes the management of the CDP database very
easy. However, these algorithms have several disadvantages.
First, they do not scale well to large
databases and/or fast-updating data systems, because of
the increased number of IR messages. Second, the
average access latency is usually longer than half of the
broadcast period, because all requests can be answered
only after the next IR. Lastly, all cached data objects are
dropped if the sleep time is longer than wL seconds.
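The window rule above can be illustrated with a small sketch (function and parameter names are ours, not from the paper): a Mobile Node that receives an IR invalidates the listed objects, but if it has slept past the wL-second window it must drop its entire cache.

```python
def process_ir(cache, ir_ids, now, last_ir_time, w, L):
    """Client-side IR processing in a Timestamps-style stateless scheme.
    cache: dict mapping object id -> data. Returns the updated cache."""
    if now - last_ir_time > w * L:
        return {}                      # slept past the IR window: drop everything
    for obj_id in ir_ids:
        cache.pop(obj_id, None)        # invalidate recently updated objects
    return cache
```

The wholesale drop in the first branch is exactly the third disadvantage noted above: a long sleep costs the client its entire cache even if most entries are still valid.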
Various algorithms have been suggested to handle long
sleep-wake-up patterns. For example, in the bit-
sequence (BS) algorithm due to Jing et al. [3], a
cache entry is deleted only when half or more of the data
entries in the cache have been invalidated. However, the scheme requires
the broadcast of a much larger number of IR messages
than the SSUM and AT schemes. Wu et al. [5]
suggested an uplink validation check scheme which can
deal with long sleep-wake-up patterns better than
SSUM and AT; however, it requires more uplink bandwidth and
still cannot handle arbitrarily long sleep-wake-up patterns.
Only a few stateful cache consistency maintenance algorithms
have been suggested for wireless mobile computing
environments. Khaleel Mershad et al. [12] proposed a smart
server update mechanism for maintaining cache consistency
in mobile environments. Although the solution compares well
with the most frequently cited solutions in earlier
literature, it still only probabilistically makes data available in
the nodes' caches.
In AS, a CDP records every retrieved data object for each
Mobile Node. When a Mobile Node first retrieves a
data object after it wakes up, the CDP sends an IR, based
on that Mobile Node's cache content record and sleep-wake-up
time, to that Mobile Node. When a CDP receives an
update from the original server for a recorded data
object, it broadcasts that data object's IR to the Mobile Nodes.
The benefit of the AS scheme is that the CDP avoids needless
broadcast of CUAs to Mobile Nodes, and it can handle
any sleep-wake-up pattern without losing any valid data
object. However, to maintain each Mobile Node's cache state,
the CDP must record all cached data
objects for every Mobile Node, and a Mobile Node
must download the data objects it requests over
the uplink. This renders the channel utilization inefficient and
sensitive to the number of Mobile Nodes.
3. SCALABLE DISTRIBUTED CACHE INDEXING FOR
CACHE CONSISTENCY (SDCI)
In this paper, we propose a new Scalable Distributed
Cache Indexing for Cache Consistency (SDCI) scheme for maintaining
Mobile Node cache consistency in a read-only
system. SDCI is a hybrid of the stateless and stateful approaches,
in that it maintains minimal state information.
However, unlike the stateful algorithm [12], which requires the CDP to
remember all data objects in every Mobile Node's cache, SDCI
requires the CDP only to know which data objects in its
database might be valid in Mobile Nodes' caches, making the
management of the CDP database much simpler.
And unlike prevailing synchronous stateless schemes,
SDCI does not need periodic broadcast of CUAs, decreasing
the number of IR messages sent over the downlink
broadcast channel. Moreover, by adding uncertain and Local-
Cache states to Mobile Nodes' caches, SDCI eases the
handling of arbitrary sleep-wake-up patterns and mobility, and
enables cooperation among Mobile Nodes that greatly
enhances broadcast channel efficiency. The subsections that
follow explain the proposed algorithm in detail.
A. Data Structures and Message Formats

For each data object with distinct identifier x, the
data structures at the CDP and in a Mobile Node's cache are as
follows:
• (dx, tx, fx): data entry format for each data object in the
CDP. Here dx is the data object, tx is the last update time
of the data object, and fx is a flag bit.
• (dx, tsx, sx): data entry format in a Mobile Node's
cache. Here dx is as defined above, tsx is the timestamp
showing the last update time of the cached data object
dx, and sx is a two-bit flag identifying four data entry
states: stable, uncertain, waiting and
LC-standard, respectively.
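As a concrete illustration, the two entry formats can be written as plain records (field and class names are ours, not the paper's):

```python
from dataclasses import dataclass

# Two-bit s_x state values for a Mobile Node cache entry:
STABLE, UNCERTAIN, WAITING, LC_STANDARD = 0, 1, 2, 3

@dataclass
class CDPEntry:
    """(d_x, t_x, f_x): entry at the Central Data Provider."""
    data: object         # d_x: the data object itself
    last_update: float   # t_x: last update time of the object
    flag: int = 0        # f_x: 1 if a valid copy may exist in some MN cache

@dataclass
class MNEntry:
    """(d_x, ts_x, s_x): entry in a Mobile Node's cache."""
    data: object         # d_x, as above
    timestamp: float     # ts_x: last known update time of the cached copy
    state: int = STABLE  # s_x: STABLE / UNCERTAIN / WAITING / LC_STANDARD
```

The single flag bit per object at the CDP is what keeps the server-side state minimal, in contrast to stateful schemes that track whole per-node cache contents.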
B. Mobile Node Cache Management
As this paper focuses on the cache consistency maintenance
design, we use an LRU (Least Recently Used) based
replacement algorithm for managing Mobile Node caches.
The effect of the cache replacement algorithm on SDCI is
studied later.
In LRU based replacement scheme, a data object that is
recently cached or a cached data object that gets a hit is
relocated to the head of the cache list. Whenever a data object
requires to be cached and the cache is full, data entries with sx
2 from the tail are removed for creating enough space in
order to accommodate this new data object (the data object
with sx = 2 must be kept so that some requests are waiting for
its confirmation).
Any refreshed data objects from uncertain state or Local-
Cache state are moved to their original location and again, if
required, sufficient data entries from the tail are deleted.
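The replacement rule above can be sketched in Python. This is an illustrative implementation, not the paper's code: the class and constant names (MNCache, WAITING) are assumptions, with the WAITING constant standing for entries whose flag s_x = 2.

```python
from collections import OrderedDict

WAITING = 2   # s_x value for entries with pending confirmations

class MNCache:
    """LRU cache list; entries in the WAITING state are never evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # key -> (data, state); front = most recent

    def touch(self, x):
        # A hit moves the entry to the head of the cache list.
        self.entries.move_to_end(x, last=False)

    def insert(self, x, data, state=0):
        if x in self.entries:
            self.entries[x] = (data, state)
            self.touch(x)
            return
        # Evict from the tail, skipping WAITING entries, until there is room.
        while len(self.entries) >= self.capacity:
            for key in reversed(self.entries):          # tail first
                if self.entries[key][1] != WAITING:
                    del self.entries[key]
                    break
            else:
                break   # every entry is pinned; give up evicting
        self.entries[x] = (data, state)
        self.touch(x)
```

For example, with a capacity of 2, inserting a WAITING entry, then two ordinary entries, evicts the ordinary tail entry while the pinned WAITING entry survives.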
C. Algorithm Description
We have defined two procedures for SDCI: CDP-trans() and the
Mobile Node Transactions. The CDP continuously runs
CDP-trans(), and each Mobile Node continuously runs the Mobile
Node Transactions procedures during its awake period.
The pseudocode for CDP-trans() and the Mobile Node Transactions
is shown in Figures 1 and 2.
CDP-trans(request(x)) {
    sts ← findStatus(x)
    if (sts = available) begin
        Broadcast (d_x, t_x, x)
        rValue ← available
        if (f_x = 0)
            f_x ← 1
    end
    if (sts = uncertain)
        if (t_x = ts_x)
            Broadcast (x, t_x)
            rValue ← confirm
        else
            Broadcast (d_x, t_x, x)
            rValue ← available
            if (f_x = 0)
                f_x ← 1
    if (sts = update)
        Update the database entry (d_x ← d_x' && t_x ← t_x')
        if (f_x = 1) begin
            Broadcast cua(x)
            f_x ← 0
        end
}

Fig 1: CDP-trans
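The CDP-side logic of Fig. 1 can be sketched as runnable Python. This is a hedged illustration of the same branching, not the paper's implementation; the database layout (a dict mapping x to d, t, and f fields) and all names are assumptions.

```python
def cdp_trans(db, broadcast, x, status, ts_x=None, new_d=None, new_t=None):
    """Sketch of CDP-trans(): db maps id x -> {'d': data, 't': time, 'f': flag}."""
    entry = db[x]
    if status == "available":            # MN asked for the object itself
        broadcast(("data", x, entry["d"], entry["t"]))
        entry["f"] = 1                   # a valid copy now exists in some cache
        return "available"
    if status == "uncertain":            # MN asked to confirm its cached copy
        if entry["t"] == ts_x:           # copy still current: confirm only
            broadcast(("confirm", x, entry["t"]))
            return "confirm"
        broadcast(("data", x, entry["d"], entry["t"]))
        entry["f"] = 1
        return "available"
    if status == "update":               # an updated version arrived
        entry["d"], entry["t"] = new_d, new_t
        if entry["f"] == 1:              # some cache may hold a stale copy
            broadcast(("cua", x))        # cache-update alert
            entry["f"] = 0               # later updates need no broadcast
```

Note how a second consecutive update triggers no broadcast, since f_x was already reset by the first, which is exactly the traffic saving the flag bit buys.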
D. Cache Consistency Maintenance
We explain in detail how SDCI maintains consistency between the
CDP database and Mobile Node caches. We presume that
consistency between the CDP database and the origin servers is
maintained using wired network consistency algorithms [9], [10].
For each cached data object, SDCI uses a single flag bit f_x to
maintain consistency between the CDP and Mobile Node caches.
Whenever d_x is retrieved by a Mobile Node, f_x is set,
indicating that a valid copy of d_x may be present in a Mobile
Node's cache. Whenever the CDP receives an updated d_x, it
broadcasts IR(x) and resets f_x. This indicates that there are
no valid copies of d_x in any Mobile Node's cache, so further
updates do not require broadcasting IR(x).
The flag is set again whenever the CDP services a retrieval
(comprising request and confirmation) for d_x by a Mobile Node.
In mobile environments, a Mobile Node is in either of two
states: (i) awake or (ii) asleep. If a Mobile Node is awake at
the time of the IR(x) broadcast, the copy of d_x is invalidated
and an ID-only
mnRequestHandler(req(d_x)) {
    sts ← reqStatus(d_x)
    if (sts = stable)
        Update cache list
        Return cached(d_x)
    if (sts = uncertain)
        Add x to waiting list
        Set s_x ← 2
        Update cache list head with this entry
        Return uncertain(x, ts_x)
    if (sts = unavailable)
        Send request to CDP
        Add x to query waiting list
}

mnResponseHandler(resp(d_x)) {
    sts ← status(x)
    if (sts = available)
        Remove x from cache
        Add x to cache head
        Return d_x
    if (sts = waiting)            // entry x is an ID-only entry in the cache
        mnRequestHandler(req(d_x))
    elseif (sts = uncertain)
        if (ts_x ≠ t_x)
            ts_x ← t_x
            s_x ← 0
            mnRequestHandler(req(d_x))
        else
            s_x ← 0
}

mnCacheUpdateAlertHandler(cua(x)) {
    msg ← reqStatus(cua(x))
    sts ← reqStatus(d_x)
    if ((msg = update) && (sts = available || sts = uncertain)) begin
        s_x ← 3
        remove(d_x)
    end
    elseif ((msg = confirm) && (sts = uncertain) && (ts_x = t_x))
        s_x ← 0
    elseif ((msg = confirm) && (sts = waiting))
        Return d_x
    else
        delete(d_x)
        s_x ← 3
}

Fig 2: Mobile Node Transactions
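The four entry states and the transitions driven by the handlers of Fig. 2 can be summarized as a small state machine. The Python below is an illustrative sketch under stated assumptions: the cache is a plain dict, the handler names mirror those in Fig. 2, and the numeric state encodings follow Section A.

```python
# s_x encodings from the data structure section
STABLE, UNCERTAIN, WAITING, LC = 0, 1, 2, 3

def on_wakeup(cache):
    # All valid (stable) entries become uncertain after sleep or roaming.
    for e in cache.values():
        if e["s"] == STABLE:
            e["s"] = UNCERTAIN

def on_cua(cache, x):
    # Invalidation: keep only the ID (an ID-only / LC entry), drop the data.
    if x in cache:
        cache[x]["s"] = LC
        cache[x]["d"] = None

def on_broadcast_data(cache, x, d, t):
    # Any awake MN overhearing the downlink data refreshes its entry.
    if x in cache and cache[x]["s"] in (UNCERTAIN, WAITING, LC):
        cache[x].update(d=d, ts=t, s=STABLE)

def on_confirm(cache, x, t):
    # A single confirmation validates the entry in every awake MN's cache.
    if x in cache and cache[x]["s"] == UNCERTAIN and cache[x]["ts"] == t:
        cache[x]["s"] = STABLE
```

The key design point is that on_broadcast_data and on_confirm act on every awake cache, which is the cooperation effect described in the efficiency discussion below.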
entry is retained by the Mobile Node. The cached data objects of
a Mobile Node in the sleep state are not affected until it wakes
up. Whenever a Mobile Node wakes up, it sets all cached valid
data objects to the uncertain state. As a result, sleeping
Mobile Nodes and their cached objects are not affected by missed
broadcasts.
E. Efficiency and Cooperation
As stated earlier, a good cache consistency maintenance
algorithm must scale and must be efficient with respect to both
the database size and the number of Mobile Nodes. SDCI can
handle large, rapidly updating data systems because the CDP has
some knowledge of the Mobile Nodes' caches. Only data entries
whose standard bits are set trigger the broadcast of IRs when
data objects are updated. As a result, the IR broadcast
frequency is the minimum of the uplink query/confirmation
frequency and the data object update frequency. Similarly, the
broadcast channel bandwidth consumed by CUAs is minimized.
mnWakeupHandler() {
    foreach cache entry x with (s_x = 0)
        s_x ← 1
}
In addition to IR traffic, all other traffic in SDCI is also
minimized because of the strong cooperation among the Mobile
Nodes' caches. Specifically, this is due to the introduction of
the uncertain and Local-Cache states for Mobile Node caches. The
retrieval of a data object d_x from the CDP by any one Mobile
Node moves the x entries in the uncertain or Local-Cache state
in all awake Mobile Nodes' caches to a valid state. Also, a
single uplink confirmation for d_x in the uncertain state moves
the uncertain entries in all awake Mobile Nodes' caches to
either the valid or the Local-Cache state. The uncertain state
also lets a Mobile Node's cache keep all of its valid data
objects after it wakes up from a period of sleep. In contrast,
in the SSUM algorithm, every invalidated data object is
completely removed from the Mobile Node's cache. This leaves
little cooperation among the Mobile Nodes, creating a dramatic
increase in traffic volume between the CDP and the Mobile Nodes
as the number of Mobile Nodes increases (see Section 4). Holding
the invalid data objects would improve the scalability of SSUM,
but it would decrease cache efficiency, because full invalid
data objects, rather than just IDs as in our SDCI approach,
would occupy the Mobile Node's cache.
In contrast to the AS scheme, which needs O(MN) buffer space at
the CDP to keep all the Mobile Nodes' cache state, our procedure
requires only one bit per data object at the CDP, indicating
whether a broadcast is necessary when the data object is
updated. Also, the added management overhead is minimal, as it
needs only a single bit check and set/reset.
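The storage claim above is easy to quantify. The numbers below are illustrative, not from the paper: with M Mobile Nodes and N data objects, per-node state costs at least M·N bits, while SDCI's flag vector costs N bits, a factor-of-M saving.

```python
# Back-of-envelope sketch of the O(MN) vs O(N) storage comparison.
# M and N are illustrative sizes, not values from the paper's setup.
M, N = 1000, 13000           # mobile nodes, data objects

as_state_bits = M * N        # AS scheme: at least one bit per (MN, object) pair
sdci_flag_bits = N           # SDCI: a single f_x bit per object at the CDP

saving_factor = as_state_bits // sdci_flag_bits
print(saving_factor)         # -> 1000, i.e., a factor of M
```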
F. Mobility
Generally, synchronous stateless procedures handle Mobile Node
mobility by presuming that all CDPs broadcast the same CUAs.
But when the number of CDPs is large, such systems do not scale,
and there is no alternative way of handling Mobile Node mobility
in the literature. In our approach, a Mobile Node roaming into a
new cell is treated as if it had just woken up from the sleep
state, i.e., every valid data object is set to the uncertain
state. This approach guarantees consistency while retaining
every valid data object. Moreover, SDCI is simple because it is
transparent to the CDPs involved. In this approach, the roaming
effect is nothing but the addition of a new sleep-wake-up
pattern and should not have any substantial impact on overall
performance. This paper therefore emulates the roaming effect by
a sleep-wake-up pattern.
4. PERFORMANCE EVALUATION BY
SIMULATION
A. Simulation Setup
We consider a single-cell system with one CDP database and
multiple Mobile Nodes with identical cache sizes. The request
process for each Mobile Node is presumed to be Poisson, and the
update processes for the data objects are also Poisson.
Moreover, the sleep-wake-up process is modeled as a two-state
Markov chain.
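The two stochastic models above can be sketched as follows. This is a minimal illustration, not the paper's simulator: the rates and transition probabilities are arbitrary placeholders, and time is slotted for the Markov chain.

```python
import random

random.seed(1)   # deterministic run for illustration

def poisson_arrivals(rate, horizon):
    """Arrival times of a Poisson process (exponential gaps) on [0, horizon)."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= horizon:
            return times
        times.append(t)

def sleep_wake_trace(p_sleep, p_wake, steps):
    """Two-state Markov chain per time slot: 1 = awake, 0 = asleep."""
    state, trace = 1, []
    for _ in range(steps):
        trace.append(state)
        if state == 1 and random.random() < p_sleep:
            state = 0
        elif state == 0 and random.random() < p_wake:
            state = 1
    return trace

requests = poisson_arrivals(rate=0.5, horizon=100.0)   # ~50 requests expected
trace = sleep_wake_trace(p_sleep=0.1, p_wake=0.3, steps=200)
```

In a full simulator, one such request process and one such sleep-wake trace would be generated per Mobile Node.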
In our simulation, we used a single channel with bandwidth CB
for both downlink and uplink data transmission. Every message is
queued and serviced on a first-come-first-served basis. Every
request is ignored while a Mobile Node is in the sleep state.
Error recovery cost and software overhead are also neglected.
Whenever a requested data object is available in a Mobile Node's
local cache, the delay (AQD) is counted as 0. A Zipf-like Mobile
Node access pattern [11] is used in the simulation.
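Under a Zipf-like pattern [11], the object of popularity rank i is requested with probability proportional to 1/i^z. A minimal inverse-CDF sampler is sketched below; the catalog size and exponent are illustrative, not the paper's exact settings.

```python
import random

random.seed(7)   # reproducible illustration

def zipf_sampler(n, z):
    """Return a function sampling ranks 1..n with P(rank=i) ∝ 1 / i**z."""
    weights = [1.0 / (i ** z) for i in range(1, n + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    def sample():
        u = random.random()
        for rank, c in enumerate(cdf, start=1):   # linear inverse-CDF scan
            if u <= c:
                return rank
        return n
    return sample

sample = zipf_sampler(n=1000, z=1.0)
ranks = [sample() for _ in range(10000)]
# A small set of popular (low-rank) objects dominates the request stream.
```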
The following subsections give a comprehensive comparison of the
suggested SDCI with the SSUM algorithm using these metrics for
three distinct cases. The average query delay (AQD) is a
significant measure of system performance: the shorter the AQD,
the better the performance. The number of uplinks per query is
related to the cache hit ratio. Whenever a Mobile Node receives
a query, a cache hit is counted if the queried data object is
valid in the cache, and no uplink is required for that query.
So, the higher the hit ratio, the fewer uplinks per query.
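The relation between the two metrics can be written down directly. The helper below is an illustrative sketch assuming exactly one uplink message per cache miss, in which case uplinks per query is simply 1 minus the hit ratio.

```python
def metrics(queries, cache_valid):
    """Hit ratio and uplinks-per-query, assuming one uplink per miss."""
    hits = sum(1 for q in queries if cache_valid(q))
    hit_ratio = hits / len(queries)
    return hit_ratio, 1 - hit_ratio

# Hypothetical example: only object "a" is valid in the cache.
hit, upq = metrics(["a", "b", "a", "c"], lambda q: q == "a")
```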
B. Case Studies
Three cases are given here. In each case, b_u = 64 and b_d = 64
for SDCI, and b_u = 10 and b_d = 10 for SSUM [12].
The bandwidth is set as CB = 10000 bps. Cache size is in units
of the number of data objects, and the maximum number of
ID-only entries is also fixed in the first two cases. Data
object sizes and data object update frequencies will be provided
later in detail. Each case study examines the effect of one
parameter.
Case 1: Effect of CDP Database Size
Table 1 shows the simulation results for the AQD of the two
algorithms with distinct database sizes.
From Table 1 we see that SDCI outperforms SSUM [12] on both
performance metrics for all database sizes (N). When the
database size reaches approximately 13000, the AQD for SSUM
exceeds 160 sec. For SDCI, in contrast, we observe only a slow
increase of the performance metrics as N increases all the way
up to approximately 13000.
In brief, the performance of SDCI is insensitive to the database
size N, so SDCI scales to large databases. The performance of
SSUM, however, is sensitive to the database size, specifically
in terms of delay, so SSUM does not scale to large databases.
Case 2: Effect of Data Update Rate
The lower halves of Tables 1 and 2 show the simulation results
for the effect of the update interval t_u. Again, SDCI
outperforms SSUM [12] on all performance metrics. As expected,
as t_u declines, both performance metrics increase for both
algorithms. But SDCI shows slower rates of increase than SSUM on
all metrics. In particular, when t_u declines to 10 sec, the
delay of SSUM becomes very large (about 50 sec), beyond
acceptance, while SDCI achieves a considerable delay performance
gain over SSUM at this update interval. For uplinks per query,
SDCI outperforms SSUM in the range of 5 to 30 percent.
Case 3: Effect of the Number of Mobile Nodes
In this case study, we consider a system close to a real
situation. We presume that a cell has 5 distinct Mobile Node
query pattern groups, with mean query interval (10 + 50*i) sec,
s = 0.9 - 0.2*i, and T_s = 500*(i + 1) sec, for i = 0, 1, 2, 3,
and 4.
Each query pattern group has M/5 Mobile Nodes. We presume the
query patterns for all groups follow a Zipf-like (z = 1)
distribution. The access popularity ranking of neighboring
groups is shifted by 10, i.e., group 1 has declining popularity
from data object 1 to 1000, group 2 from 11 to 1000 and then
from 1 to 10, and so on.
We also presume that there are 5 types of data objects in the
CDP, with sizes b_p = 500*(i + 1) and update intervals
T_u = 10^(i+1) sec, for i = 0, 1, 2, 3, and 4.
Table 1: Average query delay

(a) Per number of records
      | 100    | 200    | 400    | 800    | 1600   | 3200   | 6400   | 12800
SDCI  | 0.165  | 0.298  | 0.429  | 0.564  | 0.679  | 0.784  | 0.896  | 1.015
SSUM  | 12.314 | 12.974 | 13.862 | 14.439 | 14.994 | 15.494 | 17.465 | 161.977

(b) Per number of transactions
      | 10     | 40   | 160    | 640    | 2560   | 10240
SDCI  | 3.004  | 1.6  | 0.934  | 0.667  | 0.596  | 0.573
SSUM  | 50.006 | 36.9 | 17.139 | 15.488 | 15.096 | 14.879
Table 2: Uplink per query

(a) Per number of records
      | 100   | 200   | 400   | 800   | 1600  | 3200  | 6400  | 12800
SDCI  | 0.234 | 0.334 | 0.428 | 0.495 | 0.558 | 0.599 | 0.648 | 0.673
SSUM  | 0.756 | 0.793 | 0.832 | 0.861 | 0.893 | 0.895 | 0.922 | 0.924

(b) Per number of transactions
      | 10    | 40    | 160   | 640   | 2560  | 10240
SDCI  | 0.838 | 0.717 | 0.584 | 0.509 | 0.513 | 0.520
SSUM  | 0.999 | 0.986 | 0.956 | 0.893 | 0.839 | 0.825

Fig 3: Line chart comparison of average query delay between SDCI and SSUM, (a) per number of records and (b) per number of transactions

Fig 4: Line chart comparison of uplink per query between SDCI and SSUM, (a) per number of records and (b) per number of transactions
Each data type group has N/5 data objects.
The parameter values selected above reflect the observation that
a Mobile Node that queries frequently usually has a shorter
awake time, and that a faster-updated data object generally has
a smaller size. The cache size is 150 Kbytes. The maximum number
of ID-only entries that can be kept in each Mobile Node cache is
set to 100.
Figures 3 and 4 show the AQD and uplink-per-query comparisons
between SDCI and SSUM, respectively. As we can observe, SDCI
scales much better than SSUM on both performance metrics.
CONCLUSIONS AND FUTURE WORK
In this paper, we proposed Scalable Distributed Cache Indexing
for Cache Consistency (SDCI) for mobile environments. The three
key features of SDCI are: (i) the use of standard bits at the
CDP and in Mobile Nodes' caches to maintain cache consistency;
(ii) the use of a Local-Cache state for each entry in a Mobile
Node's cache after a data object is invalidated; and (iii)
setting all valid data entries to the uncertain state after a
Mobile Node wakes up. These features make the suggested
algorithm highly scalable and efficient. Put simply, SDCI is a
hybrid of stateful and stateless algorithms. Unlike stateful
algorithms, SDCI maintains only one flag bit for each data
object at the CDP to determine when to broadcast CUAs. On the
other hand, unlike prevailing synchronous stateless procedures,
SDCI does not need periodic broadcasts of CUAs. SDCI therefore
greatly reduces the IR messages that must be sent through the
downlink broadcast channel, and it inherits the positive
features of both stateful and stateless algorithms.
Comprehensive simulation results show that the suggested
algorithm achieves higher performance than the SSUM scheme.
Future work will include studying the effect of distinct
replacement algorithms on the performance of SDCI. Future study
will also examine the CDP cache management algorithm and the
effective transfer of cached data objects among CDPs when Mobile
Nodes roam among distinct CDPs.
REFERENCES
[1] G. Forman and J. Zahorjan, "The challenge of mobile computing", IEEE Computer, 27(6), pp. 38-47, April 1994.
[2] D. Barbara and T. Imielinski, "Sleepers and Workaholics: caching strategy in mobile environments", In Proceedings of the ACM SIGMOD Conference on Management of Data, pp. 1-12, 1994.
[3] J. Jing, A. Elmagarmid, A. Helal, and R. Alonso, "Bit-sequences: an adaptive cache invalidation method in mobile client/server environments", Mobile Networks and Applications, pp. 115-127, 1997.
[4] Q. Hu and D.K. Lee, "Cache algorithms based on adaptive invalidation reports for mobile environments", Cluster Computing, pp. 39-50, 1998.
[5] K.L. Wu, P.S. Yu and M.S. Chen, "Energy-efficient caching for wireless mobile computing", In 20th International Conference on Data Engineering, pp. 336-345, 1996.
[6] G. Cao, "A scalable low-latency cache invalidation strategy for mobile environments", ACM Intl. Conf. on Computing and Networking (Mobicom), pp. 200-209, August 2001.
[7] K. Tan, J. Cai and B. Ooi, "An evaluation of cache invalidation strategies in wireless environments", IEEE Trans. on Parallel and Distributed Systems, 12(8), pp. 789-897, 2001.
[8] A. Kahol, S. Khurana, S.K.S. Gupta and P.K. Srimani, "A strategy to manage cache consistency in a distributed mobile wireless environment", IEEE Trans. on Parallel and Distributed Systems, 12(7), pp. 686-700, 2001.
[9] H. Yu, L. Breslau and S. Shenker, "A scalable web cache consistency architecture", In Proceedings of the ACM SIGCOMM, August 1999.
[10] J. Gwertzman and M. Seltzer, "World-Wide Web cache consistency", In Proceedings of the USENIX Symposium on Internet Technologies and Systems, December 1997.
[11] L. Breslau, P. Cao, L. Fan, G. Phillips and S. Shenker, "Web caching and Zipf-like distributions: evidence and implications", Proceedings of IEEE INFOCOM '99, pp. 126-134, 1999.
[12]Khaleel Mershad and Hassan Artail, Senior Member,
IEEE; "SSUM: Smart Server Update Mechanism for
Maintaining Cache Consistency in Mobile Environments";
IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL.
9, NO. 6, JUNE 2010
[13] A. Elmagarmid, J. Jing, A. Helal, and C. Lee, “Scalable
Cache Invalidation Algorithms for Mobile Data Access,”
IEEE Trans. Knowledge and Data Eng., vol. 15, no. 6, pp.
1498-1511, Nov. 2003.
[14] H. Jin, J. Cao, and S. Feng, “A Selective Push Algorithm
for Cooperative Cache Consistency Maintenance over
MANETs,” Proc. Third IFIP Int’l Conf. Embedded and
Ubiquitous Computing, Dec. 2007.
[15] IEEE Standard 802.11, Wireless LAN Medium Access
Control (MAC) and Physical Layer (PHY) Specification,
IEEE, 1999.
[16] J. Jing, A. Elmagarmid, A. Helal, and R. Alonso, “Bit-
Sequences: An Adaptive Cache Invalidation Method in
Mobile Client/Server Environments,” Mobile Networks and
Applications, vol. 15, no. 2, pp. 115-127, 1997.
[17] J. Jung, A.W. Berger, and H. Balakrishnan, “Modeling
TTL-Based Internet Caches,” Proc. IEEE INFOCOM, Mar.
2003.
[18] X. Kai and Y. Lu, “Maintain Cache Consistency in
Mobile Database Using Dynamical Periodical Broadcasting
Strategy,” Proc. Second Int’l Conf. Machine Learning and
Cybernetics, pp. 2389- 2393, 2003.
[19] B. Krishnamurthy and C.E. Wills, “Piggyback Server
Invalidation for Proxy Cache Coherency,” Proc. Seventh
World Wide Web (WWW) Conf., Apr. 1998.
[20] B. Krishnamurthy and C.E. Wills, “Study of Piggyback
Cache Validation for Proxy Caches in the World Wide Web,”
Proc. USENIX Symp. Internet Technologies and Systems,
Dec. 1997.
[21] D. Li, P. Cao, and M. Dahlin, “WCIP: Web Cache
Invalidation Protocol,” IETF Internet Draft,
http://tools.ietf.org/html/draftdanli- wrec-wcip-01, Mar. 2001.
[22] W. Li, E. Chan, Y. Wang, and D. Chen, “Cache
Invalidation Strategies for Mobile Ad Hoc Networks,” Proc.
Int’l Conf. Parallel Processing, Sept. 2007.
[23] S. Lim, W.-C. Lee, G. Cao, and C.R. Das, “Performance
Comparison of Cache Invalidation Strategies for Internet-
Based Mobile-Ad Hoc Networks,” Proc. IEEE Int’l Conf.
Mobile Ad-Hoc and Sensor Systems, pp. 104-113, Oct. 2004.
[24] M.N. Lima, A.L. dos Santos, and G. Pujolle, “A Survey
of Survivability in Mobile Ad Hoc Networks,” IEEE Comm.
Surveys and Tutorials, vol. 11, no. 1, pp. 66-77, First Quarter
2009.
[25] H. Maalouf and M. Gurcan, “Minimisation of the Update
Response Time in a Distributed Database System,”
Performance Evaluation, vol. 50, no. 4, pp. 245-266, 2002.
[26] P. Papadimitratos and Z.J. Haas, “Secure Data
Transmission in Mobile Ad Hoc Networks,” Proc. ACM
Workshop Wireless Security (WiSe ’03), pp. 41-50, 2003.
[27] J.P. Sheu, C.M. Chao, and C.W. Sun, “A Clock
Synchronization Algorithm for Multi-Hop Wireless Ad Hoc
Networks,” Proc. 24th Int’l Conf. Distributed Computing
Systems, pp. 574-581, 2004.
[28] W. Stallings, Cryptography and Network Security, fourth
ed. Prentice Hall, 2006.
[29] D. Wessels, “Squid Internet Object Cache,” http://www.
squid-cache.org, Aug. 1998.
[30] J. Xu, X. Tang, and D. Lee, “Performance Analysis of
Location- Dependent Cache Invalidation Schemes for Mobile
Environments,” IEEE Trans. Knowledge and Data Eng., vol.
15, no. 2, pp. 474-488, Feb. 2003.
[31] L. Yin, G. Cao, and Y. Cai, “A Generalized Target
Driven Cache Replacement Policy for Mobile Environments,”
Proc. Int’l Symp. Applications and the Internet (SAINT ’03),
Jan. 2003.