A major role is played in the layout and evaluation of any empirical wireless structure to manifest is the goal of this paper that counterfeit mode architectures affect counterfeit conduct, regarding structure accomplishment metrics, essentially and therefore, the excellent architecture should be explored in order to accomplish the most accurate and reliable results. It is found that the most analytical factors it is found that that actuate counterfeit mode accomplishment are counterfeit time, structure event organizing and
grade of adequate. It is, also, found that counterfeit time in relation to event existence in the real structure
along with the usage of modern architectural concepts such as multi-interweave technology complement
analytical issues too in the advancement of an adequate counterfeit organization for wireless communications. In order to evaluate the above findings an extensive empirical review has been
demeanored analysising several distinct events counterfeitorganizations towards presenting the relation
between channel designing collections, counterfeit time and structure accomplishment.
EOE-DRTSA: end-to-end distributed real-time system scheduling algorithmDr Amira Bibo
In this paper, scheduling dependent threads in distributed real-time
system where considered. We present a distributed real-time
scheduling algorithm called (EOE-DRTSA (end-to-end distributed
real time system Scheduling algorithm)). Now a day completed realtime
systems are distributed. One of least developed areas of realtime
scheduling is distributed scheduling where in Distributed
systems action and information timeliness is often end-to-end.
Designers and users of distributed systems often need to dependably
reason about end-to-end timeliness. Our scheduling model includes
threads and their time constraints depend on developed DTUF value
and maintaining end-to-end prosperities of distributed real-time
system.
A DENIAL OF SERVICE STRATEGY TO ORCHESTRATE STEALTHY ATTACK PATTERNS IN CLOUD...IAEME Publication
The triumph of the cloud computing archetype is owing to its on-demand, self-service, and pay-by-use nature. The possessions of Denial of Service (DoS) attacks engross not only the quality of the delivered service, but also the service continuance costs in terms of reserve utilization. Explicitly, the longer the detection delay is, the elevated the costs to be incurred. Consequently, a fastidious consideration has to be paid for stealthy DoS attacks. They aim at minimizing their visibility, and are sophisticated attacks adapted to influence the worst-case recital of the target system through definite periodic, pulsing, and low-rate traffic patterns. A strategy to orchestrate stealthy attack patterns, which reveal a slowly-increasing-intensity inclination premeditated to impose the utmost financial cost to the cloud Customer has been proposed, while relating to the job size and the service advent rate obligatory by the detection mechanisms. It is described both how to apply the proposed strategy, and its effects on the target system deployed in the cloud.
A REVIEW OF MEMORY UTILISATION AND MANAGEMENT RELATED ISSUES IN WIRELESS SENS...IAEME Publication
In WSN applications, conditions do arise where it is mandatory to update existing firmware residing in each node with the aim of remotely effecting essential software improvements. Classically, programming the nodes involves the use of wired communication Interface, however using these links remotely is not feasible. In addition, it is economically impracticable to reprogram each node in the field especially when they can’t be Reach and are in large numbers. Hence, the only viable option left is to employ a reliable over-the-air update process.
Updatable Queue Protocol Based On TCP For Virtual Reality Environment IJCSEA Journal
The variance in number and types of tasks required to be implemented within Distributed Virtual Environments (DVE) highlights the needs for communication protocols can achieve consistency. In addition, these applications have to handle an increasing number of participants and deal with the difficult problem of scalability. Moreover, the real-time requirements of these applications make the scalability problem more difficult to solve. In this paper, we have implemented Updatable Queue Abstraction protocol (UQA) on TCP (TCP-UQA) and compared it with original TCP, UDP, and Updatable Queue Abstraction based on UDP (UDP-UQA) protocols. Results showed that TCP-UQA was the best in queue management.
Clock synchronization: Clocks, events and process states, Synchronizing physical clocks,
Logical time and logical clocks, Lamport’s Logical Clock, Global states, Distributed mutual
exclusion algorithms: centralized, decentralized, distributed and token ring algorithms,
election algorithms, Multicast communication.
Distributed Objects and Remote Invocation: Communication between distributed objects
Remote procedure call, Events and notifications, operating system layer Protection, Processes
and threads, Operating system architecture. Introduction to Distributed shared memory,
Design and implementation issue of DSM.Case Study: CORBA and JAVA RMI.
GENERATIVE SCHEDULING OF EFFECTIVE MULTITASKING WORKLOADS FOR BIG-DATA ANALYT...IAEME Publication
This document proposes an evolutionary ordinal optimization (eOO) approach for scheduling dynamic and multitasking workloads for big data analytics in cloud computing environments. The eOO approach iteratively applies ordinal optimization to obtain suboptimal schedules faster than exhaustive searching, while adapting to workload fluctuations over time. Experimental results show the eOO approach achieves up to 30% higher task throughput compared to existing Monte Carlo and blind pick scheduling methods.
IRJET- Dynamic Resource Allocation of Heterogeneous Workload in CloudIRJET Journal
This document discusses dynamic resource allocation of heterogeneous workloads in cloud computing. It proposes a new approach called SAMR that aims to minimize resource allocation delay and reduce energy waste. SAMR uses a Markov chain model to predict workloads and determine the optimal number of active physical machines. It then allocates virtual machines to machines using its scheduling algorithm. The goal is to balance minimizing user delay through fast allocation while avoiding underutilization of resources that wastes energy. The existing approaches have drawbacks like high computational cost for algorithms like Viterbi used in hidden Markov models. SAMR aims to improve on this with its predictions of workloads and dynamic provisioning approach.
EOE-DRTSA: end-to-end distributed real-time system scheduling algorithmDr Amira Bibo
In this paper, scheduling dependent threads in distributed real-time
system where considered. We present a distributed real-time
scheduling algorithm called (EOE-DRTSA (end-to-end distributed
real time system Scheduling algorithm)). Now a day completed realtime
systems are distributed. One of least developed areas of realtime
scheduling is distributed scheduling where in Distributed
systems action and information timeliness is often end-to-end.
Designers and users of distributed systems often need to dependably
reason about end-to-end timeliness. Our scheduling model includes
threads and their time constraints depend on developed DTUF value
and maintaining end-to-end prosperities of distributed real-time
system.
A DENIAL OF SERVICE STRATEGY TO ORCHESTRATE STEALTHY ATTACK PATTERNS IN CLOUD...IAEME Publication
The triumph of the cloud computing archetype is owing to its on-demand, self-service, and pay-by-use nature. The possessions of Denial of Service (DoS) attacks engross not only the quality of the delivered service, but also the service continuance costs in terms of reserve utilization. Explicitly, the longer the detection delay is, the elevated the costs to be incurred. Consequently, a fastidious consideration has to be paid for stealthy DoS attacks. They aim at minimizing their visibility, and are sophisticated attacks adapted to influence the worst-case recital of the target system through definite periodic, pulsing, and low-rate traffic patterns. A strategy to orchestrate stealthy attack patterns, which reveal a slowly-increasing-intensity inclination premeditated to impose the utmost financial cost to the cloud Customer has been proposed, while relating to the job size and the service advent rate obligatory by the detection mechanisms. It is described both how to apply the proposed strategy, and its effects on the target system deployed in the cloud.
A REVIEW OF MEMORY UTILISATION AND MANAGEMENT RELATED ISSUES IN WIRELESS SENS...IAEME Publication
In WSN applications, conditions do arise where it is mandatory to update existing firmware residing in each node with the aim of remotely effecting essential software improvements. Classically, programming the nodes involves the use of wired communication Interface, however using these links remotely is not feasible. In addition, it is economically impracticable to reprogram each node in the field especially when they can’t be Reach and are in large numbers. Hence, the only viable option left is to employ a reliable over-the-air update process.
Updatable Queue Protocol Based On TCP For Virtual Reality Environment IJCSEA Journal
The variance in number and types of tasks required to be implemented within Distributed Virtual Environments (DVE) highlights the needs for communication protocols can achieve consistency. In addition, these applications have to handle an increasing number of participants and deal with the difficult problem of scalability. Moreover, the real-time requirements of these applications make the scalability problem more difficult to solve. In this paper, we have implemented Updatable Queue Abstraction protocol (UQA) on TCP (TCP-UQA) and compared it with original TCP, UDP, and Updatable Queue Abstraction based on UDP (UDP-UQA) protocols. Results showed that TCP-UQA was the best in queue management.
Clock synchronization: Clocks, events and process states, Synchronizing physical clocks,
Logical time and logical clocks, Lamport’s Logical Clock, Global states, Distributed mutual
exclusion algorithms: centralized, decentralized, distributed and token ring algorithms,
election algorithms, Multicast communication.
Distributed Objects and Remote Invocation: Communication between distributed objects
Remote procedure call, Events and notifications, operating system layer Protection, Processes
and threads, Operating system architecture. Introduction to Distributed shared memory,
Design and implementation issue of DSM.Case Study: CORBA and JAVA RMI.
GENERATIVE SCHEDULING OF EFFECTIVE MULTITASKING WORKLOADS FOR BIG-DATA ANALYT...IAEME Publication
This document proposes an evolutionary ordinal optimization (eOO) approach for scheduling dynamic and multitasking workloads for big data analytics in cloud computing environments. The eOO approach iteratively applies ordinal optimization to obtain suboptimal schedules faster than exhaustive searching, while adapting to workload fluctuations over time. Experimental results show the eOO approach achieves up to 30% higher task throughput compared to existing Monte Carlo and blind pick scheduling methods.
IRJET- Dynamic Resource Allocation of Heterogeneous Workload in CloudIRJET Journal
This document discusses dynamic resource allocation of heterogeneous workloads in cloud computing. It proposes a new approach called SAMR that aims to minimize resource allocation delay and reduce energy waste. SAMR uses a Markov chain model to predict workloads and determine the optimal number of active physical machines. It then allocates virtual machines to machines using its scheduling algorithm. The goal is to balance minimizing user delay through fast allocation while avoiding underutilization of resources that wastes energy. The existing approaches have drawbacks like high computational cost for algorithms like Viterbi used in hidden Markov models. SAMR aims to improve on this with its predictions of workloads and dynamic provisioning approach.
Adaptive job scheduling with load balancing for workflow applicationiaemedu
This document discusses adaptive job scheduling with load balancing for workflow applications in a grid platform. It begins with an abstract that describes grid computing and how scheduling plays a key role in performance for grid workflow applications. Both static and dynamic scheduling strategies are discussed, but they require high scheduling costs and may not produce good schedules. The paper then proposes a novel semi-dynamic algorithm that allows the schedule to adapt to changes in the dynamic grid environment through both static and dynamic scheduling. Load balancing is incorporated to handle situations where jobs are delayed due to resource fluctuations or overloading of processors. The rest of the paper outlines the related works, proposed scheduling algorithm, system model, and evaluation of the approach.
Scheduling -Issues in Load Distributing, Components for Load Distributing Algorithms,
Different Types Distributed of Load Distributing Algorithms, Fault-tolerant services Highly
available services, Introduction to Distributed Database and Multimedia system
Introduction: Definition, Design Issues, Goals, Types of distributed systems, Centralized
Computing, Advantages of Distributed systems over centralized system .Limitation of
Distributed systems Architectural models of distributed system, Client-server
communication, Introduction to DCE
This document summarizes research on transaction reordering techniques. It discusses transaction reordering approaches based on reducing resource conflicts and increasing resource sharing. Specifically, it covers:
1) A "steal-on-abort" technique that reorders an aborted transaction behind the transaction that caused the abort to avoid repeated conflicts.
2) A replication protocol that attempts to reorder transactions during certification to avoid aborts rather than restarting immediately.
3) Transaction reordering and grouping during continuous data loading to prevent deadlocks when loading data for materialized join views.
This document proposes using a linear prediction model to detect a wide range of flooding distributed denial of service (DDoS) attacks. It models the entropy of incoming network traffic over time using a linear prediction technique commonly applied to financial time series. The model is tested on simulated network data containing normal traffic and introduced attacks of varying rates. Results show the linear prediction model can successfully detect attacks with low rates and delays by identifying anomalies in the modeled entropy time series compared to normal traffic patterns. This approach aims to provide a fast and effective method for detecting different types of flooding DDoS attacks.
DFAA- A Dynamic Flow Aggregation Approach Against SDDOS Attacks in CloudIRJET Journal
This document proposes a new method called DFAA (Dynamic Flow Aggregation Approach) to detect periodic shrew distributed denial of service (DDoS) attacks in cloud computing. The method uses frequency-domain characteristics extracted from the autocorrelation of network flows as clustering features. It groups end-user flows using the BIRCH clustering algorithm and then refines the clusters. The evaluation shows the method can categorize abnormal network flows with fast response times and high detection accuracy, while avoiding lower impact groups of abnormal flows.
This document discusses resource management techniques in distributed systems. It covers three main scheduling techniques: task assignment approach, load balancing approach, and load sharing approach. It also outlines desirable features of good global scheduling algorithms such as having no a priori knowledge about processes, being dynamic in nature, having quick decision-making capability, balancing system performance and scheduling overhead, stability, scalability, fault tolerance, and fairness of service. Finally, it discusses policies for load estimation, process transfer, state information exchange, location, priority assignment, and migration limiting that distributed load balancing algorithms employ.
A fault tolerant tokenbased atomic broadcast algorithm relying on responsive ...Neelamani Samal
This document summarizes a fault tolerant token-based atomic broadcast algorithm that relies on an unreliable failure detector and satisfies the responsive property. The algorithm aims to tolerate processor-level failures in a distributed system. It divides a job into tasks, uses a token to control access to shared resources, and monitors task execution times. If a task does not respond within the timeout period, it is declared faulty and removed from the ready queue. The algorithm was implemented on a multi-core processor to simulate fault tolerance capabilities in a distributed system within a specified time interval.
Integration of feature sets with machine learning techniquesiaemedu
This document summarizes a research paper that proposes a novel approach for spam filtering using selective feature sets combined with machine learning techniques. The paper presents an algorithm and system architecture that extracts feature sets from emails and uses machine learning to classify emails and generate rules to identify spam. Several metrics are identified to evaluate the efficiency of the feature sets, including false positive rate. An experiment is described that uses keyword lists as feature sets to train filters and compares the proposed approach to other spam filtering methods.
A survey on weighted clustering techniques in manetsIAEME Publication
This document summarizes existing weighted clustering techniques for mobile ad hoc networks (MANETs). It discusses clustering approaches that assign weights to nodes based on parameters like degree, transmission power, mobility, and battery power. The Weighted Clustering Algorithm (WCA) and Connectivity-Based Distributed Mobility-aware clustering algorithm (CBDM) are highlighted as approaches that use weighted factors to select cluster heads in a distributed manner. WCA uses four parameters - degree, transmission power, mobility, and battery power - while CBDM extends WCA by additionally considering connectivity and a more accurate battery power metric. Both aim to form stable clustering structures with low overhead through periodic re-evaluation of node weights and selection of cluster heads.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
EXPOSURE AND AVOIDANCE MECHANISM OF BLACK HOLE AND JAMMING ATTACK IN MOBILE A...ijcseit
Mobile ad hoc network (MANETs) is an infrastructure-less/self-configurable system in which every node
carries on as host or router and every node can participate in the transmission of packets. Because of its
dynamic behaviour such system is more susceptible against various sorts of security threats, for example,
Black hole, Wormhole , Jamming , Sybil, Byzantine attack and so on which may block the transmission of
the system. Black hole attack and Jamming attack is one of them which promote itself has shortest or new
fresh route to the destination while jamming attack which make activity over the system. This paper
introduces the thorough literature study for the Black hole attack and jamming attack of both the attack by
various researchers.
1) The document discusses quality of service (QoS)-aware data replication for data-intensive applications in cloud computing systems. It aims to minimize data replication cost and number of QoS violated replicas.
2) It presents a mathematical model and algorithm to optimally place QoS-satisfied and QoS-violated data replicas. The algorithm uses minimum-cost maximum flow to obtain the optimal placement.
3) The algorithm takes as input a set of requested nodes and outputs the optimal placement for QoS-satisfied and QoS-violated replicas by modeling the problem as a network flow graph and applying existing polynomial-time algorithms.
This document discusses resource management for computer operating systems. It argues that traditional OS architecture is outdated given changes in hardware and software. The authors propose an approach where the OS allocates resources like CPU cores, memory, and bandwidth to processes to optimize responsiveness based on penalty functions that model how run time affects user experience. The goal is to continuously minimize the total penalty by adjusting resource allocations over time as user needs and process requirements change.
A NOVEL SLOTTED ALLOCATION MECHANISM TO PROVIDE QOS FOR EDCF PROTOCOLIAEME Publication
The IEEE 802.11e EDCF mechanism cannot guarantee the QOS of high-priority traffic as the bandwidth consumption of the low-priority traffic increases. Also, in the presence of high priority traffic dampen link utilization of low priority traffic. To overcome these problems, we propose the Novel mechanism in our research that extends IEEE 802.11e EDCF by introducing a Super Slot and Virtual Collision. Compared to EDCF, our proposed approach has EDCF has two advantages: (a) Higher priority traffic achieves Quality of service regardless of the amount of low priority traffic, and (b) Low priority traffic obtains a higher throughput in the presence of same amount of high priority traffic.
This document discusses synchronization in distributed systems and various algorithms for achieving mutual exclusion. It covers centralized, distributed, and token ring algorithms for mutual exclusion. The centralized algorithm uses a coordinator but has a single point of failure. Distributed algorithms overcome this but require more messages. The token ring algorithm passes a token between processes but can lose the token if a process crashes. In comparing the algorithms, the document examines their message requirements, delay before entry, and potential problems.
Decrease in hardware costs and advances in computer networking technologies have led to increased interest in
the use of large-scale parallel and distributed computing systems. Distributed computing systems offer the potential for improved performance and resource sharing. In this paper we have made an overview on distributed computing. In this paper we studied the difference between parallel and distributed computing, terminologies used in distributed computing, task allocation in distributed computing and performance parameters in distributed computing system, parallel distributed algorithm models, and advantages of distributed computing and scope of distributed computing.
This document discusses the detection of distributed denial of service (DDoS) attacks using different classifiers on the UCLA dataset. It presents a system with modules for packet collection, preprocessing, feature extraction, training/testing data splitting, and classification using K-nearest neighbors (KNN), support vector machines (SVM), and naive Bayesian classifiers. The system is evaluated using metrics like accuracy, sensitivity, specificity, precision, F-measure, and time complexity. Experimental results on the UCLA dataset show that KNN achieved the best performance with 94% accuracy and 96% precision in classifying attack packets from normal packets.
A review of security attacks and intrusion detection schemes in wireless sens...ijwmn
Wireless sensor networks are currently the greatest innovation in the field of telecommunications. WSNs
have a wide range of potential applications, including security and surveillance, control, actuation and
maintenance of complex systems and fine-grain monitoring of indoor and outdoor environments. However
security is one of the major aspects of Wireless sensor networks due to the resource limitations of sensor
nodes. Those networks are facing several threats that affect their functioning and their life. In this paper we
present security attacks in wireless sensor networks, and we focus on comparison and analysis of recent
Intrusion Detection schemes in WSNs.
Due to the rapidly increasing data speed requirement, it has become essential to smartly utilize the available frequency spectrum. In wireless communications systems, channel quality parameters are often used to enable resource allocation techniques that improve system capacity and user quality. The uncoded bit or symbol error rate (SER) is specified as an important parameter in the second and third generation partnership project (3GPP). Nonetheless, techniques to estimate the uncoded SER are usually not much published. This paper introduces a novel uncoded bit error rate (BER) estimation method using the
accurate-bits sequence of the new channel codes over the AWGN channel. Here, we have used the new channel codes as a forward error correction coding scheme for our communication system. This paper also presents the simulation results to demonstrate and compare the estimation accuracy of the proposed method over the AWGN channel.
Adaptive job scheduling with load balancing for workflow applicationiaemedu
This document discusses adaptive job scheduling with load balancing for workflow applications in a grid platform. It begins with an abstract that describes grid computing and how scheduling plays a key role in performance for grid workflow applications. Both static and dynamic scheduling strategies are discussed, but they require high scheduling costs and may not produce good schedules. The paper then proposes a novel semi-dynamic algorithm that allows the schedule to adapt to changes in the dynamic grid environment through both static and dynamic scheduling. Load balancing is incorporated to handle situations where jobs are delayed due to resource fluctuations or overloading of processors. The rest of the paper outlines the related works, proposed scheduling algorithm, system model, and evaluation of the approach.
Scheduling -Issues in Load Distributing, Components for Load Distributing Algorithms,
Different Types Distributed of Load Distributing Algorithms, Fault-tolerant services Highly
available services, Introduction to Distributed Database and Multimedia system
Introduction: Definition, Design Issues, Goals, Types of distributed systems, Centralized
Computing, Advantages of Distributed systems over centralized system .Limitation of
Distributed systems Architectural models of distributed system, Client-server
communication, Introduction to DCE
This document summarizes research on transaction reordering techniques. It discusses transaction reordering approaches based on reducing resource conflicts and increasing resource sharing. Specifically, it covers:
1) A "steal-on-abort" technique that reorders an aborted transaction behind the transaction that caused the abort to avoid repeated conflicts.
2) A replication protocol that attempts to reorder transactions during certification to avoid aborts rather than restarting immediately.
3) Transaction reordering and grouping during continuous data loading to prevent deadlocks when loading data for materialized join views.
This document proposes using a linear prediction model to detect a wide range of flooding distributed denial of service (DDoS) attacks. It models the entropy of incoming network traffic over time using a linear prediction technique commonly applied to financial time series. The model is tested on simulated network data containing normal traffic and introduced attacks of varying rates. Results show the linear prediction model can successfully detect attacks with low rates and delays by identifying anomalies in the modeled entropy time series compared to normal traffic patterns. This approach aims to provide a fast and effective method for detecting different types of flooding DDoS attacks.
DFAA- A Dynamic Flow Aggregation Approach Against SDDOS Attacks in CloudIRJET Journal
This document proposes a new method called DFAA (Dynamic Flow Aggregation Approach) to detect periodic shrew distributed denial of service (DDoS) attacks in cloud computing. The method uses frequency-domain characteristics extracted from the autocorrelation of network flows as clustering features. It groups end-user flows using the BIRCH clustering algorithm and then refines the clusters. The evaluation shows the method can categorize abnormal network flows with fast response times and high detection accuracy, while avoiding lower impact groups of abnormal flows.
This document discusses resource management techniques in distributed systems. It covers three main scheduling techniques: task assignment approach, load balancing approach, and load sharing approach. It also outlines desirable features of good global scheduling algorithms such as having no a priori knowledge about processes, being dynamic in nature, having quick decision-making capability, balancing system performance and scheduling overhead, stability, scalability, fault tolerance, and fairness of service. Finally, it discusses policies for load estimation, process transfer, state information exchange, location, priority assignment, and migration limiting that distributed load balancing algorithms employ.
A fault tolerant tokenbased atomic broadcast algorithm relying on responsive ...Neelamani Samal
This document summarizes a fault tolerant token-based atomic broadcast algorithm that relies on an unreliable failure detector and satisfies the responsive property. The algorithm aims to tolerate processor-level failures in a distributed system. It divides a job into tasks, uses a token to control access to shared resources, and monitors task execution times. If a task does not respond within the timeout period, it is declared faulty and removed from the ready queue. The algorithm was implemented on a multi-core processor to simulate fault tolerance capabilities in a distributed system within a specified time interval.
Integration of feature sets with machine learning techniquesiaemedu
This document summarizes a research paper that proposes a novel approach for spam filtering using selective feature sets combined with machine learning techniques. The paper presents an algorithm and system architecture that extracts feature sets from emails and uses machine learning to classify emails and generate rules to identify spam. Several metrics are identified to evaluate the efficiency of the feature sets, including false positive rate. An experiment is described that uses keyword lists as feature sets to train filters and compares the proposed approach to other spam filtering methods.
A survey on weighted clustering techniques in manetsIAEME Publication
This document summarizes existing weighted clustering techniques for mobile ad hoc networks (MANETs). It discusses clustering approaches that assign weights to nodes based on parameters like degree, transmission power, mobility, and battery power. The Weighted Clustering Algorithm (WCA) and Connectivity-Based Distributed Mobility-aware clustering algorithm (CBDM) are highlighted as approaches that use weighted factors to select cluster heads in a distributed manner. WCA uses four parameters - degree, transmission power, mobility, and battery power - while CBDM extends WCA by additionally considering connectivity and a more accurate battery power metric. Both aim to form stable clustering structures with low overhead through periodic re-evaluation of node weights and selection of cluster heads.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
EXPOSURE AND AVOIDANCE MECHANISM OF BLACK HOLE AND JAMMING ATTACK IN MOBILE A...ijcseit
Mobile ad hoc network (MANETs) is an infrastructure-less/self-configurable system in which every node
carries on as host or router and every node can participate in the transmission of packets. Because of its
dynamic behaviour such system is more susceptible against various sorts of security threats, for example,
Black hole, Wormhole , Jamming , Sybil, Byzantine attack and so on which may block the transmission of
the system. Black hole attack and Jamming attack is one of them which promote itself has shortest or new
fresh route to the destination while jamming attack which make activity over the system. This paper
introduces the thorough literature study for the Black hole attack and jamming attack of both the attack by
various researchers.
1) The document discusses quality of service (QoS)-aware data replication for data-intensive applications in cloud computing systems. It aims to minimize data replication cost and number of QoS violated replicas.
2) It presents a mathematical model and algorithm to optimally place QoS-satisfied and QoS-violated data replicas. The algorithm uses minimum-cost maximum flow to obtain the optimal placement.
3) The algorithm takes as input a set of requested nodes and outputs the optimal placement for QoS-satisfied and QoS-violated replicas by modeling the problem as a network flow graph and applying existing polynomial-time algorithms.
This document discusses resource management for computer operating systems. It argues that traditional OS architecture is outdated given changes in hardware and software. The authors propose an approach where the OS allocates resources like CPU cores, memory, and bandwidth to processes to optimize responsiveness based on penalty functions that model how run time affects user experience. The goal is to continuously minimize the total penalty by adjusting resource allocations over time as user needs and process requirements change.
A NOVEL SLOTTED ALLOCATION MECHANISM TO PROVIDE QOS FOR EDCF PROTOCOLIAEME Publication
The IEEE 802.11e EDCF mechanism cannot guarantee the QOS of high-priority traffic as the bandwidth consumption of the low-priority traffic increases. Also, in the presence of high priority traffic dampen link utilization of low priority traffic. To overcome these problems, we propose the Novel mechanism in our research that extends IEEE 802.11e EDCF by introducing a Super Slot and Virtual Collision. Compared to EDCF, our proposed approach has EDCF has two advantages: (a) Higher priority traffic achieves Quality of service regardless of the amount of low priority traffic, and (b) Low priority traffic obtains a higher throughput in the presence of same amount of high priority traffic.
This document discusses synchronization in distributed systems and various algorithms for achieving mutual exclusion. It covers centralized, distributed, and token ring algorithms for mutual exclusion. The centralized algorithm uses a coordinator but has a single point of failure. Distributed algorithms overcome this but require more messages. The token ring algorithm passes a token between processes but can lose the token if a process crashes. In comparing the algorithms, the document examines their message requirements, delay before entry, and potential problems.
Decrease in hardware costs and advances in computer networking technologies have led to increased interest in
the use of large-scale parallel and distributed computing systems. Distributed computing systems offer the potential for improved performance and resource sharing. In this paper we have made an overview on distributed computing. In this paper we studied the difference between parallel and distributed computing, terminologies used in distributed computing, task allocation in distributed computing and performance parameters in distributed computing system, parallel distributed algorithm models, and advantages of distributed computing and scope of distributed computing.
This document discusses the detection of distributed denial of service (DDoS) attacks using different classifiers on the UCLA dataset. It presents a system with modules for packet collection, preprocessing, feature extraction, training/testing data splitting, and classification using K-nearest neighbors (KNN), support vector machines (SVM), and naive Bayesian classifiers. The system is evaluated using metrics like accuracy, sensitivity, specificity, precision, F-measure, and time complexity. Experimental results on the UCLA dataset show that KNN achieved the best performance with 94% accuracy and 96% precision in classifying attack packets from normal packets.
A review of security attacks and intrusion detection schemes in wireless sens...ijwmn
Wireless sensor networks are currently the greatest innovation in the field of telecommunications. WSNs
have a wide range of potential applications, including security and surveillance, control, actuation and
maintenance of complex systems and fine-grain monitoring of indoor and outdoor environments. However
security is one of the major aspects of Wireless sensor networks due to the resource limitations of sensor
nodes. Those networks are facing several threats that affect their functioning and their life. In this paper we
present security attacks in wireless sensor networks, and we focus on comparison and analysis of recent
Intrusion Detection schemes in WSNs.
Due to the rapidly increasing data speed requirement, it has become essential to smartly utilize the available frequency spectrum. In wireless communications systems, channel quality parameters are often used to enable resource allocation techniques that improve system capacity and user quality. The uncoded bit or symbol error rate (SER) is specified as an important parameter in the second and third generation partnership project (3GPP). Nonetheless, techniques to estimate the uncoded SER are usually not much published. This paper introduces a novel uncoded bit error rate (BER) estimation method using the
accurate-bits sequence of the new channel codes over the AWGN channel. Here, we have used the new channel codes as a forward error correction coding scheme for our communication system. This paper also presents the simulation results to demonstrate and compare the estimation accuracy of the proposed method over the AWGN channel.
Path constrained data gathering scheme for wireless sensor networks with mobi...ijwmn
This document summarizes a research paper on using mobile elements to gather data in wireless sensor networks. The paper proposes a heuristic algorithm called Graph Partitioning (GP) to address the K-Hop tour planning (KH-tour) problem of designing the shortest possible tour for a mobile element to visit a subset of sensor nodes called caching points, such that any node is at most k hops from the tour.
The GP algorithm works by first partitioning the sensor network graph into partitions where the depth of each partition is bounded by k hops. It then identifies the minimum number of caching points needed in each partition. Finally, it constructs the mobile element's tour to visit all the identified caching points.
Mobility models for delay tolerant network a surveyijwmn
Delay Tolerant Network (DTN) is an emerging networking technology that is widely used in the
environment where end-to-end paths do not exist. DTN follows store-carry-forward mechanism to route
data. This mechanism exploits the mobility of nodes and hence the performances of DTN routing and
application protocols are highly dependent on the underlying mobility of nodes and its characteristics.
Therefore, suitable mobility models are required to be incorporated in the simulation tools to evaluate DTN
protocols across many scenarios. In DTN mobility modelling literature, a number of mobility models have
been developed based on synthetic theory and real world mobility traces. Furthermore, many researchers
have developed specific application oriented mobility models. All these models do not provide accurate
evaluation in the all scenarios. Therefore, model selection is an important issue in DTN protocol
simulation. In this study, we have summarized various widely used mobility models and made a comparison
of their performances. Finally, we have concluded with future research directions in mobility modelling for
DTN simulation.
Intelligent transportation system (ITS) is an application which provides intelligence to the transportation
and traffic management systems. Although the word ITS applies to all systems in the transportation but as
per the European union directive it is the application of Information and communication technology in the
field of transportation is defined as ITS. The communication technology has evolved greatly today from
2G/3G to long term evolution (LTE). In this paper we focus on the LTE and its application in the ITS. Since
LTE offers excellent QoS, wide area coverage and high availability it is a preferred choice for vehicle to
infrastructure (V2I) service. At the same time the LTE customer base is increasing day by day which results
in congestion and accessing the network to send or request resources becomes difficult. In this paper we
have proposed a group based node selection algorithm to reduce the preamble ID collision otherwise this
uncoordinated preamble ID transmission by vehicle node (VN) will eventually clog the network and there
will be a massive congestion and re-transmissions attempts by VNs to obtain the random access channel
(RACH).
Stochastic analysis of random ad hoc networks with maximum entropy deploymentsijwmn
In this paper, we present the first stochastic analysis of the link performance of an ad hoc network modelled
by a single homogeneous Poisson point process (HPPP). According to the maximum entropy principle, the
single HPPP model is mathematically the best model for random deployments with a given node density.
However, previous works in the literature only consider a modified model which shows a discrepancy in the
interference distribution with the more suitable single HPPP model. The main contributions of this paper
are as follows. 1) It presents a new mathematical framework leading to closed form expressions of the
probability of success of both one-way transmissions and handshakes for a deployment modelled by a
single HPPP. Our approach, based on stochastic geometry, can be extended to complex protocols. 2) From
the obtained results, all confirmed by comparison to simulated data, optimal PHY and MAC layer
parameters are determined and the relations between them is described in details. 3) The influence of the
routing protocol on handshake performance is taken into account in a realistic manner, leading to the
confirmation of the intuitive result that the effect of imperfect feedback on the probability of success of a
handshake is only negligible for transmissions to the first neighbour node.
Data mining is important process to extract the useful information and pattern from huge amount of data.
NS-2 is an efficient tool to build the environment of network. The results from simulate these environment
in NS-2 is trace file that contains several columns and lines represent the network events. This trace file
can be used to analyse the network according to performance metrics but it has redundant columns and
rows. So, this paper is to perform the data mining in order to find only the necessary information in
analysis operation to reduce the execution time and the storage size of the trace file.
In ad hoc networks, routing plays a pertinent role. Deploying the appropriate routing protocol is very important in order to achieve best routing performance and reliability. Equally important is the mobility model that is used in the routing protocol. Various mobility models are available and each can have different impact on the performance of the routing protocol. In this paper, we focus on this issue by examining how the routing protocol, Optimized Link State Routing protocol, behaves as the mobility model is varied. For this, three random mobility models, viz., random waypoint, random walk and random direction are considered. The performance metrics used for assessment of Optimized Link State Routing protocol are Optimized Link State Routing protocol, end-to-end delay and packet delivery ratio.
An approach to dsr routing qos by fuzzy genetic algorithmsijwmn
This document summarizes a research paper that proposes improvements to the DSR routing protocol in mobile ad hoc networks through the use of genetic and fuzzy algorithms. The proposed GA-DSR protocol adds link costs to route request packets and has the destination node use the received route requests as input to a genetic algorithm to find the two best routes. It then sends these routes back to the source in a route reply packet. The protocol also uses fuzzy logic to dynamically adjust the route update period based on route error counts. The researchers believe this approach can help improve quality of service in DSR routing by selecting optimal paths based on link costs and maintaining up-to-date routes.
Bandwidth aware on demand multipath routing in manetsijwmn
The document proposes a modification to the AOMDV routing protocol to utilize available bandwidth in MANETs. It describes enhancing AOMDV to select multiple paths during route discovery based on available bandwidth, and using periodic detector packets to monitor bandwidth on alternate paths. Simulation results showed this bandwidth-aware multipath approach improves end-to-end throughput, packet delivery ratio, and end-to-end delay compared to the original AOMDV protocol.
A novel resource efficient dmms approach for network monitoring and controlli...ijwmn
In this paper, we propose a novel Distributed MANET Management System (DMMS) approach to use cross layer models to demonstrate a simplified way of efficiently managing the overall performance of individual network resources (nodes) and the network itself which is critical for not only monitoring the traffic, but also dynamically controlling the end-to-end Quality of Service (QoS) for different applications. In the proposed DMMS architecture, each network resource maintains a set of Management Information Base (MIB) elements and stores resource activities in their abstraction in terms of counters, timer, flag and threshold values. The abstract data is exchanged between different management agents residing in different resources on a need-to-know basis and each agent logically executes management functions locally to develop understanding of the behavior of all network resources to ensure that user protocols can function smoothly. However, in traditional network management systems, they collect statistical data such as resource usage and performance by spoofing of resources. The amount of data that is exchanged with other resources through management protocols that can be extremely high and the bandwidth for overhead management functions increases significantly. Also, the data storage requirements in each network resource for management functions increases and become inefficient as it increases the power usage for processing. Our proposed scheme targets at solving the problems.
Performance analysis of voip traffic over integrating wireless lan and wan us...ijwmn
A simulation model is presented to analyze and evaluate the performance of VoIP based integrated
wireless LAN/WAN with taking into account various voice encoding schemes. The network model was
simulated using OPNET Modeler software. Different parameters that indicate the QoS like MOS, jitter,
end to end delay, traffic send and traffic received are calculated and analyzed in Wireless LAN/WAN
scenarios. Depending on this evaluation, Selection codecs G.729A consider the best choice for VoIP.
Analysis of security threats in wireless sensor networkijwmn
Wireless Sensor Network(WSN) is an emerging technology and explored field of researchers worldwide
in the past few years, so does the need for effective security mechanisms. The sensing technology
combined with processing power and wireless communication makes it lucrative for being exploited in
abundance in future. The inclusion of wireless communication technology also incurs various types of
security threats due to unattended installation of sensor nodes as sensor networks may interact with
sensitive data and /or operate in hostile unattended environments. These security concerns be addressed
from the beginning of the system design. The intent of this paper is to investigate the security related
issues in wireless sensor networks. In this paper we have explored general security threats in wireless
sensor network with extensive study.
A cross layer delay-aware node disjoint multipath routing algorithm for mobil...ijwmn
Mobile Ad Hoc Networks (MANETS) require reliable routing and Quality of Service(QoS) mechanism to
support diverse applications with varying and stringent requirements for delay, jitter, bandwidth, packets
loss. Routing protocols such as AODV, AOMDV, DSR and OLSR use shortest path with minimum hop
count as the main metric for path selection, hence are not suitable for delay sensitive real time
applications. To support such applications delay constrained routing protocols are employed. These
Protocols makes path selection between source and destination based on the delay over the discovered
links during routing discovery and routing table calculations. We propose a variation of a node-disjoint
Multipath QoS Routing protocol called Cross Layer Delay aware Node Disjoint Multipath AODV (CLDMAODV)
based on delay constraint. It employs cross-layer communications between MAC and routing
layers to achieve link and channel-awareness. It regularly updates the path status in terms of lowest delay
incurred at each intermediate node. Performance of the proposed protocol is compared with single path
AODV and NDMR protocols. Proposed CLDM-AODV is superior in terms of better packet delivery and
reduced overhead between intermediate nodes.
New strategy to optimize the performance of spray and wait routing protocolijwmn
Delay Tolerant Networks have been (DTN) have been developed to support the irregular connectivity often
separate networks. The main routing problem in this type of network is embarrassed by time that is
extremely long, since connections are intermittent and opportunistic. Routing protocols must take into
account the maximum constraint encountered in this type of environment , use effective strategies
regarding the choice of relay nodes and buffer management nodes to improve the delivery of messages and
the time of their delivery . This article proposes a new strategy that optimizes the routing Spray and wait.
The proposed method uses the information contained in the messages delivered mostly paths traversed by
the messages before arriving at their destination and the time when nodes have receive these messages.
Simulation results show that the proposed strategy can increase the probability of delivery and minimizing
overhead unlike FIFO technology used with the default routing ' sprat and wait'
Virtual 2 d positioning system by using wireless sensors in indoor environmentijwmn
A 2D location detection system is constructed by using Wireless Sensor Nodes (WSN) to create aVirtual
Fingerprint map, specifically designed for use in an indoor environment. WSN technologies and
programmable ZigBee wireless network protocols are employed. This system is based on radio-location
fingerprinting technique. Both Linear taper functions and exponential taper functions are utilized with the
received signal strength distributions between the fingerprint nodes to generate virtual fingerprint maps.
Thus, areal and virtual combined fingerprint map is generated across the test area. K-nearest
neighborhood algorithm has been implemented on virtual fingerprint maps, in conjunction with weight
functions used to find the coordinates of the unknown objects. The system Localization accuracies of less
than a grid space areproved in calculations.
ENHANCED THREE TIER SECURITY ARCHITECTURE FOR WSN AGAINST MOBILE SINK REPLI...ijwmn
Recent developments on Wireless Sensor Networks have made their application in a wide range
such as military sensing and tracking, health monitoring, traffic monitoring, video surveillance and so on.
Wireless sensor nodes are restricted to computational resources, and are always deployed in a harsh,
unattended or unfriendly environment. Therefore, network security becomes a tough task and it involves
the authorization of admittance to data in a network. The problem of authentication and pair wise key
establishment in sensor networks with mobile sink is still not solved in the mobile sink replication attacks.
In q-composite key pre distribution scheme, a large number of keys are compromised by capturing a
small fraction of sensor nodes by the attacker. The attacker can easily take a control of the entire network
by deploying a replicated mobile sinks. Those mobile sinks which are preloaded with compromised keys
are used authenticate and initiate data communication with sensor node. To determine the above problem
the system adduces the three-tier security framework for authentication and pair wise key establishment
between mobile sinks and sensor nodes. The previous system used the polynomial key pre distribution
scheme for the sensor networks which handles sink mobility and continuous data delivery to the
neighbouring nodes and sinks, but this scheme makes high computational cost and reduces the life time of
sensors. In order to overcome this problem a random pair wise key pre distribution scheme is suggested
and further it helps to improve the network resilience. In addition to this an Identity Based Encryption is
used to encrypt the data and Mutual authentication scheme is proposed for the identification and
isolation of replicated mobile sink from the network.
Impact of client antenna’s rotation angle and height of 5g wi fi access point...ijwmn
This paper investigates the impact of antenna rotation’s angle at the receiver side and antenna height at
transmitter side on radio channel’s amount of fading. Amount of fading is considered as a measure of
severity of fading conditions in radio channels. It indicates how severe the fading level relative to Rayleigh
fading channel. The results give an input to optimize height of 5G Wi-Fi access point for better link
performance for different antenna’s rotation angles at receiver side. The investigation covers three
different indoor environments with different multipath dispersion levels in delay and direction domains;
lecture hall, corridor, and banquet hall.
Cellular wireless systems like GSM suffer from congestion resulting in overall system degradation and poor service delivery. When the traffic demand in a geographical area is high, the input traffic rate will exceed thecapacity of the output lines. This work focused on homogenous wireless network (the network traffic and resource dimensioning that are statistically identical) such that the network performance
evaluation can be reduced to a system with single cell and a single traffic type. Such system can employa queuing model to evaluate the performance metric of a cell in terms of blocking probability.
Five congestion control models were compared in the work to ascertain their peculiarities, they are Erlang B, Erlang C, Engset (cleared), Engset (buffered), and Bernoulli. To analyze the system, an aggregate onedimensional Markov chain wasderived, such that it describes a call arrival process under the assumption
that it is Poisson distributed. The models were simulated and their results show varying performances, however the Bernoulli model (Pb5) tends to show a situation that allows more users access to the system and the congestion level remain unaffected despite increase in the number of users and the offered traffic into the system.
Review and comparison of tasks scheduling in cloud computingijfcstjournal
Recently, there has been a dramatic increase in the popularity of cloud computing systems that rent
computing resources on-demand, bill on a pay-as-you-go basis, and multiplex many users on the same
physical infrastructure. It is a virtual pool of resources which are provided to users via Internet. It gives
users virtually unlimited pay-per-use computing resources without the burden of managing the underlying
infrastructure. One of the goals is to use the resources efficiently and gain maximum profit. Scheduling is a
critical problem in Cloud computing, because a cloud provider has to serve many users in Cloud
computing system. So scheduling is the major issue in establishing Cloud computing systems. The
scheduling algorithms should order the jobs in a way where balance between improving the performance
and quality of service and at the same time maintaining the efficiency and fairness among the jobs. This
paper introduces and explores some of the methods provided for in cloud computing has been scheduled.
Finally the waiting time and time to implement some of the proposed algorithm is evaluated
A Deterministic Model Of Time For Distributed SystemsJim Webb
This paper proposes a new deterministic, logical time model for distributed systems. The time model defines a total order of all messages in the system based on two partial order relations: messages sent before other messages ("=>") and strong causality ("~"). This ensures messages are delivered in a way that respects real-world causality. The time model can be implemented using message timestamps and delayed delivery to ensure the total order is maintained. This provides application programmers with a simple, linear view of time across a distributed system.
Review and Comparison of Tasks Scheduling in Cloud Computing ijfcstjournal
Recently, there has been a dramatic increase in the popularity of cloud computing systems that rent
computing resources on-demand, bill on a pay-as-you-go basis, and multiplex many users on the same
physical infrastructure. It is a virtual pool of resources which are provided to users via Internet. It gives
users virtually unlimited pay-per-use computing resources without the burden of managing the underlying
infrastructure. One of the goals is to use the resources efficiently and gain maximum profit. Scheduling is a
critical problem in Cloud computing, because a cloud provider has to serve many users in Cloud
computing system. So scheduling is the major issue in establishing Cloud computing systems. The
scheduling algorithms should order the jobs in a way where balance between improving the performance
and quality of service and at the same time maintaining the efficiency and fairness among the jobs. This
paper introduces and explores some of the methods provided for in cloud computing has been scheduled.
Finally the waiting time and time to implement some of the proposed algorithm is evaluated.
This document describes modeling and verifying a telecommunication application called Depannage using Live Sequence Charts (LSCs) and the Play-Engine tool. Depannage allows users to call for help from emergency services. The application is complex due to its distributed architecture, time constraints, and evolving components.
The authors specify Depannage using LSCs to describe component behaviors and interactions. Key components include Search, which locates emergency providers, and Users, which models user states. Universal LSCs define mandatory component behaviors, while existential LSCs define possible behaviors. The Play-Engine tool is used to simulate, animate, and formally verify the LSC specification.
The methodology captures requirements at a high level using
This document proposes ANGEL, an agent-based scheduling algorithm for real-time tasks in virtualized cloud environments. It employs a bidirectional announcement-bidding mechanism between agents to allocate tasks and dynamically provision resources. The mechanism consists of three phases: basic matching, forward announcement-bidding, and backward announcement-bidding. ANGEL also dynamically adds virtual machines to improve schedulability. Extensive experiments show ANGEL efficiently solves real-time task scheduling in clouds.
AVAILABILITY METRICS: UNDER CONTROLLED ENVIRONMENTS FOR WEB SERVICES ijwscjournal
Web Services technology has the potential to cater an enterprise’s needs, providing the ability to integrate
different systems and application types regardless of their platform, operating system, programming
language, or location. Web Services could be deployed in traditional, centralized or brokered client server
approach or they could be in peer to peer manner. Web Services could act as a server providing
functionality to a requester or act as a client, receiving functionality from any service. From the
performance perspective the availability of Web Services plays an important role among many parameters.
AVAILABILITY METRICS: UNDER CONTROLLED ENVIRONMENTS FOR WEB SERVICES ijwscjournal
Web Services technology has the potential to cater an enterprise’s needs, providing the ability to integrate different systems and application types regardless of their platform, operating system, programming language, or location. Web Services could be deployed in traditional, centralized or brokered client server approach or they could be in peer to peer manner. Web Services could act as a server providing functionality to a requester or act as a client, receiving functionality from any service. From the performance perspective the availability of Web Services plays an important role among many parameters.
AVAILABILITY METRICS: UNDER CONTROLLED ENVIRONMENTS FOR WEB SERVICES ijwscjournal
Web Services technology has the potential to cater an enterprise’s needs, providing the ability to integrate different systems and application types regardless of their platform, operating system, programming language, or location. Web Services could be deployed in traditional, centralized or brokered client server approach or they could be in peer to peer manner. Web Services could act as a server providing functionality to a requester or act as a client, receiving functionality from any service. From the performance perspective the availability of Web Services plays an important role among many parameters.
A New Way of Identifying DOS Attack Using Multivariate Correlation Analysisijceronline
This document summarizes a research paper that proposes a new method for identifying denial of service (DoS) attacks using multivariate correlation analysis (MCA). The method involves three main steps: 1) generating basic features from network traffic, 2) using MCA to extract correlations between features and generate triangle area maps, and 3) using an anomaly-based detection mechanism to distinguish attacks from normal traffic based on differences from pre-generated normal profiles. The researchers evaluate their method on the KDD Cup 99 dataset and achieve moderate detection performance. However, they identify issues related to differences in feature scales that reduce detection of some attacks. They propose using statistical normalization to address this.
DEVELOPING SCHEDULER TEST CASES TO VERIFY SCHEDULER IMPLEMENTATIONS IN TIMETR...ijesajournal
Despite that there is a “one-to-many” mapping between scheduling algorithms and scheduler
implementations, only a few studies have discussed the challenges and consequences of translating between
these two system models. There has been an argument that a wide gap exists between scheduling theory and
scheduling implementation in practical systems, where such a gap must be bridged to obtain an effective
validation of embedded systems. In this paper, we introduce a technique called “Scheduler Test Case”
(STC) aimed at bridging the gap between scheduling algorithms and scheduler implementations in singleprocessor embedded systems implemented using Time-Triggered Co-operative (TTC) architectures. We will
demonstrate how the STC technique can provide a simple and systematic way for documenting, verifying
(testing) and comparing various TTC scheduler implementations on particular hardware. However, STC is
a generic technique that provides a black-box tool for assessing and predicting the behaviour of
representative implementation sets of any real-time scheduling algorithm.
The document presents a framework for monitoring service systems from a language-action perspective. It formalizes interaction protocols, policies, and commitments using communicative acts to specify goals and monitors at multiple abstraction levels. This allows events from low-level message exchanges to be analyzed to understand compliance with higher-level policies. The framework is demonstrated with purchase order scenarios and is shown to provide effective monitoring of service interactions and processes.
Developing Scheduler Test Cases to Verify Scheduler Implementations In Time-T...ijesajournal
Despite that there is a “one-to-many” mapping between scheduling algorithms and scheduler implementations, only a few studies have discussed the challenges and consequences of translating between these two system models. There has been an argument that a wide gap exists between scheduling theory and scheduling implementation in practical systems, where such a gap must be bridged to obtain an effective validation of embedded systems. In this paper, we introduce a technique called “Scheduler Test Case” (STC) aimed at bridging the gap between scheduling algorithms and scheduler implementations in single-processor embedded systems implemented using Time-Triggered Co-operative (TTC) architectures. We will demonstrate how the STC technique can provide a simple and systematic way for documenting, verifying (testing) and comparing various TTC scheduler implementations on particular hardware. However, STC is a generic technique that provides a black-box tool for assessing and predicting the behaviour of representative implementation sets of any real-time scheduling algorithm.
FORMAL MODELING AND VERIFICATION OF MULTI-AGENTS SYSTEM USING WELLFORMED NETScsandit
Multi-agent systems are asynchronous and distributed computer systems. These characteristics make them also a discrete-event dynamic system. It is, therefore, important to analyze the behavior of such systems to ensure that they terminate correctly and satisfy other important properties. This paper presents a formal modeling and analysis of MAS, based on Well-formed Nets, in order to ensure the absence of any undesired or unexpected behavior. To validate our ontribution, we consider the timetable problem, which is a multi-agent resource allocation problem.
IRJET- Comparative Analysis between Critical Path Method and Monte Carlo S...IRJET Journal
This document compares the Critical Path Method (CPM) and Monte Carlo simulation for project scheduling. CPM uses deterministic activity durations to calculate the critical path and project duration. Monte Carlo simulation incorporates uncertainty by using three time estimates per activity - optimistic, pessimistic, and most likely - represented as probability distributions. It runs thousands of simulations to determine the likely project duration based on random sampling from these distributions. The document reviews literature on applying Monte Carlo simulation in construction projects. It then describes a study that uses both CPM and Monte Carlo simulation on a real construction project to compare the results and evaluate Monte Carlo simulation's usefulness for the construction industry.
The Geoquorum approach for implementing atomic read/write shaved memory in mobile ad hoc networks. This
problem in distributed computing is revisited in the new setting provided by the emerging mobile computing technology. A
simple solution tailored for use in ad hoc networks is employed as a vehicle for demonstrating the applicability of formal
requirements and design strategies to the new field of mobile computing. The approach of this paper is based on well
understood techniques in specification refinement, but the methodology is tailored to mobile applications and help designers
address novel concerns such as logical mobility, the invocations, specific conditions constructs
Formal Specification for Implementing Atomic Read/Write Shared Memory in Mobi...ijcsit
The Geoquorum approach for implementing atomic read/write shaved memory in mobile ad hoc networks. This
problem in distributed computing is revisited in the new setting provided by the emerging mobile computing technology. A
simple solution tailored for use in ad hoc networks is employed as a vehicle for demonstrating the applicability of formal
requirements and design strategies to the new field of mobile computing. The approach of this paper is based on well
understood techniques in specification refinement, but the methodology is tailored to mobile applications and help designers
address novel concerns such as logical mobility, the invocations, specific conditions constructs. The proof logic and
programming notation of mobile UNITY provide the intellectual tools required to carryout this task. Also, the quorum
systems are investigated in highly mobile networks in order to reduce the communication cost associated with each distributed
operation.
Survey on deep learning applied to predictive maintenance IJECEIAES
Prognosis health monitoring (PHM) plays an increasingly important role in the management of machines and manufactured products in today’s industry, and deep learning plays an important part by establishing the optimal predictive maintenance policy. However, traditional learning methods such as unsupervised and supervised learning with standard architectures face numerous problems when exploiting existing data. Therefore, in this essay, we review the significant improvements in deep learning made by researchers over the last 3 years in solving these difficulties. We note that researchers are striving to achieve optimal performance in estimating the remaining useful life (RUL) of machine health by optimizing each step from data to predictive diagnostics. Specifically, we outline the challenges at each level with the type of improvement that has been made, and we feel that this is an opportunity to try to select a state-of-the-art architecture that incorporates these changes so each researcher can compare with his or her model. In addition, post-RUL reasoning and the use of distributed computing with cloud technology is presented, which will potentially improve the classification accuracy in maintenance activities. Deep learning will undoubtedly prove to have a major impact in upgrading companies at the lowest cost in the new industrial revolution, Industry 4.0.
Minimum Process Coordinated Checkpointing Scheme For Ad Hoc Networks pijans
The wireless mobile ad hoc network (MANET) architecture is one consisting of a set of mobile hosts
capable of communicating with each other without the assistance of base stations. This has made possible
creating a mobile distributed computing environment and has also brought several new challenges in
distributed protocol design. In this paper, we study a very fundamental problem, the fault tolerance
problem, in a MANET environment and propose a minimum process coordinated checkpointing scheme.
Since potential problems of this new environment are insufficient power and limited storage capacity, the
proposed scheme tries to reduce the amount of information saved for recovery. The MANET structure used
in our algorithm is hierarchical based. The scheme is based for Cluster Based Routing Protocol (CBRP)
which belongs to a class of Hierarchical Reactive routing protocols. The protocol proposed by us is nonblocking coordinated checkpointing algorithm suitable for ad hoc environments. It produces a consistent
set of checkpoints; the algorithm makes sure that only minimum number of nodes in the cluster are
required to take checkpoints; it uses very few control messages. Performance analysis shows that our
algorithm outperforms the existing related works and is a novel idea in the field. Firstly, we describe an
organization of the cluster. Then we propose a minimum process coordinated checkpointing scheme for
cluster based ad hoc routing protocols.
With the emergence of virtualization and cloud computing technologies, several services are housed on virtualization platform. Virtualization is the technology that many cloud service providers rely on for efficient management and coordination of the resource pool. As essential services are also housed on cloud platform, it is necessary to ensure continuous availability by implementing all necessary measures. Windows Active Directory is one such service that Microsoft developed for Windows domain networks. It is included in Windows Server operating systems as a set of processes and services for authentication and authorization of users and computers in a Windows domain type network. The service is required to run continuously without downtime. As a result, there are chances of accumulation of errors or garbage leading to software aging which in turn may lead to system failure and associated consequences. This results in software aging. In this work, software aging patterns of Windows active directory service is studied. Software aging of active directory needs to be predicted properly so that rejuvenation can be triggered to ensure continuous service delivery. In order to predict the accurate time, a model that uses time series forecasting technique is built.
The adoption of cloud environment for various application uses has led to security and privacy concern of user’s data. To protect user data and privacy on such platform is an area of concern.
Many cryptography strategy has been presented to provide secure sharing of resource on cloud platform. These methods tries to achieve a secure authentication strategy to realize feature such as self-blindable access tickets, group signatures, anonymous access tickets, minimal disclosure of tickets and revocation but each one varies in realization of these features. Each feature requires different cryptography mechanism for realization. Due to this it induces computation complexity which affects the deployment of these models in practical application. Most of these techniques are designed for a particular application environment and adopt public key cryptography which incurs high cost due to computation complexity.
To address these issues this work present an secure and efficient privacy preserving of mining data on public cloud platform by adopting party and key based authentication strategy. The proposed SCPPDM (Secure Cloud Privacy Preserving Data Mining) is deployed on Microsoft azure cloud platform. Experiment is conducted to evaluate computation complexity. The outcome shows the proposed model achieves significant performance interm of computation overhead and cost.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
1. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
DOI : 10.5121/ijwmn.2015.7407 89
ACHIEVEMENT FOR WIRELESS
COMMUNICATION USING ADEQUATE
MLTI-THREDING ARCHITECTURES
1
Rajeev Ranjan, 2
Praveen Dhyani, 3
Rajeev Kumar
1
Department of Electronics & Communication, Birla Institute of Technology
International Centre, Muscat, Oman,
2
Department of Computer Science, Executive Director at Banasthali University Jaipur
Campus
3
Department of Electronics & Communication, Waljat College of Applied Sciences,
Muscat, Sultanate of Oman,
ABSTRACT
A major role is played in the layout and evaluation of any empirical wireless structure to manifest is the
goal of this paper that counterfeit mode architectures affect counterfeit conduct, regarding structure
accomplishment metrics, essentially and therefore, the excellent architecture should be explored in order to
accomplish the most accurate and reliable results. It is found that the most analytical factors it is found
that that actuate counterfeit mode accomplishment are counterfeit time, structure event organizing and
grade of adequate. It is, also, found that counterfeit time in relation to event existence in the real structure
along with the usage of modern architectural concepts such as multi-interweave technology complement
analytical issues too in the advancement of an adequate counterfeit organization for wireless
communications. In order to evaluate the above findings an extensive empirical review has been
demeanored analysising several distinct events counterfeit organizations towards presenting the relation
between channel designing collections, counterfeit time and structure accomplishment.
KEYWORDS
Counterfeit mode, multi-interweave, cellular structure, Calendar Queue (CQ)
1.INTRODUCTION
1.1 Simulating Wireless structures
Literature has been introduced by several organizations for simulating wireless structures have
been the counterfeit organizations to the real structure conduct is a major goal of the variation of
the counterfeit has to be as pragmatic as possible. The Physical time and the event existence in a
real structure have to be reflected pragmatically inside the counterfeit model. When two or more
episode is happening at the same time, the adequate is the most suitable approachology to mode
them. This is the most counterfeit mode architecture along with computing language is the most
analytical collection for the counterfeit organization developer. We are here to say in this paper,
empirical distinct event counterfeit organizations are analyses and expected towards layouting a
2. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
90
more pragmatic counterfeit environment. Here the presentation of the relation between channel
collections, organizing apparatus modeat s, counterfeit time and structure accomplishment for
layouting and evaluating cellular communications are also conferred. Now days we can see the
basic Mobile User (MU) services (episode) that are supported by a cellular structure are:
• New call acceptance
• Reallocation (handoff)
• User movement
• Call Termination
The counterfeit organization is consisting of four major factors that can be categorized as follows:
• MU services mode
• Operational parameters collection (e.g. Number of cells, Base station situations, channel
allocation scheme, etc.)
• Mathematical mode integration (propagation mode, statistical disseminates, signal
computations, etc.)
• Counterfeit time designing. It should be noticed that counterfeit of event existence over
counterfeit time has to reflect pragmatically the physical time of the structure under
investigation.
Finally, the MUs services have to be simulated based on the above counterfeit organization
factors.
1.2 Distinct Event Counterfeit (DES)
Here I can say the Distinct Event Counterfeit permodes the most known counterfeit
approachology especially for communication organizations. According to DES concept, episode
is being happened at distinct points in time within the counterfeit time. Counterfeit time is
moving onward based on the event channel. These episodes are permodeing the basic physical
structure events such as new call acceptance, etc. Each event is provoked by a time stamp that is
used for the event beheading at a later time. In the main projects of the event existence over
counterfeit time is defined by a scheduler that selects episode with minimum time stamp (max.
priority). Here the complete organizing process is based on a priority queue. DES organizations
can be considered as sequential (SDES) or parallel (PDES). SDES organizations such as ns-2 are
the most common among scientific association. In such an organization, the organizing apparatus
modeat can be analyzed in a three step cycle:
• DE queue: Elimination of an event with the minimum time stamp from the queue
• Execute: preparing of the DE queued event
• Enqueue: Insertion of a new provoked event in the queue
In ns-2, only one event can be executed at any given time. If two or more episode are scheduled
to take place (to execute) at the same time, their beheading is performed based on the first
scheduled – first dispatched manner and so the real structure conduct cannot be reflected
pragmatically.
The PDES organizations offer compelling convenience for speeding up the beheading time of a
convoluted structure modeat. In such organizations, several additional issues due to multiple
preparing sections’ existence have to be faced effectively such as development or
synchronization, load balance, etc. These organizations do not change the concept of the event
organizing apparatus modeat.
3. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
91
2. DES ORGANIZATION ADVANCEMENTS
2.1 DES countenance
An adequate DES organization has been offered several countenance and to amuse some
analytical conditions. These countenance and conditions can be interpreted as factors that affect
the DES organization conduct. The above factors are summarized as follows:
• Designing of the MUs services
• Reasoning and implementation of the real time to counterfeit time
• Organizing apparatus mode at (the most prominent of them being the Calendar Queue
(CQ) organizing)
• Adequate
Time in general is divided in three categories:
• Physical time (real time of the real structure)
• Wall-clock time (beheading time)
• Counterfeit time
According to, "Counterfeit time is defined as a totally arranged set of values where each
value permodes an imperative of time in the physical organization being modeled…” We
can see a major goal of a DES organization is the pragmatic perforation of the physical time into
counterfeit time as well as the more pragmatic organizing access.
2.2 Calendar Queue (CQ) organizing
CQ was first introduced by Brown R. This approach complements the most known organizing
apparatus mode at among the most popular DES organizations such as ns- 2(Berkeley), Ptolemy
II (Berkeley), Jist (Cornel University, USA), etc. the real Accomplishment enhancements for CQ
can be found. Each event is associated with a time stamp that defines its priority (in beheading
channel). A CQ can be enforced an array of lists where each list contains imminent episode. Here
I can see the list of N episode is partitioned to M shorter lists called Buckets that correlate to a
specific time range. Using eq.(1), the bucket number m(e) where an event e will occur at time t(e)
can be calculated
Where δ is a time resolution related constant. Let M=8, N=10, δ=1 and t (e) =3.52 (fig.1) for a
new event e. Using eq.(2), the bucket number for event e is m(e)=3.
Fig. 1. A CQ operation
4. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
92
2.3 Multi-Interweave Technology (MT)
The usual capability of a modern permodeing organization (OS) is the beheading of different
programs (utilization) at the "same time". Here we can see the real application beheading at the
same time requires at least an organization with N=P, where N are the utilization and P the
available development. In most cases, only one development is available and so the CPU time has
to be shared between the running utilization. This beheading apparatus modeat is called
interweave (multi interweave- MT). The traditional organizing apparatus modeat cannot be
reflected pragmatically enough a simultaneous event existence as a coeval (multi-interweave)
apparatus modeat does.
2.3.1 The JVM Example
Here we can see one of the most popular countenance of Java is the support of native multi-
interweave. The JVM restraints the MT environment. The most imp advice for multi interweave
capabilities of Java can be found. The OS faces JVM as a single application but within the JVM,
multiple Java utilization and/or multiple parts (segments) of one application can be executed
(fig.2) coevally. We can use or see in a single development device, the active channels are
executed with a high speed switching between them and so the impression of parallel beheading
is given. Through available approach of JVM, the priority level (1 to 10) of each channel can be
defined. When the channel priorities are equal, the CPU time is distributed in a fair manner.
The OS gives a time slice to JVM, and this time is redistributed to java utilization/channels inside
the JVM environment. The moment event organizing is also restraintled by the JVM. This
apparatus modeat defines the real-time order of channel beheading and can be categorized as non-
preemptive or preemptive. The current channel is running over forever and has to in mode the
scheduler explicitly if it is safe to start another waiting channel according to non-preemptive
organizing. We can use in preemptive, a channel is running for a specific CPU time-slice and then
the scheduler “preempts” it, (calling suspend ()), and resumes another channel for the next
available time-slice. In JVM, the beheading time of every channel (in case of equal priorities) is
equitable. Figure 2, illustrates three active channels that share a single development. The sleep ()
approach deactivates temporarily the current channel in order to give time for beheading of
another channel.
Fig. 2 Channel switching
5. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
93
2.3.2 MT organizing
While we can see the time slice given by JVM or by programmer to each channel (structure
event) beheading is less than the required computational time, an event interleaving is
accomplished. This technique reflects more pragmatically the user competition for accessing
common radio resources. Moreover, when a user is under preparing, the counterfeit time flows
also for the next upcoming user. Thus, the structure decisions are more sophisticated and
optimized to more user requests.
2.3.3 Background and Literature Work
Multi-core systems and clusters become an interesting and affordable plat mode for running
parallel development to accomplish high achievement computing for many utilization and
experiments.
Some examples include internet services, databases, scientific computing, and duplicate. This is
due to their scalability achievement/cost ratio.
There are two main approaches that support parallel computing via multi-core developments:
shared memory and distributed memory approaches. Thus, we will provide an overview of the
evolution of the two main approaches.
2.3.4 Shared Memory Approach
Shared memory based parallel programming models interconnect by sharing the data objects in
the worldwide address space. Shared memory models assume that all parallel events can access
all of memory. Reliability in the data need to be accomplished when different growths
communicate and share the same data item, this is done by using the cache coherence protocols
used by the parallel computer. All processes such as load and store for data carried out by the
automatically without direct intervention by the programmer. For shared memory based parallel
programming models, communication between parallel events is completed via a shared variable
state that must be carefully managed to confirm accurateness. Numerous synchronization
primitives such as locks or transactional memory are used to implement this organization. In this
approach a main memory is shared between all development components in a single address
space.
The dominances with using shared memory based parallel programming models are conferred
below.
• Shared memory based parallel programming models enable easy development of the
application more than distributed memory based multi developments.
• Shared memory based parallel programming models avoid the diversity of data items and
allows the programmer to not be concerned about the programming model's
responsibility.
• Shared memory based programming models offer better achievement than the distributed
memory based parallel programming models.
6. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
94
The disdominances with using the shared memory based parallel programming models are
described below.
• The fixtures requirements for the shared memory based parallel programming models are
very high, convoluted, and cost prohibitive.
• Shared memory parallel programming models often encounter data races and deadlocks
during the development of the utilization.
2.3.5 Distributed Memory Approach
This type of parallel programming approach allows communication between developments by
using the send/receive communication routines.
Message passing models avoids communications between developments based on shared/global
data. They are typically used to program clusters, wherein each development in the architecture
gets its own instance of data and instructions. The dominances of distributed memory based
programming models as follows:
• The fixtures requirement for the message passing models is low, less convoluted, and
comes at very low cost.
• Message passing models avoid the data races and as a con channel the programmer is
freed from using the locks.
The disdominances with distributed memory based parallel programming model are listed below:
• Message passing models in contrast encounter deadlocks during the development of
communications.
• Development of utilization on message passing models is hard and takes more time.
• The developer is responsible for establishing communication between developments.
• Message passing models are less achievement oriented and incur high communication
overheads.
3. EMPIRICAL DES MODELS
3.1 Structure designing
All the empirical models are perfectly based on the coeval concept that is enforced through Multi-
interweave. Here due to the nature of the event existence in the real structure, adequate offers a
chance to model more pragmatically the physical activities of the structure. Main Three different
architectures with increasing grade of adequate have been enforced and analyses in order to
examine the dependence of the results and counterfeit organization accomplishment from the
multi interweave usage. The counterfeit models are supported the four basic services for the MUs
as mentioned before.
The counterfeit model operation is mainly focused in channel allocation procedure which is
strongly deeply connected with new call acceptance and reallocation (hand off). This procedure is
also used in the case of an MU movement. Three circumstances must be satisfied for a effective
channel allocation:
• Channel availability
• Carrier to Noise Ratio (CNR) between MU and BS above a predefined threshold
• Carrier to Noise plus Interference Ratio (CNIR) above a predefined threshold
7. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
95
Additional criteria can be applied using different channel allocation strategies. The Dynamic
Channel Allocation (DCA) [32, 33-36] strategy has been used in our empirical models. The CNIR
ratio is derived from the following type:
(2)
I should say where n is the number of base stations and purchasers, ξi is the distortion due to
shadowing from user to base station, A is a proportional co adequate, Po is the transmitted
potential of a reference point, Pi is the transmitted potential of the i user and di is the distance
between MU i and reference BS.
3.2 Multi-interweave scenarios
The basic structure procedures such as new call acceptance (NC), reallocation (RC), MU
movement (MC) and call termination (FC) are enforced as channels within the JVM environment
and for all empirical models. Inside the counterfeit organization, seven entities complement the
basic factors. These factors are:
• Restraintler (clock) which restraints and synchronizes the whole counterfeit procedure
• Initialization Procedures. This procedure prepares initialization of each new counterfeit
step (e.g. define traffic conditions, initialize counters, etc)
• NC, RC, MC, FC which permode the four basic structure procedures respectively
• Termination Procedures. Actions after the completion of each counterfeit step (e.g.
compute statistical metrics, store data for the finished counterfeit step, etc)
Table 1, illustrates the enforced and simulated scenarios that are based on different channels of
channels and single code tasks (S=single code, T=channel). In these scenarios four, five and
seven channels have been used respectively.
Table 1. Enforced scenarios
Scenario 1 2 3
4 channels 5 channels 7 channels
1 Clock S T T
2 Init loop S S T
3 NC T T T
4 RC T T T
5 MC T T T
6 FC T T T
7 Termination Loop S S T
In the first scenario (fig. 3), only the basic structure procedures are enforced as channels. The
restraintler is working as a part of the main application channel and it is active until the
counterfeit time termination. When a new counterfeit step starts, the restraintler activates the
needed initializations and after the completion the four channels are activated. After completion,
each channel sends a signal to the restraintler. When four completion indicators are collected
from the restraintler, the final loop task is activated.
8. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
96
Fig. 3 The 4 channel scenario
The restraintler is converted by channel (fig. 4) in the case of second scenario. While the four
channels are executing from JVM, the correlating code within the application is blocked. The
restraintler defines the awaking channel between the main counterfeit factors. The last empirical
model (fig. 5) consists of seven channels, four channels for the basic structure procedures, one for
synchronization and two for supplementary tasks (init loop, final loop). All the enforced channels
are always active within the counterfeit time. Special purpose flags inside the body of each
channel and in channel with restraint indicators the correlating code is activated. The JVM offers
the various approaches for restraintling channels. Using these approach for starting/stopping
channels (instead of internal flags) compelling delays and in balance of the counterfeit models
can be produced.
Fig. 4 The 5 channel scenario
9. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
97
Fig. 5 The 7 channel scenario
4. DES ORGANIZATION EVALUATIONS
The major tasks judging accomplishment of a cellular structure are the new call acceptance and
handoff (reallocation). This accomplishment can be measured in terms of statistical metrics by
using blocking and dropping probability. When a new call acceptance is unsuccessful, then this
call is blocked. The blocking probability is calculated from the type:
In a handoff situation, if the structure cannot allocate a new channel for the moving MU, then,
this ongoing call is dropped. The correlating probability is calculated as follows:
For estimating more accurately the simulating results, Monte Carlo [50] beheadings have been
used.
10. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
98
5. EMPIRICAL RESULTS
Figures 6 and 7 show that by increasing channel number, structure accomplishment is augmented
0
0.05
0.1
0.15
0.2
0.25
0.3
0.35
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
4 thread
5 thread
7 thread
Fig. 6. Blocking anticipation for the involved Architectures
0
0.005
0.01
0.015
0.02
0.025
0.03
0.035
0.04
0.045
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Multi Therading
Clander Quene
Fig. 7. Dropping anticipation for the CQ and MT accesses
11. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
99
Tables 2 and 3 confirm the results that are illustrated in previous graphs (figures 8& 9)
Table 2 Mean values of STD and MEAN for blocking probability
Scenario Blocking Mean Value
STD MEAN
7 channels 0.044075 0.089216
5 channels 0.04387 0.080194
4 channels 0.044521 0.082127
Table 3 Mean values of STD and MEAN for dropping probability
Scenario Dropping Mean Values
STD MEAN
7 channels 0.025648 0.018489
5 channels 0.0357 0.022577
4 channels 0.033493 0.022738
Here I have shown intuitively, the enforced adequate is more pragmatic when the computational
time for a counterfeit step is remaining stable (equitable between tasks inside the step). Table 4,
shows the ratio std/mean of the counterfeit time duration (ms).
This ratio gives us the advice for the significance of the std of the counterfeit step duration and
thus the resulting balance. Data inside table 4, permode results based on Monte Carlo beheadings.
The mean values permode the ratio std/mean and the std permodes the standard deviation of the
resulted mean values.
Table 4. Counterfeit step duration
7 channels 5 channels 4 channels
Mean 0.055156 0.1479 0.20057333
Std 0.03353804 0.014321662 0.16271174
6. CONCLUSIONS
I have conferred in this paper, the con channel of modeled adequate level for the expected
counterfeit architectures of wireless communication organizations to counterfeit organization
conduct and accomplishment is conferred. Main and basic the concept of the counterfeit step
computational duration balance over counterfeit time is illustrated.
It is empirically implied that JVM is in the case of the multi-interweave scenario involving
implementation of counterfeit architecture is consisting of four channels and three simple non-
channel tasks, provides equitable time slices only to the four channels. The simple non-channel
tasks don’t participate this time sharing. The total computational is in time that is needed for each
counterfeit step is based on the completion of each individual fundamental such as channels and
simple tasks.
In the case of less than seven channels implementation the total time is asymmetrically allocated
to the factors and so no equitable conditions could be guaranteed as the main role of the mission.
When the mentioned factors are all channels, the time slicing that is provided by JVM is fairly
allocated among the active channels and so equitable conditions are created. The empirical results
are shown that there is high positive correlation between organization accomplishment and
counterfeit time balance implying pragmatic reflection of the Physical Structure Time.
12. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
100
There is no doubt that the JVM environment complements an effective device for developing a
multi interweaves environment and its pure uses. On the other hand, analytical drawbacks of the
JVM access (e.g. deadlocks, compelling delays due to channel priorities, synchronization
problems, etc) should be effectively faced by the developer. The major imminent research work
towards developing an adequate counterfeit model for wireless communication organizations is to
focus on balancing drawbacks and benefits of such a JVM based app.
REFERENCES
[1] Reuben Pasquini, Algorithms for improving the achievement of optimistic parallel counterfeit, PhD
dissertation, Purdue University, 1999
[2] Dirk Brade, A generalized process for the verification and validation of models and counterfeit
results, PhD. Dissertation, university of Bundeswehr Munchen, 2003
[3] Rimon Barr, An adequate, unifying access to counterfeit using virtual machines, Cornell University,
2004
[4] R. S. M. GOH and I. L-J THNG, MLIST: An adequate pending event set structure for distinct event
counterfeit, International Journal of Counterfeit Vol. 4 No. 5-6, 2003
[5] Gianni A. Di Caro, Analysis of counterfeit environments for mobile ad hoc networks, Technical
Report No. IDSIA-24-03, Dalle Molle Institute for Artificial Intelligence, 2003
[6] Thomas J. Schreiber, Daniel T. Brunner, Inside Distinct-Event Counterfeit Software: How It Works
and Why It Matters, Proceedings of the 1997 Winter Counterfeit Conference, 1997
[7] Jayadev Misra, Distributed Distinct-event Counterfeit, ACM, Computing Surveys, Vol. 18, No. 1,
March 1986
[8] Benno Jaap Overeinder, Distributed Event-driven Counterfeit, PhD dissertation, University of
Amsterdam, 2000
[9] Bruno R. Preis, Wayne M. Loucks, V. Carl Hamacher, A Unified Modeling Methodology for
Achievement Evaluation of Distributed Distinct Event Counterfeit Apparatus Structure S, the 1988
Winter Counterfeit Conference, 1988
[10] Kalyan S. Perumalla, Parallel and Distributed Counterfeit: Traditional Method and Recent Advances,
Proceedings of the 2006 Winter Counterfeit Conference, 2006
[11] Mathieu Lacage, Thomas R. Henderson, yet another Network Simulator, ACM 2006
[12] Rick Siow Mong Goh, Ian Li- Jin Thng, DSplay: An Adequate Dynamic Priority Queue Structure for
Distinct Event Counterfeit,
[13] Kehsiung Chung, Janche sang and Vernon rego, A Achievement Comparison of Event Calendar
Algorithms: an Empirical Access, Software—Practice and Experience, Vol. 23(10), 1107– 1138
(October 1993)
[14] Lucite Muliadi, Distinct Event Modeling In .tolemy II, University of California, Berkeley, 1999
[15] Valerie Naoumov, Thomas Gross Counterfeit of Large Ad Hoc Networks, MSWiM’03, September,
San Diego, California, USA, 2003
[16] Kevin Fall, Kannan Varadhan, The ns Manual, UC Berkeley, LBL, USC/ISI, and Xerox PARC,
January 14, 2007
[17] Stuart Kurkowski, Tracy Camp, and Michael Colagrosso, MANET Simulation Studies: The Current
State and New Simulation Tools, 2005
[18] Xiang Zeng, Rajive Bagrodia, Mario Gerla, “GloMoSim: A Library for Parallel Simulation of Large-
scale Wireless Networks”, Proceedings of the 12th Workshop on Parallel and Distributed Simulations,
1998
[19] Lokesh Bajaj, Mineo Takai, Rajat Ahuja, Rajive Bagrodia, “Simulation of Large-Scale
Heterogeneous Communication Systems”, Proceedings of IEEE Military Communications
Conference (MILCOM’99), 1999
[20] A. Boukerche, S. K. Das, A. Fabbri, “Swim Net: A Scalable Parallel Simulation Test bed for Wireless
and Mobile Networks”, Wireless Networks 7, 467–486, 2001
[21] Winston Liu, Changchun Chiang, Hsiao_Kuang Wu, Vikas Jha, Mario Gerla, Rajive Bagrodia,
“Parallel Simulation Environment for Mobile Wireless NetworkS”, Proceedings of the 1996 Winter
Simulation Conference, WSC’96, pp. 605- 612,1996
[22] Richard M. Fujimoto. Parallel and Distributed Simulation Systems. Parallel and Distributed
Computing. Wiley-Interscience, 2000.
13. International Journal of Wireless & Mobile Networks (IJWMN) Vol. 7, No. 4, August 2015
101
[23] Kevin Fall, Kannan Varadhan, (2007), The ns Manual, UC Berkeley, LBL, USC/ISI, and Xerox
PARC, January 14.
[24] Lukito Muliadi, (1999), Discrete Event Modeling In Ptolemy Ii, Department of Electrical Engineering
and Computer Science University of California, Berkeley.
[25] http://jist.ece.cornell.edu/javadoc/jist/runtime/Scheduler.Calendar.html
[26] Rick Siow Mong Goh, Ian Li- Jin Thng, (2004), DSplay: An Efficient Dynamic Priority Queue
Structure for Discrete Event Simulation, SimTech Simulation Technology and Training Conference,
Australia
[27] R. S. M. GOH and I. L-J THNG, (2003), MLIST: An Efficient Pending Event Set Structure For
Discrete Event Simulation, International Journal of Simulation Vol. 4 No. 5-6.
Author
Rajeev Ranjan received his B.E degree in Electronics and Telecommunication from Swami
Ramananad Teerth Marathwada University, Nanded, Maharashtra, India in 1999, and M.E
degree in Electronics and Communication from BIT Mesra, Ranchi, Jharkhand, India in
2002. He is currently pursuing PhD from the Department of Electronics & Communication,
in university of Banasthali. His research interest’s area is ad-hoc networks, algorithm
design in the wireless network and analog communication.
Dr. Praveen Dhyani is Professor of Computer Science and Executive Director at Banasthali
University Jaipur Campus. Previously he was the Rector of the Birla Institute of
Technology, MESRA; prior to which he held key academic cum administrative positions in
BITS Pilani, and BIT, MESRA. He has supervised doctoral research and has authored
research papers, technical reports, and international conference proceedings in diverse
fields of Computer Science and Information Technology. His R&D accomplishments
include development and national and international exhibit of robot, development of electronics devices to
aid foot drop patients, and development of voice operated wheel chair. He is presently member of the
academic and research regulatory bodies of the Banasthali University, and the departmental research
committee of IIS University, India.”
Dr. Rajeev Kumar is Assistant Professor in the Department of Electronics &
Communication Engineering at Waljat College of Applied Sciences, Muscat, Sultanate of
Oman since November 2010. Before joining the Waljat College he was Assistant Professor
in the Department of Applied Physics, Birla Institute of Technology, Mesra, Ranchi, India.
Rajeev Kumar obtained his Master’s degree in Physics from Indian Institute of Technology,
Madras, India and Ph. D. from University of Calcutta, Kolkata, India. His research interests
are in the areas of Experimental Plasma Physics, Spectral Analysis and Numerical Methods.