Assessing the performance and dependability of Web service systems is crucial for the development of today's applications. The performance and Fault Tolerance Mechanisms (FTMs) of composed service components are hard to measure at design time, because service instability is often caused by the nature of the network conditions. Using a real Internet environment for testing systems is difficult to set up and control. We have introduced a fault injection toolkit that emulates a WAN within a LAN environment between composed service components and offers full control over the emulated environment, in addition to the capability to inject network-related faults and application-specific faults. The toolkit also generates background workloads on the system under test so as to produce more realistic results. We describe an experiment, performed using our toolkit, that examines the impact of fault tolerance protocols deployed at a service client.
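The network-fault injection described above can be sketched at the message level (a hypothetical illustration, not the toolkit's actual API): each outgoing message is either dropped with a configurable probability or delivered after an emulated WAN latency.

```python
import random

class FaultInjector:
    """Sketch of a message-level fault injector emulating WAN conditions on a LAN."""

    def __init__(self, drop_prob=0.1, delay_ms=50, seed=None):
        self.drop_prob = drop_prob      # probability a message is dropped
        self.delay_ms = delay_ms        # emulated one-way WAN latency
        self.rng = random.Random(seed)

    def send(self, message):
        """Return (delivered, delay_ms) for one message."""
        if self.rng.random() < self.drop_prob:
            return (False, None)        # injected network fault: message lost
        return (True, self.delay_ms)    # delivered after the emulated latency

inj = FaultInjector(drop_prob=0.2, seed=42)
results = [inj.send(f"msg-{i}") for i in range(1000)]
delivered = sum(1 for ok, _ in results if ok)
print(delivered)  # roughly 800 of 1000 messages survive a 20% loss rate
```

A seeded generator makes fault scenarios reproducible across experiment runs, which is essential when comparing fault tolerance protocols under identical conditions.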
A Novel Approach for Efficient Resource Utilization and Trustworthy Web Service (CSCJournals)
Many Web services are expected to run with a high degree of security and dependability. To achieve this goal, it is essential to use a Web-services-compatible framework that tolerates not only crash faults but Byzantine faults as well, due to the untrusted communication environment in which Web services operate. In this paper, we describe the design and implementation of such a framework, called RET-WS (Resource Efficient and Trustworthy Execution - Web Service). RET-WS is designed to operate on top of the standard SOAP messaging framework for maximum interoperability, and provides a resource-efficient way to execute requests under Byzantine-fault-tolerant replication that is particularly well suited to services in which request processing is resource-intensive. Previous efforts took an all-active failure-masking approach in which all execution replicas execute all requests; at least 2t + 1 execution replicas are needed to mask t Byzantine-faulty ones. We describe an asynchronous protocol that provides resource-efficient execution by combining failure masking with imperfect failure detection and checkpointing. It is implemented as a pluggable module within the Axis2 architecture and, as such, requires minimal changes to Web applications. The core fault tolerance mechanisms in RET-WS are based on the well-known Castro and Liskov BFT algorithm, with some modifications for resource efficiency. Our performance measurements confirm that RET-WS incurs only moderate runtime overhead considering the complexity of the mechanisms.
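The 2t + 1 bound mentioned above can be illustrated with a small sketch (a hypothetical toy, not RET-WS code): with 2t + 1 replica responses, a majority vote still yields the correct result even if t replicas return arbitrary (Byzantine) values, because the correct value appears on at least t + 1 replicas.

```python
from collections import Counter

def masked_result(responses, t):
    """Majority-vote over replica responses; correct when at most t of
    len(responses) >= 2t + 1 replicas are Byzantine-faulty."""
    assert len(responses) >= 2 * t + 1, "need at least 2t + 1 replicas"
    value, count = Counter(responses).most_common(1)[0]
    # A correct value appears on >= t + 1 replicas, outnumbering any faulty value.
    if count >= t + 1:
        return value
    raise RuntimeError("no value reached the t + 1 quorum")

# 2t + 1 = 5 replicas, t = 2 faulty ones returning garbage:
print(masked_result(["ok", "ok", "ok", "bad", "worse"], t=2))  # -> ok
```

This is why reducing the number of replicas that actually execute each request (as RET-WS does) saves resources: all-active execution pays 2t + 1 times the processing cost for every request.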
A Web service is a software system designed to support interoperable machine-to-machine interaction over a network, i.e., between a client and a service. It has an interface described in a machine-processable format (WSDL). Other systems interact with the Web service in a manner prescribed by its description using SOAP messages, a protocol defined by the World Wide Web Consortium, typically conveyed over HTTP with an XML serialization in conjunction with other Web-related standards. Windows Communication Foundation (WCF) is a framework for building service-oriented applications. Using WCF, you can send data as asynchronous messages from one service endpoint to another. A service endpoint can be part of a continuously available service hosted by IIS, or it can be a service hosted in an application such as an .exe file. An endpoint can also be a client of a service that requests data from a service endpoint. The messages can be as simple as a single character or word sent as an XML document, or as complex as a stream of binary data. In this paper, we present the advantages of using WCF instead of classic Web services and other alternatives.
A Study on Hardware and Software Link Quality Metrics for Wireless Multimedia... (Eswar Publications)
Without an accurate evaluation of the transmission characteristics of wireless communication links, routing algorithms in wireless sensor networks may result in poor network performance. In order to avoid sending packets over unstable links, a routing protocol has to rely on reliable metrics to choose a better routing path, and better estimation of link reliability between neighboring nodes permits the selection of a more reliable route. Routing metrics play an important role, as they have a direct impact on the efficiency and robustness of routing protocols, and different routing metrics yield different performance when used to compute the weight of paths. This paper presents a study on various hardware and software link quality metrics that help network protocol designers choose an efficient Link Quality Estimator to develop reliable routing techniques for WMSNs. Additionally, a classification tree of different routing metrics is presented, which helps in understanding the strengths and weaknesses of these LQ metrics, thus enabling the designer of a routing protocol to make an informed choice.
A TWO-LEVELED WEB SERVICE PATH RE-PLANNING TECHNIQUE (ijwscjournal)
Web service paths can accomplish customer requirements. During the execution of a web service path, violations of service level agreements (SLAs) cause the path to be re-planned (healed). Existing re-planning techniques generally suffer from the shortcoming of ignoring the effect of requirement change. This paper proposes a two-leveled path re-planning technique (TLPRP), which offers the following features: (a) TLPRP is composed of both a meta level and a physical level. The meta-level re-planning senses environment parameters (e.g., requirement change and analysis/design errors) and re-plans the affected component paths produced by system design. (b) The physical-level re-planning algorithm is capable of web service path composition. (c) The re-planning algorithm embeds an access control policy that computes a success probability for every path, helping to avoid execution failures caused by web service access failures.
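The per-path success probability in feature (c) can be sketched as follows (a hypothetical illustration assuming independent service availabilities, not the TLPRP algorithm itself): the probability that a path executes successfully is the product of the access-success probabilities of its component services, and re-planning would prefer the path with the highest value.

```python
from math import prod

def path_success_probability(path, availability):
    """Success probability of a service path, assuming independent failures."""
    return prod(availability[s] for s in path)

def pick_best_path(paths, availability):
    """Choose the candidate path with the highest success probability."""
    return max(paths, key=lambda p: path_success_probability(p, availability))

# Hypothetical per-service access-success probabilities:
availability = {"A": 0.99, "B": 0.90, "C": 0.95, "D": 0.80}
paths = [["A", "B"], ["A", "C", "D"], ["C"]]
print(pick_best_path(paths, availability))  # -> ['C']
```

Note that a longer path is penalized multiplicatively, which matches the intuition that each extra service invocation is another chance for an access failure.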
In this paper, an application-based QoS evaluation approach for heterogeneous networks is proposed. It is possible to expand network capacity and coverage in a dynamic fashion by applying a heterogeneous wireless network architecture. However, the Quality of Service (QoS) evaluation of this type of network architecture is very challenging due to the presence of different communication technologies. Different communication technologies have different characteristics, and the applications that utilize them have unique QoS requirements. Although the communication technologies have different performance measurement parameters, the applications using these radio access networks have the same QoS requirements. As a result, it is easier to evaluate the QoS of the access networks and of the overall network configuration based on the performance of the applications running on them. Using such an application-based QoS evaluation approach, the heterogeneous nature of the underlying networks and the diversity of their traffic can be adequately taken into account. Through simulation studies, we show that the application-performance-based assessment approach facilitates better QoS management and monitoring of heterogeneous network configurations.
In this digital era, organizations and industries are replacing websites with web applications for many obvious reasons. With this transition towards web-based applications, organizations and industries find themselves surrounded by several threats and vulnerabilities, and one of the largest concerns is keeping their infrastructure safe from attacks and misuse. Web security entails a set of procedures and practices that apply several security principles at various layers to protect web servers, web users, and their surrounding environment. In this paper, we discuss several attacks that may affect web-based applications, namely SQL injection attacks, cookie poisoning, cross-site scripting, and buffer overflow. Additionally, we discuss detection and prevention methods for such attacks.
Application-Based QoS Evaluation of Heterogeneous Networks (csandit)
Heterogeneous wireless networks expand network capacity and coverage by leveraging the network architecture and resources in a dynamic fashion. However, the presence of different communication technologies makes the Quality of Service (QoS) evaluation, management, and monitoring of these networks very challenging. Each communication technology has its own characteristics, while the applications that utilize them have their specific QoS requirements. Although the communication technologies have different performance assessment parameters, the applications using these radio access networks have the same QoS requirements. As a result, it is easier to evaluate the QoS of the access networks and of the overall network configuration based on the performance of the applications running on them. Using such an application-based QoS evaluation approach, the heterogeneous nature of the underlying networks and the diversity of their traffic can be adequately taken into account. In this paper, we propose an application-based QoS evaluation approach for heterogeneous networks. Through simulation studies, we show that this assessment approach facilitates better QoS management and monitoring of heterogeneous network configurations.
A brief introduction to task communication in real-time operating systems. It covers inter-process communication concepts such as shared memory, message passing, and remote procedure call. Inter-process communication (IPC) refers specifically to the mechanisms an operating system provides to allow processes to manage shared data. Typically, applications using IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as is common in distributed computing.
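The client-server message-passing model above can be sketched with Python's multiprocessing queues (a generic illustration, not tied to any particular RTOS): a server process receives requests on one queue and sends replies on another.

```python
from multiprocessing import Process, Queue

def server(requests: Queue, replies: Queue):
    """Serve requests until a None sentinel arrives (message-passing IPC)."""
    while True:
        msg = requests.get()          # block until a client request arrives
        if msg is None:               # sentinel: shut down
            break
        replies.put(msg.upper())      # "process" the data and respond

if __name__ == "__main__":
    requests, replies = Queue(), Queue()
    p = Process(target=server, args=(requests, replies))
    p.start()
    requests.put("ping")              # client sends a request
    print(replies.get())              # server's reply: PING
    requests.put(None)                # ask the server to exit
    p.join()
```

The queues here play the role of OS-provided message channels; in an RTOS the same pattern appears as mailboxes or message queues with bounded blocking times.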
A novel token based approach towards packet loss control (eSAT Journals)
Abstract: Due to the advent of technologies like Web 2.0, Internet applications are able to support the transmission of multimedia content to end users. In such applications the transmission may result in packet loss. In this context, it is essential to have packet loss control mechanisms that can avoid deterioration of the quality of service while rendering media-rich content. The quality of service in this case depends on congestion control, and many protocols have been introduced to supplement the standard TCP protocol in order to control network congestion. CSFQ, which was built for fair service with an open-loop controller, has deteriorated in quality as P2P flows have come to dominate Internet traffic. A closed-loop congestion control scheme known as Token-Based Congestion Control (TBCC) was able to restrict resource consumption and provide the best service to end users; it monitors inter-domain traffic for trust relationships. Recently, Shi et al. presented a new mechanism known as Stable Token-Limited Congestion Control (STLCC) for controlling inter-domain congestion and improving network performance. In this paper we implement the STLCC mechanism and build a prototype application that demonstrates the proof of concept. The experimental results reveal that the proposed application is able to control network congestion by controlling packet loss, thus improving network performance.
Keywords: networking, packet loss control, datagram, packet, TCP, congestion control
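The token-based idea behind TBCC/STLCC can be illustrated with a classic token-bucket limiter (a generic sketch, not the STLCC protocol itself): packets are admitted only while tokens are available, which bounds the rate any one flow can consume and so limits its contribution to congestion.

```python
class TokenBucket:
    """Generic token-bucket rate limiter: admit a packet only if a token is available."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now, packets=1):
        """Refill tokens for the elapsed time, then try to admit `packets` packets."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packets:
            self.tokens -= packets
            return True             # admitted
        return False                # dropped: sender exceeded its token allowance

bucket = TokenBucket(rate=10, capacity=5)      # 10 tokens/s, bursts of up to 5
admitted = sum(bucket.allow(now=0.0) for _ in range(8))
print(admitted)  # -> 5  (only the burst capacity is admitted at t = 0)
```

STLCC extends this basic idea across domains, with tokens accounting for inter-domain traffic rather than a single flow's packets.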
PERFORMANCE ANALYSIS OF THE LINK-ADAPTIVE COOPERATIVE AMPLIFY-AND-FORWARD REL... (IJCNCJournal)
This paper analyzes the performance of cooperative amplify-and-forward (CAF) relay networks that employ adaptive M-ary quadrature amplitude modulation (M-QAM)/M-ary phase shift keying (M-PSK) digital modulation in Nakagami-m fading channels. In particular, we present and compare analyses of CAF relay networks with different cooperative diversity and opportunistic routing strategies such as regular Maximal Ratio Combining (MRC), Selection Diversity Combining (SDC), Opportunistic Relay Selection with Maximal Ratio Combining (ORS-MRC), and Opportunistic Relay Selection with Selection Diversity Combining (ORS-SDC). We advocate a simple yet unified numerical approach based on the marginal moment generating function (MGF) of the total received SNR to compute the average symbol error rate (ASER), mean achievable spectral efficiency, and outage probability performance metrics.
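The MGF-based computation referred to above follows the standard form of the MGF approach to error-rate analysis (shown here as a generic sketch, not the paper's exact derivation): for M-PSK, the ASER is obtained by averaging the conditional symbol error probability over the fading through the MGF of the combined SNR,

```latex
\mathrm{ASER} \;=\; \frac{1}{\pi}\int_{0}^{\frac{(M-1)\pi}{M}}
    M_{\gamma}\!\left(\frac{g_{\mathrm{PSK}}}{\sin^{2}\theta}\right)\,d\theta,
\qquad g_{\mathrm{PSK}} = \sin^{2}\!\left(\frac{\pi}{M}\right),
```

where \(M_{\gamma}(s) = \mathbb{E}\!\left[e^{-s\gamma}\right]\) is the MGF of the total received SNR \(\gamma\) at the output of the MRC or SDC combiner. The single finite-range integral is what makes this a "simple yet unified" numerical approach across combining strategies.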
A predictive framework for cyber security analytics using attack graphs (IJCNCJournal)
Security metrics serve as a powerful tool for organizations to understand the effectiveness of protecting computer networks. However, the majority of these measurement techniques do not adequately help corporations make informed risk management decisions. In this paper we present a stochastic security framework for obtaining quantitative measures of security by taking into account the dynamic attributes associated with vulnerabilities that can change over time. Our model is novel in that existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect the overall network security based on how the vulnerabilities are interconnected and leveraged to compromise the system. In order to have a more realistic representation of how the security state of the network varies over time, a nonhomogeneous model is developed which incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over a time period for a given network.
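The nonhomogeneous security-state model above can be sketched as a Markov chain whose transition matrix changes each day (a toy illustration with made-up probabilities, not the paper's estimated matrices): the state distribution after k days is obtained by applying each day's matrix in turn to the initial distribution.

```python
def step(dist, matrix):
    """One day of evolution: new_dist[j] = sum_i dist[i] * matrix[i][j]."""
    n = len(dist)
    return [sum(dist[i] * matrix[i][j] for i in range(n)) for j in range(n)]

def evolve(dist, daily_matrices):
    """Nonhomogeneous chain: apply a (possibly different) matrix per day."""
    for m in daily_matrices:
        dist = step(dist, m)
    return dist

# Toy 2-state model: [secure, compromised]. Exploit availability grows with
# vulnerability age, so the daily compromise probability rises over time.
days = [[[1 - 0.01 * k, 0.01 * k],
         [0.0, 1.0]] for k in range(1, 4)]
final = evolve([1.0, 0.0], days)
print(round(final[1], 4))  # probability of compromise after 3 days
```

A time-homogeneous chain would reuse one matrix for every day; making the matrix a function of vulnerability age is exactly what the nonhomogeneous model adds.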
Advancements in Complementary Metal Oxide Semiconductor (CMOS) technology have enabled Wireless Sensor Networks (WSNs) to gather, process, and transport multimedia (MM) data as well, rather than being limited to ordinary scalar data. This new generation of WSN is called Wireless Multimedia Sensor Networks (WMSNs). Better and relatively cheaper sensors - able to sense both scalar and multimedia data, with more advanced functionality such as handling rather intense computations easily - have sprung up. In this paper, the applications, architectures, challenges, and issues faced in the design of WMSNs are explored. Security and privacy issues, overall requirements, solutions proposed and implemented so far, some of the successful achievements, and other related work in the field are also highlighted. Open research areas are pointed out, and a few solutions are suggested for still-persistent problems which, to the best of our knowledge, have not been explored yet.
PERFORMANCE ASSESSMENT OF CHAOTIC SEQUENCE DERIVED FROM BIFURCATION DEPENDENT... (IJCNCJournal)
In CDMA systems, m-sequences and Gold codes are often utilized for spreading-despreading and scrambling-descrambling operations. In a previous work, a design framework was created for generating a large family of codes from the logistic map which have autocorrelation and cross-correlation comparable to m-sequences and Gold codes. The purpose of this work is to evaluate the performance of these chaotic codes in a CDMA environment. In the bit error rate (BER) simulation, matched-filter, decorrelator, and MMSE receivers have been utilized. The received signal was modelled for a synchronous CDMA uplink for simplicity. An Additive White Gaussian Noise channel model was assumed for the simulation.
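Generating a binary spreading code from the logistic map can be sketched as follows (a generic construction; the paper's actual design framework and thresholding may differ): iterate x_{n+1} = r·x_n·(1 - x_n) in the fully chaotic regime (r = 4) and binarize each sample against a threshold to obtain +/-1 chips.

```python
def logistic_code(x0, length, r=4.0, threshold=0.5, skip=100):
    """Binary chip sequence from the logistic map x_{n+1} = r*x*(1-x)."""
    x = x0
    for _ in range(skip):          # discard transient iterations
        x = r * x * (1.0 - x)
    chips = []
    for _ in range(length):
        x = r * x * (1.0 - x)
        chips.append(1 if x >= threshold else -1)   # map to +/-1 chips
    return chips

code_a = logistic_code(0.123, 63)
code_b = logistic_code(0.456, 63)   # a different seed yields a different code
print(code_a[:8])
```

Sensitivity to the initial condition x0 is what gives a large family of codes: every distinct seed produces a distinct, reproducible sequence.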
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ... (IJCNCJournal)
Along with limited processing and storage capacity, each sensor in a wireless sensor network (WSN) is equipped with a limited power source that is very difficult to replace in most application environments. Improving energy savings in applications for wireless sensor networks is therefore necessary. In this paper, we focus on energy savings in applications for wireless sensor networks with time-critical requirements. Our paper provides an accompanying analysis of advanced energy-saving techniques that optimize energy efficiency together with data transmission. Moreover, we propose improvements that increase energy savings in applications for wireless sensor networks with time-critical requirements (LEACH improvements). Simulation results show that our proposed protocol performs significantly better than LEACH with respect to the formation of clusters in each round, the average power, the number of nodes alive, and the average total data received at base stations.
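The LEACH baseline referenced above elects cluster heads probabilistically: in round r, an eligible node becomes a cluster head if a random draw falls below the threshold T(n) = p / (1 - p·(r mod 1/p)), where p is the desired cluster-head fraction. A small sketch of this standard election rule (not the paper's improved protocol):

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head threshold T(n) for round r with CH fraction p."""
    period = round(1.0 / p)                    # rounds per election epoch
    return p / (1.0 - p * (r % period))

def elect_cluster_heads(nodes, p, r, rng):
    """Eligible nodes become cluster heads with probability T(n)."""
    t = leach_threshold(p, r)
    return [n for n in nodes if rng.random() < t]

rng = random.Random(7)
heads = elect_cluster_heads(list(range(100)), p=0.05, r=0, rng=rng)
print(len(heads))  # about 5 of 100 nodes elect themselves in round 0
```

The threshold grows as the epoch progresses so that nodes which have not yet served get a rising chance, rotating the energy-expensive cluster-head role evenly.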
As Wireless Sensor Networks penetrate the industrial domain, many research opportunities are emerging. One essential and challenging application is node localization. A feed-forward neural network based methodology is adopted in this paper, using the Received Signal Strength Indicator (RSSI) values of the anchor node beacons. The number of anchor nodes and their configuration have an impact on the accuracy of the localization system, which is also addressed in this paper. Five different training algorithms are evaluated to find the one that gives the best result. The multi-layer perceptron (MLP) neural network model was trained using Matlab. In order to evaluate the performance of the proposed method in real time, the resulting model was then implemented on an Arduino microcontroller. With four anchor nodes, an average 2D localization error of 0.2953 m has been achieved with a 12-12-2 neural network structure. The proposed method can also be implemented on any other embedded microcontroller system.
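As a simpler baseline for the RSSI-based localization described above, RSSI can be converted to distance with the log-distance path-loss model (a standard textbook model shown for illustration; the paper instead trains an MLP directly on raw RSSI values): RSSI(d) = A - 10·n·log10(d), where A is the RSSI at 1 m and n is the path-loss exponent.

```python
from math import log10

def rssi_to_distance(rssi, a=-40.0, n=2.0):
    """Invert RSSI = A - 10*n*log10(d) to estimate the distance d in metres."""
    return 10.0 ** ((a - rssi) / (10.0 * n))

def distance_to_rssi(d, a=-40.0, n=2.0):
    """Expected RSSI (dBm) at distance d metres from the anchor."""
    return a - 10.0 * n * log10(d)

# An anchor heard at -60 dBm is about 10 m away
# (with A = -40 dBm at 1 m and a free-space-like exponent n = 2).
print(rssi_to_distance(-60.0))  # -> 10.0
```

Learned models such as the paper's MLP tend to outperform this closed form indoors, where multipath makes the effective exponent n vary with position.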
PERFORMANCE EVALUATION OF WIRELESS SENSOR NETWORK UNDER HELLO FLOOD ATTACK (IJCNCJournal)
Wireless sensor networks (WSNs) are used in many fields. Such a network consists of tiny, lightweight sensor nodes and is largely used to scan, detect, or monitor environments. Since these sensor nodes are tiny and lightweight, they impose limitations on resources such as power usage, task processing, and radio frequency range. These limitations leave the network vulnerable to many different types of attacks, such as the hello flood attack, black hole, Sybil attack, sinkhole, and many more. Among these, the hello flood attack is one of the most important. In this paper, we analyze the performance of the hello flood attack and compare network performance as the number of attackers increases. Network performance is evaluated by modifying the ad-hoc on-demand distance vector (AODV) routing protocol using the NS2 simulator. It has been tested under different scenarios (no attacker, a single attacker, and multiple attackers) to determine how network performance changes. The simulation results show that as the number of attackers increases, performance in terms of throughput and delay changes accordingly.
Sensor nodes are highly mobile, which makes the applications running on them face network-related problems such as node failure, link failure, network-level disconnection, and scarcity of resources. Node failures and network faults need to be monitored continuously by supervising the network status, especially for critical applications like health monitoring systems. We propose a Node Monitoring Protocol (NMP) that monitors node health using agents and ensures that each node receives the promised quality of service. These nodes sense the environment and communicate important data to the sink or base station. To establish the correct event time, the nodes need to be synchronized with a global clock; therefore, time synchronization is a very important parameter. We have built a simulation environment for validating the Node Monitoring Protocol (NMP) to assess the reliability of health monitoring systems. The formal Specification and Description Language (SDL) tool has been used to validate NMP at design time in order to increase the confidence and efficiency of the system.
Peer-to-peer streaming technology has become one of the major Internet applications, as it offers the opportunity of broadcasting high-quality video content to a large number of peers at low cost. It is widely accepted that with efficient utilization of the peers' and the server's upload capacities, peers can enjoy watching a high-bit-rate video with minimal end-to-end delay. In this paper, we present a practical scheduling algorithm that works in the challenging condition where no spare capacity is available, i.e., it maximally utilizes the resources and broadcasts the maximum streaming rate. Each peer contacts only a small number of neighbours in the overlay network and autonomously subscribes to sub-streams according to a budget model, in such a way that the number of peers forwarding exactly one sub-stream is maximized. The hop-count delay is also taken into account to construct short-depth trees. Finally, we show through simulation that peers dynamically converge to an efficient overlay structure with a short hop-count delay. Moreover, the proposed scheme performs well in the homogeneous case and outperforms SplitStream in all simulated scenarios.
These days, considering the expansion of networks, the dissemination of information has become a significant topic for researchers. In social networks, beyond social structures and people's influence on one another, examples include increasing sales profit, publishing news or rumors, and spreading or diffusing an idea. In social societies people affect each other, and when an individual joins a group, his friends may join that group as well. When publishing a piece of news, independent of its nature, there are different ways to spread it. Since information is not always suitable and positive, this article introduces immunization mechanisms against such information. By immunization we mean slowing the publication of such information in the network; we therefore try to slow down the spread of information or even stop it. By comparing the presented immunization methods, and by introducing a rate-delay parameter, the methods were evaluated and the most effective immunization method was identified. Among the existing and recommended immunization methods, the recommended methods also play an effective role in preventing the spread of malicious rumors.
HIDING A MESSAGE IN MP3 USING LSB WITH 1, 2, 3 AND 4 BITS (IJCNCJournal)
Steganography is the art of hiding information in ways that prevent the detection of hidden messages. This paper presents a new method which randomly selects positions in an MP3 file to hide a secret text message using the Least Significant Bit (LSB) technique. The secret text message is delimited by a unique signature or key marking its start and end locations. The methodology embeds one, two, three, or four bits of the secret message per cover byte of the MP3 file using LSB techniques. The evaluation and performance methods are based on robustness (BER and correlation), imperceptibility (PSNR), and hiding capacity (the ratio between the sizes of the text message and the MP3 cover) indicators. The experimental results show that the new method is more secure. Moreover, the contribution of this paper is the provision of a robustness-based classification of LSB steganography models depending on their occurrence in the embedding position.
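The LSB embedding itself can be sketched as follows (a generic byte-level illustration on raw bytes, not the paper's MP3-frame-aware, randomized-position method): each cover byte has its k least significant bits replaced with k bits of the secret message, and extraction simply reads them back.

```python
def embed_lsb(cover: bytes, secret_bits: str, k: int = 1) -> bytes:
    """Replace the k least significant bits of each cover byte with secret bits."""
    assert len(secret_bits) <= k * len(cover), "cover too small for the message"
    out = bytearray(cover)
    for i in range(0, len(secret_bits), k):
        chunk = secret_bits[i:i + k].ljust(k, "0")       # pad the final chunk
        byte_index = i // k
        out[byte_index] = (out[byte_index] & ~((1 << k) - 1)) | int(chunk, 2)
    return bytes(out)

def extract_lsb(stego: bytes, nbits: int, k: int = 1) -> str:
    """Read back nbits of hidden data, k bits per stego byte."""
    bits = "".join(format(b & ((1 << k) - 1), f"0{k}b") for b in stego)
    return bits[:nbits]

cover = bytes(range(64))                     # stand-in for MP3 audio bytes
secret = "0100100001101001"                  # the bits of "Hi"
stego = embed_lsb(cover, secret, k=2)
print(extract_lsb(stego, len(secret), k=2))  # -> 0100100001101001
```

Raising k from 1 to 4 increases capacity fourfold but perturbs each cover byte by up to 2^k - 1, which is exactly the capacity-versus-imperceptibility (PSNR) trade-off the paper evaluates.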
In this paper, an application-based QoS evaluation approach for heterogeneous networks is proposed.It is possible to expand the network capacity and coverage in a dynamic fashion by applying heterogeneous wireless network architecture. However, the Quality of Service (QoS) evaluation of this type of network architecture is very challenging due to the presence of different communication technologies. Different communication technologies have different characteristics and the applications that utilize them have unique QoS requirements. Although, the communication technologies have different performance measurement parameters, the applications using these radio access networks have the same QoS requirements. As a result, it would be easier to evaluate the QoS of the access networks and the overall network configuration based on the performance of applications running on them. Using such applicationbased QoS evaluation approach, the heterogeneous nature of the underlying networks and the diversity of their traffic can be adequately taken into account. Through simulation studies, we show that the application performance based assessment approach facilitates better QoS management and monitoring of heterogeneous network configurations.
In this digital era, organizations and industries are moving towards replacing websites with web applications for many obvious reasons. With this transition towards web-based applications, organizations and industries find themselves surrounded by several threats and vulnerabilities. One of the largest concerns is keeping their infrastructure safe from attacks and misuse. Web security entails applying a set of procedures and practices, by applying several security principles at various layers to protect web servers, web users, and their surrounding environment. In this paper, we will discuss several attacks that may affect web-based applications namely: SQL injection attacks, cookie poisoning, cross-site scripting, and buffer overflow. Additionally, we will discuss detection and prevention methods from such attacks.
Application-Based QoS Evaluation of Heterogeneous Networks csandit
Heterogeneous wireless networks expand the network capacity and coverage by leveraging the
network architecture and resources in a dynamic fashion. However, the presence of different
communication technologies makes the Quality of Service (QoS) evaluation, management, and
monitoring of these networks very challenging. Each communication technology has its own
characteristics while the applications that utilize them have their specific QoS requirements.
Although the communication technologies have different performance assessment parameters,
the applications using these radio access networks have the same QoS requirements. As a
result, it would be easier to evaluate the QoS of the access networks and the overall network
configuration based on the performance of applications running on them. Using such
application-based QoS evaluation approach, the heterogeneous nature of the underlying
networks and the diversity of their traffic can be adequately taken into account. In this paper,
we propose an application-based QoS evaluation approach for heterogeneous networks.
Through simulation studies, we show that this assessment approach facilitates better QoS
management and monitoring of heterogeneous network configurations.
A brief introduction to task communication in real-time operating systems. It covers inter-process communication concepts such as shared memory, message passing, and remote procedure call. Inter-process communication (IPC) refers specifically to the mechanisms an operating system provides to allow processes to manage shared data. Typically, applications using IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing.
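The two IPC styles mentioned above can be sketched together in a few lines. The toy client-server exchange below uses Python's multiprocessing module: a pipe for message passing and a shared-memory counter; the names `server` and `demo` are illustrative, not from the text:

```python
from multiprocessing import Process, Pipe, Value

def server(conn, counter):
    # Server side: receive a client request over the pipe (message passing),
    # bump a shared-memory counter, and send a reply back.
    request = conn.recv()
    with counter.get_lock():
        counter.value += 1
    conn.send(request.upper())
    conn.close()

def demo():
    parent, child = Pipe()          # bidirectional message-passing channel
    hits = Value("i", 0)            # shared-memory integer, visible to both
    p = Process(target=server, args=(child, hits))
    p.start()
    parent.send("ping")             # client request
    reply = parent.recv()           # server response
    p.join()
    return reply, hits.value

if __name__ == "__main__":
    print(demo())                   # ('PING', 1)
```

In a real RTOS the same roles are played by message queues, mailboxes, and shared memory regions guarded by semaphores.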
A novel token based approach towards packet loss control eSAT Journals
Abstract Due to the advent of technologies like Web 2.0, Internet applications are able to support transmission of multimedia content to end users. In such applications the transmission may result in packet loss. In this context, it is essential to have packet loss control mechanisms that can avoid deterioration of quality of service while rendering media-rich content. The quality of service in this case depends on congestion control. Many protocols have been introduced to supplement the standard TCP protocol in controlling network congestion. CSFQ, which was built for fair service with an open-loop controller, began to deteriorate in quality as P2P flows came to dominate Internet traffic. One closed-loop congestion control mechanism, known as Token-Based Congestion Control (TBCC), was able to restrict resource consumption and provide the best service to end users. It monitors inter-domain traffic for trust relationships. Recently, Shi et al. presented a new mechanism known as Stable Token-Limited Congestion Control (STLCC) for controlling inter-domain congestion and improving network performance. In this paper we implement the STLCC mechanism. We built a prototype application that demonstrates the proof of concept. The experimental results revealed that the proposed application is able to control network congestion by controlling packet loss, thus improving network performance. Keywords – Networking, packet loss control, datagram, packet, TCP, congestion control
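The token idea underlying TBCC and STLCC can be illustrated with the classic token-bucket limiter: traffic is forwarded only while tokens remain, so a sender cannot exceed its allotted rate. This is a generic sketch of the mechanism family, not the STLCC protocol itself, and the rate and capacity values are arbitrary:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: packets pass only while tokens
    remain; excess traffic is rejected instead of queued."""
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # spend one token per packet
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
t0 = time.monotonic()
sent = sum(bucket.allow(t0) for _ in range(25))  # burst of 25 packets at t0
print(sent)  # 10 pass (the burst capacity); the other 15 are dropped
```

Inter-domain schemes like STLCC layer trust and token accounting on top of this basic rate-limiting primitive.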
PERFORMANCE ANALYSIS OF THE LINK-ADAPTIVE COOPERATIVE AMPLIFY-AND-FORWARD REL... IJCNCJournal
This paper analyzes the performance of cooperative amplify-and-forward (CAF) relay networks that
employ adaptive M-ary quadrature amplitude modulation (M-QAM)/M-ary phase shift keying (M-PSK)
digital modulation techniques in Nakagami-m fading channels. In particular, we present and compare the
analysis of CAF relay networks with different cooperative diversity and opportunistic routing strategies
such as regular Maximal Ratio Combining (MRC), Selection Diversity Combining (SDC), Opportunistic
Relay Selection with Maximal Ratio Combining (ORS-MRC) and Opportunistic Relay Selection with
Selection Diversity Combining (ORS-SDC). We advocate a simple yet unified numerical approach based on
the marginal moment generating function (MGF) of the total received SNR to compute the average symbol
error rate (ASER), mean achievable spectral efficiency, and outage probability performance metrics.
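For reference, the MGF-based approach alluded to above typically reduces the ASER to a single finite-range integral of the moment generating function $M_{\gamma}(s) = \mathbb{E}[e^{-s\gamma}]$ of the combined SNR $\gamma$. For M-PSK, for instance, the standard textbook form (not this paper's specific derivation) is:

```latex
\mathrm{ASER}_{M\text{-PSK}}
  = \frac{1}{\pi} \int_{0}^{\frac{(M-1)\pi}{M}}
    M_{\gamma}\!\left(\frac{\sin^{2}(\pi/M)}{\sin^{2}\theta}\right)\, d\theta
```

The appeal of the approach is that only $M_{\gamma}$ of the total received SNR is needed, so different combining strategies (MRC, SDC, ORS variants) change the MGF but not the integral's structure.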
A predictive framework for cyber security analytics using attack graphs IJCNCJournal
Security metrics serve as a powerful tool for organizations to understand the effectiveness of protecting computer networks. However, the majority of these measurement techniques do not adequately help corporations make informed risk management decisions. In this paper we present a stochastic security framework for obtaining quantitative measures of security by taking into account the dynamic attributes associated with vulnerabilities that can change over time. Our model is novel in that existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect the overall network security based on how the vulnerabilities are interconnected and leveraged to compromise the system. In order to have a more realistic representation of how the security state of the network varies over time, a nonhomogeneous model is developed which incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over a time period for a given network.
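The nonhomogeneous idea described above can be sketched numerically: the transition matrix changes each day as a function of vulnerability age. The rates below are toy numbers chosen for illustration, not the Frei lifecycle parameters the paper estimates:

```python
import numpy as np

# States of a vulnerability: 0 = not exploited, 1 = exploited, 2 = patched.
# In a nonhomogeneous chain the transition matrix changes each day; here
# exploit likelihood decays with vulnerability age and patch likelihood
# grows (illustrative rates only).
def daily_matrix(age):
    p_exploit = 0.10 * np.exp(-age / 30)  # older vuln, fewer fresh exploits
    p_patch = min(0.5, 0.01 * age)        # patches become available over time
    return np.array([
        [1 - p_exploit - p_patch, p_exploit,   p_patch],
        [0.0,                     1 - p_patch, p_patch],
        [0.0,                     0.0,         1.0],    # patched is absorbing
    ])

state = np.array([1.0, 0.0, 0.0])         # day 0: unexploited, unpatched
for age in range(90):                      # evolve the distribution over 90 days
    state = state @ daily_matrix(age)

print(state.round(3), round(state.sum(), 6))
```

Multiplying a different matrix each day (rather than powering one fixed matrix) is exactly what distinguishes the nonhomogeneous model from a homogeneous Markov chain.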
Advancements in Complementary Metal Oxide Semiconductor (CMOS) technology have enabled Wireless Sensor Networks (WSNs) to gather, process and transport multimedia (MM) data as well, no longer being limited to handling ordinary scalar data. This new generation of WSN is called Wireless Multimedia Sensor Networks (WMSNs). Better and yet relatively cheaper sensors have sprung up – sensors that are able to sense both scalar data and multimedia data, with more advanced functionality such as handling rather intense computations easily. In this paper, the applications, architectures, challenges and issues faced in the design of WMSNs are explored. Security and privacy issues, overall requirements, proposed and implemented solutions so far, some of the successful achievements and other related works in the field are also highlighted. Open research areas are pointed out and a few solutions are suggested for the still-persistent problems which, to the best of my knowledge, have not been explored so far.
PERFORMANCE ASSESSMENT OF CHAOTIC SEQUENCE DERIVED FROM BIFURCATION DEPENDENT... IJCNCJournal
In CDMA systems, m-sequences and Gold codes are often utilized for spreading-despreading and
scrambling-descrambling operations. In a previous work, a design framework was created for generating
a large family of codes from the logistic map, which have autocorrelation and cross-correlation comparable
to m-sequences and Gold codes. The purpose of this work is to evaluate the performance of these chaotic
codes in a CDMA environment. In the bit error rate (BER) simulation, matched filter, decorrelator and
MMSE receivers have been utilized. The received signal was modelled for a synchronous CDMA uplink for
simulation simplicity. An Additive White Gaussian Noise channel model was assumed for the
simulation.
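The logistic-map construction referred to above can be sketched in a few lines: iterate the map and threshold the orbit into a bipolar chip sequence. The seed, map parameter, and transient length below are illustrative choices, not the parameters of the cited design framework:

```python
import numpy as np

def logistic_code(length, x0=0.3, r=4.0, skip=100):
    """Generate a bipolar spreading code by iterating the logistic map
    x_{n+1} = r*x*(1-x) and thresholding at 0.5 (a common construction;
    x0, r, and the transient skip are illustrative choices)."""
    x = x0
    for _ in range(skip):            # discard the initial transient
        x = r * x * (1 - x)
    chips = np.empty(length)
    for i in range(length):
        x = r * x * (1 - x)
        chips[i] = 1.0 if x >= 0.5 else -1.0
    return chips

code = logistic_code(1024)
# Normalized autocorrelation: exactly 1 at zero lag, small elsewhere.
r0 = np.dot(code, code) / len(code)
r1 = np.dot(code[:-1], code[1:]) / (len(code) - 1)
print(r0, abs(r1) < 0.2)
```

Different seeds x0 yield different codes from the same map, which is how the framework produces a large code family.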
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ... IJCNCJournal
Along with limited processing and storage capacity, each sensor in a wireless sensor network (WSN) is equipped
with a limited power source that is very difficult to replace in most application environments. Improving
energy savings in applications for wireless sensor networks is therefore necessary. In this paper, we mainly focus
on energy consumption savings in wireless sensor network applications with time-critical requirements. Our
paper provides an accompanying analysis of advanced energy-saving techniques that optimize
energy efficiency together with data transmission. Moreover, we propose improvements to
increase energy savings in wireless sensor network applications with time-critical requirements (LEACH
improvements). Simulation results show that our proposed protocol performs significantly better than LEACH in terms of
the formation of clusters in each round, the average power, the number of nodes alive, and the average total
data received at base stations.
As Wireless Sensor Networks are penetrating into the industrial domain, many research opportunities are emerging. One such essential and challenging application is that of node localization. A feed-forward neural network based methodology is adopted in this paper. The Received Signal Strength Indicator (RSSI) values of the anchor node beacons are used. The number of anchor nodes and their configuration have an impact on the accuracy of the localization system, which is also addressed in this paper. Five different training algorithms are evaluated to find the training algorithm that gives the best result. The multi-layer perceptron (MLP) neural network model was trained using Matlab. In order to evaluate the performance of the proposed method in real time, the model obtained was then implemented on the Arduino microcontroller. With four anchor nodes, an average 2D localization error of 0.2953 m has been achieved with a 12-12-2 neural network structure. The proposed method can also be implemented on any other embedded microcontroller system.
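To make the 12-12-2 structure concrete, the forward pass of such a network is sketched below in Python rather than Matlab. The weights here are random placeholders purely to show the data flow; in the paper they would come from training on surveyed RSSI data, and the input scaling is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 12-12-2 feed-forward network: 12 RSSI-derived inputs,
# one 12-neuron hidden layer (tanh), 2 outputs (x, y position estimate).
# Random weights only demonstrate the forward pass, not a trained model.
W1 = rng.normal(size=(12, 12)); b1 = np.zeros(12)
W2 = rng.normal(size=(12, 2));  b2 = np.zeros(2)

def locate(rssi):
    h = np.tanh(rssi @ W1 + b1)    # hidden layer activation
    return h @ W2 + b2             # linear output layer: (x, y)

rssi_sample = rng.uniform(-90, -40, size=12)  # synthetic dBm readings
xy = locate((rssi_sample + 90) / 50)          # simple [0, 1] input scaling
print(xy.shape)  # (2,)
```

A forward pass this small involves only two matrix-vector products and a tanh, which is why the trained model fits comfortably on an Arduino-class microcontroller.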
PERFORMANCE EVALUATION OF WIRELESS SENSOR NETWORK UNDER HELLO FLOOD ATTACK IJCNCJournal
Wireless sensor networks (WSNs) are highly used in many fields. The network consists of tiny lightweight
sensor nodes and is largely used to scan, detect, or monitor environments. Since these sensor nodes are
tiny and lightweight, they face limitations on resources such as power usage, processing of given tasks, and
radio frequency range. These limitations leave the network vulnerable to many different types of attacks, such
as the hello flood attack, black hole, Sybil attack, sinkhole, and many more. Among these, hello flood is
one of the most important attacks. In this paper, we have analyzed the performance of the hello flood attack and
compared network performance as the number of attackers increases. Network performance is evaluated
by modifying the ad-hoc on-demand distance vector (AODV) routing protocol using the NS2 simulator. It
has been tested under different scenarios, such as no attacker, a single attacker, and multiple attackers, to learn
how the network performance changes. The simulation results show that as the number of attackers
increases, the performance in terms of throughput and delay changes.
Sensor nodes are highly mobile, which makes the applications running on them face network-related problems like node failure, link failure, network-level disconnection, scarcity of resources, etc. Node failures and network faults need to be monitored continuously by supervising the network status, especially for critical applications like health monitoring systems. We propose a Node Monitoring Protocol (NMP) to monitor node health using agents and ensure that each node gets the promised quality of service. These nodes sense the environment and communicate important data to the sink or base station. To establish the correct event time, these nodes need to be synchronized with a global clock; therefore, time synchronization is a very important parameter. We have built a simulation environment for validating the Node Monitoring Protocol (NMP) to assess the reliability of health monitoring systems. The formal Specification and Description Language (SDL) tool has been used to validate NMP at design time in order to increase confidence in and the efficiency of the system.
Peer-to-peer streaming technology has become one of the major Internet applications as it offers the opportunity of broadcasting high-quality video content to a large number of peers at low cost. It is widely accepted that with efficient utilization of the peers' and server's upload capacities, peers can enjoy watching a high-bit-rate video with minimal end-to-end delay. In this paper, we present a practical scheduling algorithm that works in the challenging condition where no spare capacity is available, i.e., it maximally utilizes the resources and broadcasts the maximum streaming rate. Each peer contacts only a small number of neighbours in the overlay network and autonomously subscribes to sub-streams according to a budget model in such a way that the number of peers forwarding exactly one sub-stream is maximized. The hop-count delay is also taken into account to construct short-depth trees. Finally, we show through simulation that peers dynamically converge to an efficient overlay structure with a short hop-count delay. Moreover, the proposed scheme performs well in the homogeneous case and outperforms SplitStream in all simulated scenarios.
These days, considering the expansion of networks, the dissemination of information has become a significant topic for researchers. In social networks, beyond social structures and people's influence on each other, examples include increasing sales profit, publishing news or rumors, and spreading or diffusing an idea. In social societies, people affect each other, and with one individual's membership, his friends may join that group as well. When publishing a piece of news, independent of its nature, there are different ways to spread it. Since information is not always suitable and positive, this article introduces an immunization mechanism against such information. Immunization here means a kind of slow publishing of such information in the network. Therefore, this article tries to slow down the publishing of such information or even stop it. By comparing the presented immunization methods and introducing a rate-delay parameter, the immunization methods were evaluated and the most effective one identified. Among the existing and recommended immunization methods, the recommended methods also play an effective role in preventing the spread of malicious rumors.
HIDING A MESSAGE IN MP3 USING LSB WITH 1, 2, 3 AND 4 BITS IJCNCJournal
Steganography is the art of hiding information in ways that prevent the detection of hidden messages. This
paper presents a new method which randomly selects positions in an MP3 file to hide a secret text message
using the Least Significant Bit (LSB) technique. The secret text message is delimited at its start and end
locations by a unique signature or key. The methodology embeds one bit, two bits, three bits, or four bits from the
secret message into the MP3 file using LSB techniques. The evaluation and performance methods are based
on robustness (BER and correlation), imperceptibility (PSNR), and hiding capacity (the ratio between the sizes of the
text message and the MP3 cover) indicators. The experimental results show that the new method is more secure.
Moreover, the contribution of this paper is the provision of a robustness-based classification of LSB
steganography models depending on their occurrence in the embedding position.
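The core 1-to-4-bit LSB idea can be sketched on a plain byte stream. This is a generic byte-level illustration with sequential positions, not the paper's MP3 frame handling or its random position selection:

```python
def embed_lsb(cover: bytearray, message: bytes, nbits: int = 1) -> bytearray:
    """Hide `message` in the `nbits` least significant bits of each
    cover byte (generic byte-stream LSB sketch)."""
    bits = "".join(f"{byte:08b}" for byte in message)
    bits += "0" * (-len(bits) % nbits)       # pad to a multiple of nbits
    stego = bytearray(cover)
    mask = (1 << nbits) - 1
    for i in range(len(bits) // nbits):
        chunk = int(bits[i * nbits:(i + 1) * nbits], 2)
        # Clear the low nbits of the cover byte, then write the chunk.
        stego[i] = (stego[i] & ~mask & 0xFF) | chunk
    return stego

def extract_lsb(stego: bytes, length: int, nbits: int = 1) -> bytes:
    bits = "".join(f"{b & ((1 << nbits) - 1):0{nbits}b}" for b in stego)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, length * 8, 8))

cover = bytearray(range(256))
stego = embed_lsb(cover, b"hi", nbits=2)
print(extract_lsb(stego, 2, nbits=2))  # b'hi'
```

Raising nbits increases hiding capacity but perturbs the cover more, which is exactly the robustness-versus-imperceptibility trade-off the abstract measures with BER and PSNR.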
On the approximation of the sum of lognormals by a log skew normal distribution IJCNCJournal
Several methods have been proposed to approximate the sum of lognormal RVs. However, the accuracy of each method relies highly on the region of the resulting distribution being examined and on the individual lognormal parameters, i.e., mean and variance. No single method can provide the needed accuracy for all cases. This paper proposes a universal yet very simple approximation method for the sum of lognormals based on a log skew normal approximation. The main contribution of this work is an analytical method for log skew normal parameter estimation. The proposed method provides a highly accurate approximation to the sum of lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all cases.
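A quick Monte Carlo experiment shows why the log skew normal is a natural fit: the log of a sum of lognormals is close to normal but visibly skewed, so a plain lognormal (zero-skew) approximation misses that asymmetry. The component count and dB spread below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo samples of a sum of i.i.d. lognormals.
n_terms, n_samples = 6, 200_000
sigma_db = 8.0                          # dB spread of each component
sigma = sigma_db * np.log(10) / 10      # convert to natural-log sigma
x = rng.lognormal(mean=0.0, sigma=sigma, size=(n_samples, n_terms))
log_sum = np.log(x.sum(axis=1))

# Empirical moments of log(sum): a nonzero third moment is what a
# skew normal can capture and a plain normal cannot.
m = log_sum.mean()
s = log_sum.std()
skew = ((log_sum - m) ** 3).mean() / s ** 3
print(round(m, 2), round(s, 2), round(skew, 2))
```

The paper's contribution is an analytical (rather than Monte Carlo) route to the skew normal parameters that match such moments.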
Maximizing network interruption in wireless IJCNCJournal
With the colossal growth of wireless sensor networks (WSNs) in different applications starting from home
automation to military affairs, the pressure on ensuring security in such a network is paramount.
Considering the security challenges, developing a secured WSN system is a genuinely hard effort.
Moreover, as information technology grows popular, intruders keep devising new ideas to
break system security, harm the network, and degrade system quality, with the target of
taking control of the network to corrupt it or to extract benefits from it. Intruders corrupt the
system only when the security breaking cost (SBC) is low compared with the benefits attained or the
harm they can cause to others. In this paper, the authors define the term “maximizing network interruption
problem” and propose a technique, called the grid point approximation algorithm, to estimate the SBC of a
multi-hop WSN so that it can be made tougher for an intruder to break the system security. It is assumed
that the intruder has the complete picture of the entire network. The technique is designed from the
intruder’s point of view for completely jamming all the sensor nodes in the network through placing
jammers or malicious nodes strategically and at the same time keeping the number of jammer nodes to
minimum or near minimum. To the best of the authors’ knowledge, there is no work proposed so far of the
same kind. Experimental results with the changes of the different network parameters show that the
proposed algorithm is able to provide excellent performances to achieve the targets.
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE... IJCNCJournal
Transmission range assignment in clustered wireless networks is the bottleneck in balancing
energy conservation against the connectivity needed to deliver data to the sink or gateway node. The aim of this
research is to optimize energy consumption by reducing the transmission ranges of the nodes
while maintaining a high probability of end-to-end connectivity to the network's data sink. We modified
the approach given in [1] to achieve more than 25% power saving by reducing the cluster head (CH)
transmission range of the backbone nodes in a multihop wireless sensor network while ensuring at least
95% end-to-end connectivity probability.
PERFORMANCE ANALYSIS OF MULTI-PATH TCP NETWORK IJCNCJournal
MPTCP, proposed by an IETF working group, allows a single TCP stream to be split across multiple
paths. It has obvious benefits in performance and reliability. MPTCP has been implemented in Linux-based
distributions and can be compiled and installed for both real and experimental scenarios. In this
article, we provide performance analyses of MPTCP with a laptop connected to a WiFi access point and a 3G
cellular network at the same time. We show experimentally that MPTCP outperforms regular TCP over the
WiFi or 3G interfaces. We also compare four types of congestion control algorithms for MPTCP that are
implemented in the Linux kernel. Results show that the Alias Linked Increase congestion control
algorithm outperforms the others under normal traffic load, while the Balanced Linked Adaptation algorithm
outperforms the rest when the paths are shared with heavy traffic, which is not supported by MPTCP.
APPROXIMATING NASH EQUILIBRIUM UNIQUENESS OF POWER CONTROL IN PRACTICAL WSNS IJCNCJournal
Transmission power has a major impact on link and communication reliability and on network lifetime in Wireless Sensor Networks. We study power control in a multi-hop Wireless Sensor Network where nodes' communications interfere with each other. Our objective is to determine each node's transmission power level so as to reduce communication interference and keep energy consumption to a minimum. We propose a potential game approach to obtain the unique equilibrium of the network transmission power allocation. The unique equilibrium is located in a continuous domain; however, radio transceivers accept only discrete values for the transmission power level setting. We study the viability and performance of mapping the continuous solution from the potential game to the discrete domain required by the radio. We demonstrate the success of our approach through TOSSIM simulation when nodes use the Collection Tree Protocol for routing the data. We also show results of our method from the Indriya testbed, comparing it with the case where the motes use the Collection Tree Protocol with the maximum transmission power.
ADAPTIVE HANDOVER HYSTERESIS AND CALL ADMISSION CONTROL FOR MOBILE RELAY NODES IJCNCJournal
The aim of equipping a wireless network with a mobile relay node is to support broadband wireless communications for vehicular users and their devices. The high mobility of vehicular users, possibly at a very high velocity in the area in which two cells overlap, could cause the network to suffer from a reduced handover success rate and, hence, increased radio link failure. The combined impact of these problems is service interruptions to vehicular users. Thus, handover schemes are crucial in solving these problems. In this work, we first present an adaptive handover hysteresis scheme for the wireless network with mobile relay nodes in the high-speed train scenario. Specifically, our proposed adaptive hysteresis scheme is based on the velocity of the train. Second, the handover call dropping probability is reduced by introducing a modified call admission control scheme to support radio resource reservation for handover calls, prioritizing handover calls of the mobile relay over other calls. The proposed solution, in which the adaptive parameter is combined with call admission control, is evaluated by system-level simulation. Our simulation results illustrate an increased handover success rate and reduced radio link failures.
Using spectral radius ratio for node degree IJCNCJournal
In this paper, we show that the spectral radius ratio for node degree can be used to analyze the variation of node degree during the evolution of complex networks. We focus on three commonly studied models of complex networks: random networks, scale-free networks, and small-world networks. The spectral radius ratio for node degree is defined as the ratio of the principal (largest) eigenvalue of the adjacency matrix of a network graph to the average node degree. During the evolution of each of the above three categories of networks (using the appropriate evolution model for each category), we observe the spectral radius ratio for node degree to exhibit a high to very high positive correlation (0.75 or above) with the coefficient of variation of node degree (the ratio of the standard deviation of node degree to the average node degree). We show that the spectral radius ratio for node degree can be used as the basis to tune the operating parameters of the evolution models for each of the three categories of complex networks as well as to analyze the impact of specific operating parameters for each model.
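The metric as defined above is directly computable. The sketch below evaluates it for two five-node graphs chosen for contrast: a star (high degree variance) and a cycle (zero degree variance); the graphs are illustrative examples, not from the paper:

```python
import numpy as np

def spectral_radius_ratio(adj):
    """Ratio of the adjacency matrix's principal (largest) eigenvalue
    to the average node degree."""
    eigvals = np.linalg.eigvalsh(adj)     # symmetric matrix: real eigenvalues
    principal = eigvals[-1]               # eigvalsh returns ascending order
    avg_degree = adj.sum(axis=1).mean()
    return principal / avg_degree

# Star graph on 5 nodes: node 0 linked to all others (high degree variance).
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1
# 5-node cycle: every node has degree 2 (no degree variance at all).
cycle = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)

print(spectral_radius_ratio(star))   # 2 / 1.6 = 1.25
print(spectral_radius_ratio(cycle))  # exactly 1.0 for a regular graph
```

A regular graph gives exactly 1, and the ratio grows with degree heterogeneity, which is why it tracks the coefficient of variation of node degree.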
Enhancement in Web Service Architecture IJERA Editor
Web services provide a standard means of interoperating between different software applications, running on a
variety of platforms and/or frameworks. Web services are increasingly used to integrate and build business
applications on the Internet. Failure of web services is not acceptable in many situations, such as online banking,
so fault tolerance is a key challenge for web services. This paper elaborates the concept of web service
architecture and its enhancement. Traditional web service architecture lacks facilities to support fault tolerance.
To better cope with the fundamental issues of the traditional client-server based web service architecture, peer-to-peer
web service architecture has been introduced. The purpose of this paper is to elaborate the architecture,
construction methods, and steps of web services; the possible weaknesses in scalability and fault tolerance in the
traditional client-server architecture; and the peer-to-peer web service technology that has evolved as a solution.
Testing of web services Based on Ontology Management Service IJMER
AVAILABILITY METRICS: UNDER CONTROLLED ENVIRONMENTS FOR WEB SERVICES ijwscjournal
Web Services technology has the potential to cater to an enterprise's needs, providing the ability to integrate different systems and application types regardless of their platform, operating system, programming language, or location. Web Services can be deployed in a traditional, centralized, or brokered client-server approach, or in a peer-to-peer manner. A Web Service can act as a server, providing functionality to a requester, or as a client, receiving functionality from another service. From the performance perspective, the availability of Web Services plays an important role among many parameters.
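As a concrete illustration of the availability parameter the abstract highlights, here is a minimal sketch of the two standard ways availability is usually quantified: as the fraction of successful probes under a controlled test environment, and as the steady-state ratio MTTF / (MTTF + MTTR). Function names and the numbers used are invented for the example, not taken from the paper.

```python
# Minimal sketch: estimating Web service availability.
# probes: True = service responded, False = timeout/fault.

def availability_from_probes(probes):
    """Point availability estimate: fraction of successful probes."""
    if not probes:
        raise ValueError("no probe data")
    return sum(probes) / len(probes)

def steady_state_availability(mttf_hours, mttr_hours):
    """Classic steady-state availability: MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

probes = [True, True, False, True, True, True, True, False, True, True]
print(availability_from_probes(probes))        # 0.8
print(steady_state_availability(990.0, 10.0))  # 0.99
```

Under a controlled (emulated WAN) environment, the probe-based estimate can be measured while injecting network faults, which is exactly the kind of setup the toolkit papers in this listing describe.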
Web services testing comes with its own challenges. What is the right framework for automated testing of web services? Find out in this blog post.
EARLY PERFORMANCE PREDICTION OF WEB SERVICES ijwscjournal
A Web service is an interface that implements business logic. Performance is an important quality aspect of Web services because of their distributed nature, so predicting the performance of Web services during the early stages of software development is significant. In this paper we model a Web service using Unified Modeling Language use case, sequence, and deployment diagrams. We obtain performance metrics by simulating the Web service model using the simulation tool Simulation of Multi-Tier Queuing Architecture, and we identify the bottleneck resources.
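The abstract above predicts performance from a multi-tier queuing model. As a rough illustration of that style of analysis (not the paper's actual tool), here is a minimal sketch that treats each tier as an open M/M/1 queue and flags the tier with the highest utilization as the bottleneck; the tier names, service rates, and arrival rate are invented for the example.

```python
# Minimal sketch: per-tier M/M/1 performance prediction for a
# three-tier Web service. Rates are illustrative assumptions.

def mm1_metrics(arrival_rate, service_rate):
    """Utilization and mean response time for one M/M/1 tier."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        return rho, float("inf")            # unstable (saturated) tier
    return rho, 1.0 / (service_rate - arrival_rate)

tiers = {"web": 120.0, "app": 80.0, "db": 60.0}   # service rates (req/s)
lam = 50.0                                        # request arrival rate

results = {name: mm1_metrics(lam, mu) for name, mu in tiers.items()}
bottleneck = max(results, key=lambda n: results[n][0])
total_response = sum(r for _, r in results.values())
print(bottleneck)                  # 'db' has the highest utilization
print(round(total_response, 4))    # end-to-end mean response time (s)
```

In a real early-prediction workflow, the service rates would come from the annotated UML deployment and sequence diagrams rather than being hard-coded.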
Referring Expressions with Rational Speech Act Framework: A Probabilistic App... IJDKP
This paper focuses on a referring expression generation (REG) task in which the aim is to pick out an object in a complex visual scene. One common theoretical approach is to model the task as a two-agent cooperative scheme in which a 'speaker' agent generates the expression that best describes a targeted area and a 'listener' agent identifies the target. Several recent REG systems have used deep learning approaches to represent the speaker/listener agents. The Rational Speech Act (RSA) framework, a Bayesian approach to pragmatics that can predict human linguistic behavior quite accurately, has been shown to generate high-quality and explainable expressions on toy datasets involving simple visual scenes. Its application to large-scale problems, however, remains largely unexplored. This paper applies a combination of the probabilistic RSA framework and deep learning approaches to larger datasets involving complex visual scenes in a multi-step process, with the aim of generating better-explained expressions. We carry out experiments on the RefCOCO and RefCOCO+ datasets and compare our approach with other end-to-end deep learning approaches as well as a variation of RSA to highlight our key contribution. Experimental results show that while achieving lower accuracy than SOTA deep learning methods, our approach outperforms a similar RSA approach in human comprehension and has an advantage over end-to-end deep learning under limited-data scenarios. Lastly, we provide a detailed analysis of the expression generation process with concrete examples, thus providing a systematic view of error types and deficiencies in the generation process and identifying possible areas for future improvement.
Similar to A NOVEL METHOD TO TEST DEPENDABLE COMPOSED SERVICE COMPONENTS (20)
Vehicular Ad Hoc Networks (VANETs) have become a viable technology for improving traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance overall routing efficiency and reduce overhead. Analysis found that the current OLSR protocol suffers from problems such as redundant HELLO and TC message packets and failure to update routing information in time, so a WS-OLSR routing protocol based on an improved MPR selection algorithm is proposed. First, factors such as node mobility and link changes are considered together to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled according to those changes. Second, a new MPR selection algorithm is proposed that takes link stability and node properties into account. Finally, we evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of the improved MPR selection algorithm compared to traditional approaches.
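The paper's improved algorithm is not reproduced here, but the classic greedy MPR heuristic it builds on can be sketched as follows: pick the smallest set of one-hop neighbours that covers every two-hop neighbour, taking sole providers first. The topology is an invented toy example.

```python
# Minimal sketch of the classic greedy MPR selection heuristic used
# in OLSR (RFC 3626 style). one_hop maps each 1-hop neighbour to the
# set of nodes reachable through it.

def select_mprs(one_hop):
    two_hop = set().union(*one_hop.values()) - set(one_hop)
    uncovered, mprs = set(two_hop), set()
    # 1) neighbours that are the only path to some 2-hop node
    for n, reach in one_hop.items():
        sole = {t for t in reach & two_hop
                if all(t not in r for m, r in one_hop.items() if m != n)}
        if sole:
            mprs.add(n)
            uncovered -= reach
    # 2) greedily add the neighbour covering most uncovered 2-hop nodes
    while uncovered:
        best = max(one_hop, key=lambda n: len(one_hop[n] & uncovered))
        mprs.add(best)
        uncovered -= one_hop[best]
    return mprs

one_hop = {"A": {"X", "Y"}, "B": {"Y"}, "C": {"Y", "Z"}}
print(select_mprs(one_hop))   # {'A', 'C'} covers X, Y, Z
```

The paper's variant would additionally weight candidates by link stability and node mobility rather than by raw coverage alone.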
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ... IJCNCJournal
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve a single application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic type and the fulfillment of application requirements, such as reliability and timeliness. Nevertheless, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning in such networks a non-trivial task. In this context, the current paper introduces a novel Medium Access Control (MAC) strategy for the regular traffic applications of WBANs, designed to significantly enhance efficiency when compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
May 2024 - Top 10 Read Articles in Computer Networks & Communications IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open access peer-reviewed journal that publishes articles that contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of computer networks and data communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and to establish new collaborations in these areas.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm... IJCNCJournal
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors that have a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. We propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs) in this paper. Our method formulates topology control as a nonlinear programming (NP) problem with the objective of optimizing two metrics: maximum communication range and desired degree. This NP problem is solved at the SDWSN controller by employing a genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
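The GA-at-the-controller idea in the abstract can be sketched roughly as follows: each genome assigns a transmit range to every node, and fitness penalizes both total range (an energy proxy) and deviation from the desired degree. Node layout, fitness weights, and all GA parameters are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch: GA picking per-node transmit ranges so that node
# degree stays near a desired value while total range stays small.

import random

random.seed(7)
NODES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]
DESIRED_DEGREE, RANGES = 3, [20.0, 35.0, 50.0, 65.0]

def degree(i, r):
    xi, yi = NODES[i]
    return sum(1 for j, (xj, yj) in enumerate(NODES)
               if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= r * r)

def fitness(genome):                      # lower is better
    return sum(r + 10.0 * abs(degree(i, r) - DESIRED_DEGREE)
               for i, r in enumerate(genome))

def evolve(pop_size=30, generations=40):
    pop = [[random.choice(RANGES) for _ in NODES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(NODES))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # mutation
                child[random.randrange(len(NODES))] = random.choice(RANGES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(round(fitness(best), 1))
```

In the paper this optimization runs at the SDN controller, which has the global view needed to evaluate degrees; the sketch simply evaluates them locally.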
Multi-Server User Authentication Scheme for Privacy Preservation with Fuzzy C... IJCNCJournal
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy-preserving services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial of service and privacy attacks, revealing a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
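The fuzzy commitment primitive named in the title can be illustrated with a toy construction: commit to a secret codeword masked by an enrollment template, so that a close-enough template presented later reopens the commitment. A 3x repetition code stands in for a real error-correcting code, and all values are invented; this is a sketch of the primitive, not the paper's scheme.

```python
# Minimal sketch of a fuzzy commitment with a repetition-3 ECC.

def encode(bits):                     # repetition-3 encode
    return [b for b in bits for _ in range(3)]

def decode(bits):                     # majority vote per 3-bit group
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

secret = [1, 0, 1, 1]
template = [0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0]   # enrollment template
commitment = xor(encode(secret), template)         # stored server-side

noisy = template[:]
noisy[4] ^= 1                                      # one bit flips at login
recovered = decode(xor(commitment, noisy))
print(recovered == secret)   # True: small noise is corrected by the ECC
```

The security intuition is that the commitment reveals the codeword only masked by the template, while the ECC absorbs the small mismatch between enrollment and login readings.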
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation Systems IJCNCJournal
In a Vehicular Ad-Hoc Network (VANET), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular network safety applications facilitate the transmission of messages between vehicles that are near each other, at regular intervals, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles' identities and their associated owners or drivers. Privacy schemes prevent unauthorized parties from linking a vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent parties could use to monitor the vehicle over a long period. The acquisition of this shared data can facilitate the reconstruction of vehicle trajectories, thereby posing a risk to the privacy of the driver. Developing effective and scalable privacy-preserving protocols for communication in vehicle networks is therefore of the highest priority. These protocols aim to reduce the transmission of confidential data while ensuring the required level of communication. This paper proposes an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme utilizes a concept called the silent period, which involves changing the pseudonym of a vehicle periodically based on the tracking of neighboring vehicles. The pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET. By changing the pseudonym regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity.
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicate that APV achieves a clear improvement in privacy metrics, and it is evident that APV offers enhanced safety for vehicles in smart-city transportation.
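The silent-period idea behind pseudonym rotation can be sketched roughly as follows: rotate only when enough neighbours are nearby to provide an anonymity set, then stay silent briefly so the old and new pseudonyms cannot be trivially linked. Class names, thresholds, and timings are assumptions for illustration, not the APV specification.

```python
# Minimal sketch: silent-period pseudonym rotation for a vehicle.

import secrets

class PseudonymManager:
    def __init__(self, min_neighbours=3, silent_period_s=5.0):
        self.min_neighbours = min_neighbours
        self.silent_period_s = silent_period_s
        self.pseudonym = self._fresh()
        self.silent_until = 0.0

    def _fresh(self):
        return "PSN-" + secrets.token_hex(4)   # random temporary identifier

    def maybe_rotate(self, now_s, neighbour_count):
        """Rotate only when mixing with neighbours actually hides the change."""
        if neighbour_count >= self.min_neighbours and now_s >= self.silent_until:
            self.pseudonym = self._fresh()
            self.silent_until = now_s + self.silent_period_s
        return self.pseudonym

    def may_transmit(self, now_s):
        return now_s >= self.silent_until

mgr = PseudonymManager()
p0 = mgr.pseudonym
mgr.maybe_rotate(now_s=10.0, neighbour_count=5)     # rotates; silent until t=15 s
print(mgr.pseudonym != p0, mgr.may_transmit(12.0))  # True False
```

A real scheme would additionally bind each pseudonym to a certificate from a trusted authority; the sketch only shows the rotation and silencing logic.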
DEF: Deep Ensemble Neural Network Classifier for Android Malware Detection IJCNCJournal
Malware is one of the threats to the security of computer networks and information systems. Since malware instances are abundantly available, there is increased interest among researchers in the usage of Artificial Intelligence (AI). Of late, AI-enabled methods such as machine learning (ML) and deep learning have paved the way for solving many real-world problems. As these are learning-based approaches, accumulated training samples help improve the quality of training and thus raise malware detection accuracy. Existing deep learning methods focus on learning-based malware detection systems; however, there is a need to improve the state of the art through an ensemble approach. Towards this end, in this paper we propose a framework known as the Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples: from a given malware instance a grayscale image is generated, and a separate process extracts the opcode sequences. Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) techniques are used to process the grayscale image and opcode sequence, respectively. Afterwards, a stacking ensemble is employed in order to achieve efficient malware detection and classification. Malware samples collected from Internet sources and Microsoft are used for the empirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize our framework, and another algorithm, named Pre-Process, assists EL-AML by obtaining the intermediate features required by the CNN and LSTM. The empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF Based Intrusion Detection System for Big Data IoT Traffic IJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast network traffic datasets, Big Data, are challenging to store, handle, and analyse on a single computer. In this paper we developed a parallel implementation of the Non-Negative Matrix Factorization technique on a High Performance Computer (HPC) as the engine of an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets, on the order of millions of samples, are distributed evenly across all the computing cores for both storage and speedup purposes. The distribution of the computing tasks involved in the matrix factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-NMF-IDS give better results than traditional ML-based intrusion detection systems: we could train the model on datasets of one million samples in only 31 seconds instead of 40 minutes on a single processor, a speed-up of 87 times. Moreover, we obtained an excellent detection accuracy rate of 98% on the KDD dataset.
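The NMF engine at the core of such a detector rests on the standard multiplicative-update factorization V ≈ W·H, after which flows can be scored by reconstruction error. The sketch below shows the serial computation only (the paper's contribution is the HPC parallelization, which is not reproduced here); the data is random and purely illustrative.

```python
# Minimal sketch: NMF via Lee-Seung multiplicative updates, then
# per-flow reconstruction error as an anomaly score.

import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, keep >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, keep >= 0
    return W, H

V = np.random.default_rng(1).random((50, 8))    # 50 flows x 8 features
W, H = nmf(V, rank=3)
errors = np.linalg.norm(V - W @ H, axis=1)      # per-flow reconstruction error
print(W.shape, H.shape, round(float(errors.mean()), 3))
```

Flows with unusually high reconstruction error do not fit the learned low-rank structure of normal traffic, which is the usual basis for flagging them as suspicious.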
IoT Guardian: A Novel Feature Discovery and Cooperative Game Theory Empowered... IJCNCJournal
Cyber intrusion attacks increasingly target the Internet of Things (IoT) ecosystem, exploiting vulnerable devices and networks. Malicious activities must be identified early to minimize damage and mitigate threats. Using actual benign and attack traffic from the CICIoT2023 dataset, this work aims to evaluate and benchmark machine-learning techniques for IoT intrusion detection. There are four main phases to the system. First, the CICIoT2023 dataset is refined to remove irrelevant features and clean up missing and duplicate data. The second phase employs statistical models and artificial intelligence to discover novel features. The most significant features are then selected in the third phase based on cooperative game theory. Using the original CICIoT2023 dataset and a dataset containing only novel features, we train and evaluate a variety of machine learning classifiers. On the original dataset, Random Forest achieved the highest accuracy of 99%. With novel features, Random Forest's performance dropped only slightly (96%) while other models achieved significantly lower accuracy. Overall, the work makes substantial contributions to tailored feature engineering, feature selection, and rigorous benchmarking of IoT intrusion detection techniques. IoT networks and devices face continuously evolving threats, making it necessary to develop robust intrusion detection systems.
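Cooperative-game feature selection typically scores features by a solution concept such as the Shapley value: each feature's average marginal contribution to a coalition's "detection accuracy". As an illustration, here is an exact Shapley computation over a synthetic value function; the feature names, values, and synergy are invented, not the paper's actual game over CICIoT2023 features.

```python
# Minimal sketch: exact Shapley values for a tiny feature-selection game.

from itertools import permutations

FEATURES = ["pkt_rate", "flow_dur", "ttl_var", "port_entropy"]

def value(coalition):
    """Synthetic 'detection accuracy' of a feature subset."""
    base = {"pkt_rate": 0.30, "flow_dur": 0.20,
            "ttl_var": 0.05, "port_entropy": 0.25}
    v = sum(base[f] for f in coalition)
    if {"pkt_rate", "port_entropy"} <= set(coalition):
        v += 0.10                      # synergy between two features
    return v

def shapley(features, v):
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:               # average marginal contribution
        seen = []
        for f in order:
            phi[f] += v(seen + [f]) - v(seen)
            seen.append(f)
    return {f: s / len(perms) for f, s in phi.items()}

phi = shapley(FEATURES, value)
top = sorted(phi, key=phi.get, reverse=True)[:2]
print(top)   # ['pkt_rate', 'port_entropy']
```

Exact enumeration is only feasible for a handful of features; real pipelines approximate the Shapley value by sampling permutations, but the efficiency property (values sum to the grand coalition's worth) is the same.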
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C... IJCNCJournal
IoT networking uses real items as stationary or mobile nodes, and mobile nodes complicate networking. Internet of Things (IoT) networks carry a lot of control overhead messages because devices are mobile. These signals are generated by the constant flow of control data such as device identity, geographical positioning, node mobility, device configuration, and others. Network clustering is a popular method for managing this overhead communication, and many cluster-based routing methods have been developed to address system restrictions. Node clustering based on the Internet of Things (IoT) protocol may be used to cluster all network nodes according to predefined criteria, with each cluster assigned a Smart Designated Node (SDN); SDN-based cluster management is efficient, while many other intelligent nodes remain in the network and the network design spreads these signals. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. An existing method builds a new sub-area clustered topology. The Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes, which will facilitate the secure and reliable interchange of healthcare data between professionals and patients. NCIoT organizes nodes in the IoT by grouping them together based on their proximity and also picks SDN routes for these nodes. This approach involves selecting one option from a range of choices and preparing for likely outcomes; addressing limitations on activities is a primary focus during the review process. Predictive inquiry analyzes data to forecast and anticipate future events. The Predictive Inquiry Small Packets (PISP) mechanism improves the backup system and works with the SDN to establish a routing information table for each intelligent node, resulting in higher routing performance.
Both principal and secondary routes are available for use. The simulation findings indicate that the NCIoT algorithms outperform CBR protocols: enhancements lead to a substantial 78% boost in network performance, and end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are implemented and evaluated against those from the literature.
Both principal and secondary roads are available for use. The simulation findings indicate that NCIoT algorithms outperform CBR protocols. Enhancements lead to a substantial 78% boost in network performance. In addition, the end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are constructed and evaluated against academic ones.
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
International Journal of Computer Networks & Communications (IJCNC) Vol.8, No.3, May 2016
DOI: 10.5121/ijcnc.2016.8311
A NOVEL METHOD TO TEST DEPENDABLE
COMPOSED SERVICE COMPONENTS
Khaled Farj¹ and Adel Smeda²
¹ Faculty of Science, University of Al-Jabel Al-Gharbi, P.O. Box 64200, Gharian, Libya
² Faculty of Accounting, University of Al-Jabel Al-Gharbi, Gharian, Libya
ABSTRACT
Assessing the performance and dependability of Web service systems is crucial for the development of
today's applications. The performance and Fault Tolerance Mechanisms (FTMs) of composed
service components are hard to measure at design time, because service instability is often caused by the
nature of the network conditions, and using a real Internet environment for testing systems is difficult to set up
and control. We have introduced a fault injection toolkit that emulates a WAN within a LAN environment
between composed service components, offers full control over the emulated environment, and is capable
of injecting both network-related faults and application-specific faults. The toolkit also generates
background workloads on the system under test so as to produce more realistic results. We describe an
experiment that uses our toolkit to examine the impact of fault tolerance protocols deployed at a
service client.
KEYWORDS
Web Services, Fault Tolerance Methodologies, Software Fault Injection, Composed Web Services.
1. INTRODUCTION
Web services are becoming progressively more essential in information systems. Web
services are software applications that operate independently and offer services over the
Internet to other software applications, including web applications and other Web services. Web
services have changed the way we look at the Internet, from a repository of data into a
repository of services [1], because of their capability to solve integration problems in Internet
applications. By using Web services, Internet applications can communicate with other Internet
applications regardless of differences in programming language or platform.
Web services can be adopted to develop information systems by integrating services to
obtain complex composed services. Web service technology is being used to allow the creation
of complex systems, composed of simple Web services, which exchange messages to form
complex conversation schemas [2]. These services are usually developed and administered by
different service providers, run on different platforms, and are distributed over the Internet in
different locations.
The quality of such complex systems depends on the quality of the network environment and, of
course, on the quality of the Web service applications participating in forming such systems. One
of the obstacles to the adoption of the Web service paradigm in such composed systems is the
problem of measuring their overall quality. Services are inherently distributed and heterogeneous,
and are often invoked with little knowledge of their reliability and performance.
In such applications, service composition is typically dynamic: services are discovered,
selected, and composed, possibly at runtime. In such services it is hard to assess behaviour and
performance in the presence of faults. As Web services are usually distributed and operated over
the Internet, there is no guarantee that all the components of the service are highly reliable. In
[15] it is reported that network faults such as message duplication, reordering, loss or corruption
have an effect on traditional distributed systems such as CORBA applications. In addition, it has
been concluded that unstable Internet environments and server connections can lead to
unreliability of Web service systems [3].
As Web service systems are subject to many network faults, such as delaying, dropping,
damaging, and reordering messages, and also to software faults within the service, testing the
performance and fault tolerance of Web services has become an active research area.
Software fault injection is a well-proven method of testing and assessing the reliability of a
system [1]. In this paper, we describe a toolkit for assessing the performance and FTMs of either
a single Web service or composed services, without modifying the system being tested: no
recompiling or patching is necessary. Furthermore, the toolkit generates a background
workload to emulate real networks more accurately. Our toolkit also runs independently of
the hosting environment, for portability.
In Section 2, other tools for injecting faults into Web services are described. In Sections 3 and 4,
we explain the design and implementation of our Network Fault Injector Service (NetFIS). In
Section 5, an example experiment is described which uses NetFIS to test and assess the
dependability of a bioinformatics Web service system. This Web service system uses a
technique termed Mediation [4] to provide fault tolerance mechanisms. We evaluate the
performance and the fault tolerance mechanisms of the service. Section 6 describes our
conclusions and our future work.
2. RELATED WORK
There are many software fault injection tools for testing and assessing distributed systems in
general, and other tools for testing Web service systems in particular. Some well-known fault
injection tools, such as DOCTOR [5] and Orchestra [6], support network-level fault injection
and could potentially be used to inject faults into Web service systems; however, both were
designed to test network protocols and therefore do not decode complete middleware message
sequences. There are many other fault injection tools for testing Web service applications. In [7]
a testing tool is developed for generating and validating test cases. The tool starts from the
WSDL schema types and introduces operators that generate a request with random data,
together with a test script that manipulates and modifies the request parameters. In [8] another
technique for testing Web service systems, based on mutation analysis, is proposed: a mutant
WSDL document is generated by applying mutant operators to the elements of the original
WSDL document. A tool called WSDLTest [9] generates Web service requests from the WSDL
schemas, tunes them in accordance with pre-conditions written by the user, and verifies the
responses against post-conditions offline. In [10] an assessment tool is proposed based on rules
defined in an XML Schema or DTD. The tool modifies the parameter values in requests using
boundary value testing, and perturbs interactions using mutation analysis. Another tool [11]
introduces a framework that intercepts and perturbs exchanged SOAP messages, injecting faults
by corrupting the encoding schema address, dropping messages, and inserting random text into
the body of SOAP messages. The work described in [12] helps Web service requesters create
test case scenarios in order to select a suitable and correct Web service application from public
repositories. It proposes a testing method that injects faults into SOAP messages to test the
parameter boundaries specified in the WSDL document. The WS-FIT tool [13] injects
faults by modifying exchanged SOAP messages using scripts. The function parameters are
manipulated using the value boundaries specified by the tester. The WSInject [14] tool injects
both communication and interface faults, and can inject faults to test either a single
Web service or a composed service system.
A common characteristic of previous testing tools is that they focus mostly on testing
single Web services in isolation (except for a few, such as WSInject). Furthermore, most of
them inject faults only by modifying the exchanged SOAP messages and do not
emulate additional workload in the system, which would most likely give rise to different results:
different workloads can lead to different assessment and testing results, because they cause
different system activation patterns [15]. Moreover, most previous testing methods focus
on testing only the Web service provider, not the Web service requester. In a composed service,
a Web service provider often needs to act as a requester to another Web service
provider in order to serve a request, which makes it essential to test the Web service
requester to prevent the whole system from failing to provide the required service.
In the testing method of [16], faults are injected at the IP level in order to investigate the effect of
TCP retransmission mechanisms on Web services. This allows the authors to assess the relationship
between TCP and WSRM time-out and retransmission mechanisms. By contrast, our testing
toolkit drops the whole exchanged message, which may consist of more than one
packet. In this way, the consequences of injecting such faults propagate to the application
level, so we can examine whether FTMs at the application level can handle such faults. In
[17] a methodology for assessing and testing the performance of FTMs applied to Web service
applications has been introduced, and the overhead of the tool has been tested.
In this paper, we propose a fault injection testing method that adopts the architecture of a Wide
Area Network emulator used for testing and assessing other distributed systems, and extends it to test
and assess composed Web service systems. In addition, two classes of faults are injected,
communication faults and software-specific faults, without modification to the system under test.
The method also generates an additional workload on the tested system in order to produce more
realistic testing results.
3. NETWORK FAULT INJECTION METHOD
The Network Fault Injector Service (NetFIS) is a Web service system that implements our fault
injection method for assessing the performance and fault tolerance mechanisms of Web services.
It is deployed between the Web service client and the Web service provider, and requires the
system under test to be distributed in a modular fashion of services interacting via messages;
messages can therefore be manipulated and modified to emulate the incorrect behaviour of faulty
services. NetFIS intercepts the request from the Web service client, applies the network
emulation, injects an appropriate fault (if any), and then forwards the request to the
Web service provider. Similarly, it intercepts the response from the Web service provider,
applies the network emulation, injects a fault (if any), and then forwards it to the Web service
client. NetFIS gives Web service applications the sense of running over a WAN without any
modification to the applications. Our testing method requires no modifications to the underlying
operating system, networking libraries, or the Web service being tested. NetFIS is able to emulate
WAN behaviour and inject network faults such as dropping, delaying, and randomly corrupting
bytes in the body of SOAP messages; these network faults have been used to carry out the experiments
reported hereafter. In addition, software faults can be injected into individual RPC
parameters, based on the relevant Web service parameter definitions (including data
types) obtained from WSDL files; however, this class of faults will not be injected or detailed in this
paper.
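The intercept-inject-forward flow described above can be sketched as a small message-level fault injector. This is a minimal illustration under assumed fault types and probabilities, not the NetFIS implementation itself; the function name and parameters are hypothetical.

```python
import random
import time

# Sketch of NetFIS-style message-level fault injection: a proxy intercepts
# a SOAP message and may drop, delay, or corrupt it before forwarding.
# Fault probabilities are illustrative assumptions.

def inject_faults(message: bytes, drop_rate=0.05, delay_rate=0.10,
                  corrupt_rate=0.05, max_delay_s=2.0, rng=random):
    """Return the (possibly modified) message, or None if it was dropped."""
    if rng.random() < drop_rate:
        return None                              # emulate message loss
    if rng.random() < delay_rate:
        time.sleep(rng.uniform(0, max_delay_s))  # emulate network delay
    if rng.random() < corrupt_rate:
        data = bytearray(message)
        pos = rng.randrange(len(data))
        data[pos] ^= 0xFF                        # flip one byte in the SOAP body
        return bytes(data)
    return message

request = b"<soap:Envelope>...</soap:Envelope>"
# Force a corruption (corrupt_rate=1.0) with a seeded RNG for repeatability.
forwarded = inject_faults(request, drop_rate=0.0, delay_rate=0.0,
                          corrupt_rate=1.0, rng=random.Random(42))
```

A real deployment would apply this hook to the serialized SOAP envelope on both the request and the response path, as the FIS/FIC split in Section 4 describes.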
The network emulator is configurable and can control every property of the target
emulated network environment. The graphical user interface (GUI) of the network emulator also
makes it possible to control the emulator at runtime, so as to capture the dynamic nature of
networks. Using the network topology configuration file, network traffic trace files and the
GUI, the performance and FTMs of composed service systems can be measured and assessed in a
controlled emulation of a target WAN. As composed service systems can be affected by
network delays, drops, errors, reordering and partitions, as well as by software faults, in
this experiment we restrict NetFIS to emulating only drops, errors and delays on the
messages exchanged between the system components, in order to measure the performance and
the FTMs applied at the Web service client side.
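A profile for such an emulated link might look like the following sketch. The field names, values, and trace-file path are purely illustrative assumptions, not NetFIS's actual configuration schema.

```python
# Illustrative emulated-WAN profile for one link between two composed
# service components. All field names and values are assumptions made
# for this example, not NetFIS's real configuration format.
wan_profile = {
    "link": ("client_FIS", "provider_FIS"),
    "delay_ms": {"mean": 120, "jitter": 40},       # emulated one-way delay
    "drop_rate": 0.02,                             # fraction of messages lost
    "error_rate": 0.01,                            # fraction of messages corrupted
    "background_load": "traces/web_mix.trc",       # synthetic workload trace
}

def validate_profile(p):
    """Basic sanity checks a configuration loader might perform."""
    assert 0.0 <= p["drop_rate"] <= 1.0
    assert 0.0 <= p["error_rate"] <= 1.0
    assert p["delay_ms"]["mean"] >= 0
    return True

ok = validate_profile(wan_profile)
```

Exposing these fields per link is what lets the GUI adjust drop and delay rates at runtime, as the paragraph above describes.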
Before going into more detail about the proposed architecture of NetFIS, it is
worthwhile to elaborate on some of the major design issues.
3.1 APPLICATION LEVEL AND NETWORK LEVEL
Network-related faults, like corruption, reordering and dropping of packets, occur at the network
level. This includes the physical media and all the layers in the network stack. It follows logically
that intentional network-related faults are injected at the network level; this is done traditionally
to assess the reliability and performance of networking protocol stacks. However, there is
a very high probability that an injected fault (e.g. a corruption) will be detected by the
underlying network protocol stack at the other end [15]. Consequently, the application or
middleware being tested will not notice the occurrence of an error, and the reliability
measures built there will go untested. Moreover, faults injected at the network level
tamper with packets, not with application messages.
Our fault injection method injects communication faults and tests their impact on the
performance of composed service systems and on the fault tolerance mechanisms
applied to them. Therefore, it is more efficient and desirable to inject faults at the
application/middleware level instead of at the network level. Communications between the
system services are intercepted at the application level and faults are injected using proxies.
We elaborate on the architecture of this choice in later sections.
3.2 NETWORK EMULATION
Since the services participating in a composed service usually run over the
Internet, the performance and fault tolerance of a composed service are very difficult to
measure at design time. Testing such systems requires a distributed testing environment such
as a wide area network (WAN) or the Internet; the performance and fault tolerance of the
system could then be tested by deploying and running it over a WAN or the Internet.
However, using the Internet or a WAN for the sake of testing is usually impractical. It involves a
high cost in terms of time and of setting up a WAN or using the Internet purely for
testing. It is practically impossible to control such a dynamic environment, for example by adding
stress, load, or errors. Moreover, errors and faults may take a long time to occur, and some
errors may not occur without a certain chain of events.
A more realistic approach is to run the system on one machine or over a LAN using a WAN emulation
system, which gives the sense that the system is running over a WAN and provides all the
properties of a dynamic WAN like the Internet. This helps testers assess the
performance and fault tolerance by running the system under different circumstances, such as
different network traffic loads, delays, loss rates, and so on. By using network emulation, not only
can the performance of the whole system be measured under different circumstances, but the
contribution of each service to the overall composed service system can also be measured and a
bottleneck service can be discovered. Such a runtime environment should also be able to inject
faults into the system under test, which is what this research proposes.
Based on the discussion above, the solution we propose in this paper is to emulate
customizable and controllable WANs over LANs. The Web service systems are therefore
assessed on virtual WANs that are very similar and comparable to the actual target WAN
environments. Testing over these virtual WANs does not suffer from the problems of real
WANs discussed above. The assumption made in this work is that the actual LANs used in
hosting the virtual WANs are very reliable and very fast, making uncontrolled faults
and delays negligible.
Our network emulation is based on the architecture of a fault injection testing method that was
used with successful results for testing CORBA applications [18]. The original testing method
emulates the behaviour of a WAN and injects network faults at the application level: the
messages exchanged between CORBA components are intercepted (using CORBA Interceptors),
and then network faults are injected.
However, the CORBA fault injection approach [18] has some shortcomings. At the CORBA
interceptor level the messages are already encoded in binary, so this method cannot
target particular elements in the message, such as function parameters in the
case of RPC, when injecting faults. Also, message corruption and dropping are injected only by throwing
exceptions. That is, the CORBA fault injection method only tests the system's ability to deal
with such exceptions, whereas it is more logical to inject the fault explicitly and
observe its effects on the system. Injecting faults such
as dropping and delaying messages can help developers assign a reasonable time period before
the system times out. The CORBA approach cannot help to distinguish cases
in which a message (or its acknowledgement) simply experiences a delay in the
network from those in which a message has actually been lost. If the time-out interval is made
too short, there is a risk of duplicated messages and, in some cases, reordering. If the
interval is made too long, the system becomes unresponsive.
All the issues discussed above have been taken into account in order to produce a WAN
emulation for our fault injection method. As discussed in the previous section, messages are
intercepted at the application level using proxies, so at this level the complete message
entities are captured and any particular part of a message can be manipulated. In addition,
network faults may be injected explicitly (dropped or corrupted messages). A reasonable
time-out period is chosen by testing the system under different
realistic delay-rate and drop-rate scenarios and then monitoring the system in order to assign
the time-out period that minimizes the risk of confusing normal network
delays with message losses.
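One simple way to operationalize this time-out selection is a percentile rule over observed delays: pick a high percentile of the measured round-trip times plus a safety margin, so that ordinary delays rarely trigger a loss verdict. The sample values, percentile, and margin below are assumptions for illustration, not figures from the paper.

```python
# Sketch: derive a client time-out from round-trip delays observed while
# running the system under realistic emulated delay/drop scenarios, so
# that normal network delays are rarely mistaken for message loss.

def timeout_from_samples(delays_ms, percentile=0.99, margin=1.5):
    """Take the given percentile of observed delays, times a safety margin."""
    ordered = sorted(delays_ms)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[idx] * margin

# Round-trip delays (ms) gathered during an emulation run -- assumed data.
observed = [80, 95, 100, 110, 120, 130, 150, 180, 240, 400]
timeout_ms = timeout_from_samples(observed)  # 400 * 1.5 = 600.0
```

A shorter margin lowers unresponsiveness at the cost of more false loss detections, mirroring the trade-off discussed for the CORBA approach above.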
3.3 SCALABILITY AND OVERHEAD
Some key issues have been considered in the design of our fault injection method. In
order to emulate a large multi-hop network, scalability and overhead need to be addressed.
The emulator has to scale well for networks with hundreds or more network nodes, while
maintaining a limit on the overhead of the emulation. The network
emulator is intended to be hosted on a LAN where every physical node is responsible for a clique of
virtual nodes. This reduces the chances of uncontrollable faults caused by the underlying
hardware or networking devices and allows accurate emulation of other traffic sources.
The design assumes that the WANs to be emulated are large enough that the overhead
introduced by the network emulator is negligible when compared to the actual network delays.
3.4 SYSTEM MONITORING (FAILURE DETECTION)
Here we discuss the ways in which distributed systems, and in particular Web service
systems, can fail, and the effects of each failure on the system.
Many system failure modes affecting distributed systems have been identified and
classified. For example, in [15] the system failure modes which can occur in CORBA
applications are identified, and in [11] the system failure modes affecting Web service
systems are classified. Based on these classifications, we summarize the
system failure modes of composed service systems as follows: 1) crash of a service
instance or its hosting environment, 2) hang of a service, 3) corruption of data coming into the system,
4) corruption of data coming out of the system, 5) duplication of messages, 6)
omission of messages, and 7) delay of messages.
This list is very general; every Web service system has its own specific failure modes. However,
the majority, if not all, of these failure modes can be classified into one of the above major
classes.
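The seven failure-mode classes above can be captured directly in code, for example to tag observations when analysing tool and client logs. This enumeration is an illustrative sketch, not part of NetFIS.

```python
from enum import Enum

# The seven composed-service failure-mode classes summarized above,
# as an enumeration a log analyser might use to label observed failures.
class FailureMode(Enum):
    CRASH = 1          # crash of a service instance / hosting environment
    HANG = 2           # service accepts work but never responds
    CORRUPT_IN = 3     # corruption of data coming into the system
    CORRUPT_OUT = 4    # corruption of data coming out of the system
    DUPLICATION = 5    # duplication of messages
    OMISSION = 6       # omission (loss) of messages
    DELAY = 7          # delay of messages

n_modes = len(FailureMode)
```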
The effect of the above failure modes depends on the capability of the system's fault tolerance
to detect them and prevent the system from deviating from its specified behaviour.
Corrupted data coming into the system should be detected by the middleware (or the Web service
application) and rejected, raising an appropriate error exception as a response. Corruption of
data coming out of the system should be handled by the middleware at the service client;
however, it can cause failure when it is not signalled by the system and propagates from the
middleware to the application level. In such a case a mechanism must be deployed at the
application level to deal with it.
Duplication and omission of messages should also be handled by the middleware layer of
the service, which should raise an appropriate exception. However, omission of messages from
client to service must be detected by the middleware of the client, since the service has no
mechanism for knowing the message had been sent and so cannot generate an exception.
If the application server crashes, it will not be able to accept the invocation and the client will get
an exception from the transport layer. If the application server hangs, it may either accept the
invocation but not respond, so the client does not know what is happening, or it may not be able
to accept the invocation at all, making it similar to a crash.
Delayed messages may cause timing faults. Timing faults should be detected by the middleware
on the service side when a response message is not received within a specified time slot. At the
service client, however, there is the problem of distinguishing between a lost request message and
a message experiencing a long network delay. Therefore, a reasonable time span should elapse
before a timeout exception is raised at the service client, to minimize this issue.
Because of the problems above, some failure modes are very difficult to detect. For
example, as discussed above, it is difficult in some cases to distinguish between crash and hang
failure modes when the tests are run by a service client that does not have access to the logs of
the application server where the Web service is running. Some other failure modes are also
difficult to detect when the tester has no access to the service client logs, for example omission
of requests, where the client request is lost before reaching the service provider. As a result,
a mechanism (such as a timeout mechanism) deployed at the service client to detect omission of
request messages cannot be tested.
To address these problems we rely on the logging mechanism of the proposed method and
on the logging of the client as well. The failure modes mentioned above can be observed by
analysing the tool's logs, detecting exceptions caused by data corruption
and detecting omission of messages from service to client. In addition, in this experiment we use
service client logs to detect omission-of-message failures from the client to the service and also the
effect of message delays. Moreover, as crashes of services or hosting environments and hangs
of services can be detected by receiving exceptions or by the time-out mechanisms applied at the
service client, client logs are also used to detect such failures in this experiment. For this
experiment our fault injection method has no means of detecting duplication-of-message failures;
this will be addressed in later research.
4. NETFIS IMPLEMENTATION
The fault injection method is designed to overcome the shortcomings of earlier fault-injection
tools. It minimizes the dependence on the underlying OS and distributed computing platform.
Although the implemented version of our testing method targets SOAP-based composed service
systems, the architecture is generally applicable to any SOA distributed computing
platform (e.g. the Open Grid Services Architecture) that allows the installation of proxies into the
communication subsystem. The emulator is transparent to the applications and requires no
modifications, recompiling or patching. NetFIS is also independent of any hosting environment,
for portability. Finally, NetFIS gives the applications under test the sense that other,
synthetic, applications are running during the test and sharing the networking resources, without a
perceivable emulation overhead. Our NetFIS architecture consists of three main
components, as follows:
4.1 FAULT INJECTION SERVICE
Our Fault Injection Service (FIS) is a Web service application that can generate a proxy Web
service for one or more Web service applications of the tested system. More importantly, it
injects the required faults into the system under test through its sub-components.
The FIS role depends on where it is deployed. At the client side, the FIS generates a proxy
WSDL from the WSDL of the actual Web service the client needs to call. As a result, all client
requests are processed by the FIS. The FIS then sends each intercepted request to its internal
sub-component, the Fault Injection Controller (FIC), to inject the required faults, and forwards
the request to another FIS deployed at the site where the actual requested Web service is
running. When the client-side FIS receives a response from the requested Web service, it
forwards it to the client. At the side of the actual Web service, the FIS role is different:
request messages received from the client-side FIS are forwarded to the actual Web service.
When the response is received, it is redirected to the internal FIC for fault injection, and the
response, if any, is then sent back to the FIS deployed at the client site.
In composed service systems, where a service acts as both a service and a client in the same
system, a single FIS can perform both of the roles described above. Because messages are
intercepted in this way, no modification needs to be made to the system under test.
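The two FIS roles can be summarised in a short sketch. The code below is purely illustrative: message transport and the FIC are reduced to simple function interfaces, and all names (`FaultInjectionService`, `Role`, and so on) are hypothetical stand-ins, not the actual NetFIS API.

```java
import java.util.function.UnaryOperator;

public class FaultInjectionService {
    public enum Role { CLIENT_SIDE, SERVICE_SIDE }

    private final Role role;
    private final UnaryOperator<String> fic;     // Fault Injection Controller
    private final UnaryOperator<String> nextHop; // peer FIS or actual service

    public FaultInjectionService(Role role,
                                 UnaryOperator<String> fic,
                                 UnaryOperator<String> nextHop) {
        this.role = role;
        this.fic = fic;
        this.nextHop = nextHop;
    }

    /** Handle an intercepted request and return the response to pass back. */
    public String handleRequest(String request) {
        if (role == Role.CLIENT_SIDE) {
            // Client side: inject faults into the request, then forward it
            // to the FIS deployed at the service site.
            String faulted = fic.apply(request);
            return faulted == null ? null : nextHop.apply(faulted);
        } else {
            // Service side: forward the request to the actual service, then
            // inject faults into the response before returning it.
            String response = nextHop.apply(request);
            return fic.apply(response);
        }
    }
}
```

Because the interception happens entirely inside the proxy, the client and the service code remain unchanged, as the text above notes.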
4.2 FAULT INJECTION CONTROLLER
The Fault Injection Controller (FIC) is a Java component residing in the FIS that is
responsible for controlling the testing tool and injecting the required faults into the
intercepted messages. Faults are injected into a message based on decisions coming from two
other components of the testing tool: the Network Emulation Service (NES) and the Script Fault
Model (SFM).
These two components can be turned on or off at the user's choice. The SFM is a Java script
written by the user; its function parameters can be modified within the value boundaries
specified by the tester. When both the SFM and the NES are activated, the SFM decision is
applied only if the NES decision is not to drop or corrupt the exchanged message. In other
words, the FIC gives network faults higher priority. The FIC also logs all exchanged SOAP
messages so that they can be analysed offline. The message, if it has not been dropped, is then
sent back to the interceptor to complete its journey to the corresponding FIS.
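The priority rule just described can be sketched as follows. The `NetworkDecision` enum, the single-bit `corrupt` stand-in, and the other names are our illustrative assumptions, not the actual FIC implementation.

```java
import java.util.Optional;
import java.util.function.UnaryOperator;

public class FaultInjectionController {
    public enum NetworkDecision { PASS, DELAY, DROP, CORRUPT }

    /** Returns the (possibly faulted) message, or empty if it was dropped. */
    public static Optional<String> inject(String message,
                                          NetworkDecision nes,
                                          UnaryOperator<String> sfm) {
        switch (nes) {
            case DROP:
                return Optional.empty();              // network fault wins
            case CORRUPT:
                return Optional.of(corrupt(message)); // network fault wins
            default:
                // NES lets the message through: the SFM decision applies.
                return Optional.of(sfm.apply(message));
        }
    }

    private static String corrupt(String message) {
        // Flip one bit of one character as a stand-in for random corruption.
        char[] c = message.toCharArray();
        if (c.length > 0) c[c.length / 2] ^= 1;
        return new String(c);
    }
}
```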
4.3 NETWORK EMULATOR SERVICE
The Network Emulator Service (NES) is a WAN emulator Web service that gives the applications
the sense of running over a LAN or WAN. It gives the applications under test the impression
that other, synthetic, applications are running during the test and sharing the networking
resources. In addition, it can inject network faults (loss, corruption, delay, reordering,
etc.). The emulator has been modified and extended to use only Web service technology: all
generated workload traffic and injected faults use SOAP messages.
The network emulator service system is deployed and exposed as composed Web services. It
consists of a centralized Network Controller Service (NCS), which controls the target emulated
network, and a set of NESs, one deployed at every node in the system, which emulate the nodes
of the target network. The NCS and the NESs communicate with each other, and with the FIC, by
exchanging SOAP messages as required.
4.4 SETTING UP THE TOOL
The first stage of setting up the tool is to build a description of the target emulated network
using a topology file, which also describes the traffic workload generated at each network
node. The next stage is to start the NCS and load the topology. The third stage is to start the
NESs for all network nodes. Thereafter, an FIS is started for every node, which generates a
proxy Web service for each Web service that needs to be called in the system under test.
Finally, the NCS is ordered to start the emulation, and the system to be tested and assessed is
started.
The topology file is a simple XML configuration file describing the target network topology. It
lists the nodes in the network together with their configuration. In addition, a trace file
must be provided for each node describing its traffic workload. The trace file contains packet
counts per unit time and can be created manually, captured from real traffic traces, or
produced using network traffic modelling algorithms. The NCS, which is a Web service itself, is
then started. The NCS is used by the NESs to obtain node configuration parameters and the
locations of neighbouring NESs. Each node of the target emulated network is represented by one
FIS and one NES.
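A topology file along these lines might look as follows; the element and attribute names are hypothetical, since the actual NetFIS schema is not shown here.

```xml
<!-- Hypothetical topology file: nodes with per-node trace files and
     link parameters (bandwidth in Kb/s, propagation delay in ms). -->
<topology>
  <node id="A" trace="nodeA.trace">
    <link to="B" bandwidth="4000" delay="2"/>
  </node>
  <node id="B" trace="nodeB.trace">
    <link to="A" bandwidth="4000" delay="2"/>
  </node>
  <!-- further nodes and links follow the same pattern -->
</topology>
```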
Each FIS at the client side must be provided with an XML file containing the URL(s) of the Web
service(s) under test, which it uses to generate the Web service proxy that the client calls
instead of the actual Web service under test. The XML file also contains the URL of the NES
emulating the same node.
Since the tool requires no modification to the system under test, the only change for the
client is to call the proxy service generated by the FIS instead of the actual Web service.
5. TEST CASE
In this section we describe an experiment that injects a number of network-related faults
(delaying, dropping and randomly corrupting SOAP messages) into a Bioinformatics Web service
[19]. We deployed the WS-Mediator [4] at the client side to invoke three identical
Bioinformatics Web services [19] simultaneously via NetFIS. The performance and the fault
tolerance protocols of the system under test were examined, and the overheads introduced by
our tool were also measured. The results obtained from the log files are analysed and
discussed. The setup of the experiment is explained in the next section.
5.1 EXPERIMENT SETUP
The topology of the emulated target network is a four-node network, as shown in Figure 1. In
the experiments, various types of faults were injected into the emulated network, and the
Network Emulation Service was enabled to generate synthetic traffic through the network.
Figure 1. Network topology
For a real application deployed on a WAN, there is significant variation in performance due to
other traffic occupying the network resources. NetFIS supports various simulated traffic
models including, but not limited to, self-similar, random and constant traffic, and can even
replay previously captured traffic traces. Since studies of network traffic suggest that it is
self-similar in nature [20], we chose to emulate continuous self-similar traffic in our
network, with a mean packet rate of 30 packets/second on each link and a self-similarity
(Hurst) value of 0.8. The packet size distribution follows measurements taken from Internet
backbones [21]. The link utilization varies with the generated packet sizes and the link
configuration.
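One common way to synthesize approximately self-similar traffic is to superpose ON/OFF sources whose ON and OFF periods are heavy-tailed (Pareto) with shape alpha = 3 - 2H, so H = 0.8 gives alpha = 1.4. The sketch below illustrates that idea only; the parameterisation (number of sources, per-source peak rate) and all names are our assumptions, not the NetFIS traffic generator.

```java
import java.util.Random;

public class SelfSimilarTraffic {
    /** Pareto-distributed sample with shape alpha and minimum xm. */
    static double pareto(Random rnd, double alpha, double xm) {
        return xm / Math.pow(1.0 - rnd.nextDouble(), 1.0 / alpha);
    }

    /** Packet counts per second from `sources` superposed ON/OFF sources. */
    public static int[] generate(long seed, int seconds, int sources,
                                 double hurst, int peakRate) {
        double alpha = 3.0 - 2.0 * hurst;   // H = 0.8 -> alpha = 1.4
        Random rnd = new Random(seed);
        int[] counts = new int[seconds];
        for (int s = 0; s < sources; s++) {
            int t = 0;
            boolean on = rnd.nextBoolean();
            while (t < seconds) {
                // Heavy-tailed period length, at least one second.
                int len = (int) Math.ceil(pareto(rnd, alpha, 1.0));
                if (on) {
                    for (int i = t; i < Math.min(t + len, seconds); i++)
                        counts[i] += peakRate;   // source emits while ON
                }
                t += len;
                on = !on;
            }
        }
        return counts;
    }
}
```

With, say, 10 sources emitting 6 packets/s each and ON roughly half the time, the aggregate mean is in the region of the 30 packets/s used in the experiment.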
5.2 NETWORK CONFIGURATION
We measure the performance of the protocols in four network configuration scenarios:
5.2.1. LAN CONFIGURATION
The LAN configuration was used to measure the base performance of the target system without
deploying NetFIS. The Web service client issuing the requests ran on machine A in Figure 2, and
the three services participating in the test ran on machines B, C and D respectively.
5.2.2 FAST WAN CONFIGURATION
The propagation delays are fixed at 2 ms, which is typical of inter-city links within the UK.
The bandwidth of each link is 4 Mb/s. Given this bandwidth and the simulated traffic detailed
in the previous section, the average utilization of each network link ranges between 10% and
20%.
5.2.3 SLOW WAN CONFIGURATION
This configuration represents the other extreme network environment: all services are located
at geographically distant sites connected by slow links. The propagation delays are fixed at
50 ms, which is typical of long-distance international links (e.g. between Newcastle, England
and Tripoli, Libya). The bandwidth of each link is 512 Kb/s. Given this bandwidth and the
traffic, the average utilization of each network link ranges between 20% and 40%.
5.2.4 HETEROGENEOUS WAN CONFIGURATION
This configuration represents a case between the two extremes. One of the services was placed
at a distant location (connected by slow WAN links) while the other servers and the client were
closer to each other (connected by fast WAN links). The links and loads used here match those
of the slow and fast WAN configurations.
5.3 CLIENT CONFIGURATION
We developed a special client application implementing several test cases corresponding to the
fault injection configurations applied during the experiment. The client application is built
on the WS-Mediator framework (as shown in Figure 2) and utilizes the built-in FTMs and logging
mechanisms of the framework. The WS-Mediator claims to offer comprehensive off-the-shelf FTMs
to cope with a variety of typical Web service application scenarios. It also includes a
monitoring mechanism that benchmarks a collection of candidate Web services to be used during
service composition and generates their dependability metadata for dynamic composition
reasoning. The framework allows the client to submit a number of candidate Web services for
service composition and to define a reconfiguration policy specifying how the candidates are
used, thus reducing the development cost of a dependable client application.
Figure 2. A system being tested by NetFIS
In our Web service client application, we chose the N-version programming mechanism offered by
the WS-Mediator to invoke the NetFIS proxies simultaneously and select the first valid response
to a client's request. During the invocations, all request and response messages are logged
using the built-in monitoring mechanism of the WS-Mediator. With these settings, the complexity
and processing overheads of the WS-Mediator are minimized. It is worth noting that the classic
N-version programming approach normally requires voting for result validation.
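The "first valid response" pattern can be sketched with standard Java concurrency utilities: `ExecutorService.invokeAny` returns the result of the first task that completes without an exception and cancels the rest. The stub `Callable`s here stand in for the actual Web service invocations; this is an illustration of the pattern, not the WS-Mediator internals.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FirstValidResponse {
    /** Returns the first successful result, or throws if all fail/time out. */
    public static String invoke(List<Callable<String>> replicas,
                                long timeoutMs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(replicas.size());
        try {
            // invokeAny returns as soon as one task succeeds; remaining
            // tasks are cancelled, mirroring "choose the first valid
            // response" from the replicated services.
            return pool.invokeAny(replicas, timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pool.shutdownNow();
        }
    }
}
```

With one failing replica, one fast healthy replica and one slow one, the fast healthy reply is delivered and the failure is masked from the caller.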
However, in Web service applications, similar Web services may return semantically identical
responses that are not usually exact matches. The WS-Mediator framework allows policy-based
response mapping; however, since the NetFIS tool injects random faults into the SOAP messages,
with random timing, the response mapping and voting mechanisms are of little value in our test
cases. Nevertheless, late responses are also logged for further analysis.
Apart from the fault tolerance mechanisms deployed in the client application, the functionality
of the client is fairly simple: it invokes the three replicated Web services repeatedly, with
or without NetFIS. The number of invocations and the delay interval between invocations can be
configured dynamically.
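The client's invocation loop is simple enough to sketch directly; the invocation count and inter-request delay are the two configurable parameters mentioned above, and the names are illustrative.

```java
import java.util.function.Supplier;

public class RepeatingClient {
    /** Invoke `call` `count` times, pausing `intervalMs` between calls. */
    public static int run(Supplier<String> call, int count, long intervalMs)
            throws InterruptedException {
        int ok = 0;
        for (int i = 0; i < count; i++) {
            if (call.get() != null) ok++;        // count successful responses
            if (i < count - 1) Thread.sleep(intervalMs);
        }
        return ok;
    }
}
```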
5.4 EXPERIMENTAL RESULTS
The experiment comprises several test cases for validating the NetFIS approach. All events
(SOAP requests and responses, injected faults, round-trip response times and exception
messages) were logged during the experiment. The logs generated by the client application and
by NetFIS were used for quantitative result analysis.
Section 1: NetFIS emulates different types of network while simulating various traffic loads;
details of the settings are shown in Table 1. A preliminary assessment test was carried out to
check the network condition and the Web services before performing the other test cases. The
client invoked the three Web services directly 1000 times (interval: 1000 ms) without NetFIS.
The overall maximum, minimum and average round-trip response times (RTT) observed by the client
application were 102 ms, 8 ms and 57 ms respectively.
Figure 3. Client invocation RTT
Figure 3 shows the RTT to the three Web services as logged by the WS-Mediator. Interestingly,
the three Web services had much longer RTTs at the very beginning of the test, suggesting that
the RTT was later optimized by some kind of caching mechanism employed in the Web services. It
is also worth noting that, although the three replicated Web services have identical hardware,
operating system, middleware, etc., WS3 consistently had longer delays than WS1 and WS2.
However, the RTT variations within and between the Web services are insignificant compared with
the delays injected by our toolkit, and can therefore be safely ignored. The average RTTs of
WS1, WS2 and WS3 were 10 ms, 11 ms and 12 ms. The average client RTT was slightly below 10 ms,
because the client always uses the quickest response from the three Web services.
The preliminary test results provided benchmark information on the physical LAN and the Web
services involved in the experiment. NetFIS was then added between the client and the Web
services, and the client made 1000 invocations in each setting.
Table 1. Response time overhead

Network       Bandwidth (Kb/s)   Max (ms)   Min (ms)   Average (ms)
LAN           N/A                102        8          57
Fast WAN      4000               488        35         59
Slow WAN      512                698        110        190
Hetero. WAN   Fast and Slow      870        99         104
The statistics in Table 1 clearly indicate the effectiveness of emulating the three different
networks between the system components. The average RTT when emulating the Fast WAN, without
injecting drop and error faults, is 59 ms, whereas the average RTT on the LAN without our tool
is 57 ms; the delay overhead introduced by our toolkit into the tested system is therefore
clearly insignificant. The difference between the average response times of the Fast WAN and
the Slow WAN is, however, substantial. It reflects the configurations of the two emulated
networks, specifically their propagation delays and bandwidths: 2 ms and 4 Mb/s for the Fast
WAN versus 50 ms and 512 Kb/s for the Slow WAN. For the Heterogeneous WAN, the average response
time falls between those of the Fast and Slow WANs, since this configuration combines the other
two.
Figure 4. RTT of the test cases (Web service 1)
Figure 4 shows the RTT of WS1 (monitored at the WS-Mediator client) under different drop and
error rates and network conditions. The 'injection modes' axis represents the invocation RTT of
each injection mode shown in the figure legend.
The overall average RTT of the fast network is much smaller than that of the other two network
conditions, and the figure clearly shows greater RTT variation in the heterogeneous network
than in the slow network. The timeout value is capped at 3000 ms in the figure to make the
plots more readable.
Section 2: Our tool injects different types of faults into the messages exchanged between the
client and the Web services under different emulated network conditions. The combinations of
injected faults are detailed in Table 2. The client invoked the Web services via NetFIS 1000
times in each testing scenario.
Table 2. Drop and random error faults injected

Network       Injected drop rate                 Injected error rate
emulated      Target %   Achieved (messages)     Target %   Achieved (messages)
Fast WAN      0.1        1                       0.1        1
              1          9                       1          10
Slow WAN      0.1        1                       0.1        1
              1          10                      1          10
Hetero. WAN   0.1        1                       0.1        1
              1          9                       1          10
Table 2 summarizes the statistics of the results in each test case scenario. The results show
that NetFIS coped well with the settings and injected the expected faults correctly. When a
"drop" fault was injected, the client threw a "timeout" exception after waiting 10 seconds,
indicating that the response was lost.
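The rate-based injection in Table 2 can be sketched as an independent Bernoulli trial per message: each message is dropped or corrupted with the target probability, so a 0.1% target over 1000 messages yields about one injected fault, matching the achieved counts above. The class and the corruption stand-in are illustrative, not the NetFIS implementation.

```java
import java.util.Random;

public class RateInjector {
    private final Random rnd;
    private final double dropRate;    // e.g. 0.001 for a 0.1% target
    private final double errorRate;

    public RateInjector(long seed, double dropRate, double errorRate) {
        this.rnd = new Random(seed);
        this.dropRate = dropRate;
        this.errorRate = errorRate;
    }

    /** Returns null when dropped, a corrupted copy on error, else the message. */
    public String apply(String message) {
        if (rnd.nextDouble() < dropRate) return null;        // drop fault
        if (rnd.nextDouble() < errorRate)
            return message.replace('<', '#'); // stand-in for random corruption
        return message;
    }
}
```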
When errors were injected, exception messages such as "Cannot find dispatch method for
{http://webservices.calibayes.ncl.ac.uk/}getAvailableSimMethods" were thrown by the client,
indicating that corrupted SOAP messages were received and that the JAX-WS framework was unable
to deal with the responses correctly. Figures 4 and 5 plot the results of some test scenarios
and clearly demonstrate the effectiveness of the approach: NetFIS simulates real-world network
conditions and faults in order to support robust client application development (in this case,
using the WS-Mediator).
Figure 5. RTT of the test cases (Client)
Figure 5 compares the RTTs of the three Web services with the responses delivered to the client
through the WS-Mediator. The 'clients' axis represents the invocation RTT monitored at each
client thread (dealing with WS1, WS2 and WS3 respectively) and at the client application, which
deploys the WS-Mediator to process the results received by the client threads. We chose the 1%
drop rate injection scenario to illustrate the comparison, since this test scenario affects the
RTT most. Although the faults were injected arbitrarily into the three Web services, the
N-version programming FTM in the WS-Mediator successfully handled the injected faults in most
test cases and masked the reliability issues from the client; the client application only threw
exceptions when all three Web services failed simultaneously.
6. CONCLUSION
We have introduced a testing methodology and built a toolkit that can inject faults into any
Web service application without any modification to the application code. There is great
flexibility in both the number and type of faults that can be injected into the system under
test. Furthermore, we can control the emulated network and add background traffic.
The network emulation may not exactly mirror a real-world network environment; however, it is a
significant advance over testing a system on a single machine or a LAN. In particular, traffic
sampled from a real network workload can be used in the network emulator, as well as
self-similar traffic patterns.
Our experiment has clearly demonstrated the network emulation and fault injection capabilities
of NetFIS, and has shown how to deploy and use the functionalities of our toolkit for testing
the FTMs of a client application. In this experiment, the WS-Mediator demonstrated its FTM
capabilities, with service diversity and dynamic service composition reconfiguration.
In future work, we will concentrate on two main issues. First, the toolkit cannot yet handle
callback-style asynchronous invocations; since the Dispatch model has become standard, support
for callback invocations is left for future work. Second, as the goal of WS-ReliableMessaging
[22] is to enable applications to exchange messages simply, reliably and efficiently at the
middleware level, even in the face of application, platform or network failure, our future plan
is to test and assess NetFIS with WS-RM-enabled Web service systems.
REFERENCES
[1] M.-C. Hsueh, T. K. Tsai, and R. K. Iyer, Fault Injection Techniques and Tools. IEEE Computer, vol.
30, pp 75-82, 1997.
[2] Papazoglou, M., & van den Heuvel, W. J. (2006). Service-Oriented Design and Development
Methodology. International Journal on Web Engineering and Technology, 2(4), 412–442.
[3] Z.Zheng and M. R. Lyu. Ws-dream: A distributed reliability assessment mechanism for
web services. In DSN, pages 392–397, Anchorage, Alaska, USA, June 2008.
[4] Y. Chen and A. Romanovsky. Improving the dependability of web services integration. IT
Professional, 10(3):29–35, 2008.
[5] S. Han, K. G. Shin, and H. A. Rosenberg, "DOCTOR: An IntegrateD SOftware Fault InjeCTiOn
EnviRonment for Distributed Real-time Systems," International Computer Performance
and Dependability Symposium, Erlangen; Germany, pp.204-213, 1995.
[6] S. Dawson, F. Jahanian, T. Mitton and T.-L. Tung, Testing of Fault- Tolerant and Real-Time
Distributed Systems via Protocol Fault Injection, in Proc. 26th Int. Symp. on Fault-Tolerant
Computing (FTCS-26), Sendai, Japan, 1996, pp.404-414.
[7] de Almeida, L.F. and S.R. Vergilio. Exploring Perturbation Based Testing for Web Services.
in Web Services, 2006. ICWS '06. International Conference on. 2006.
[8] Siblini, R. and N. Mansour. Testing Web services. in Computer Systems and Applications,
2005. The 3rd ACS/IEEE International Conference on. 2005.
[9] Sneed, H.M. and H. Shihong. WSDLTest - A Tool for Testing Web Services. in Web Site
Evolution, 2006. WSE '06. Eighth IEEE International Symposium on. 2006.
[10] Jeff, O. and X. Wuzhi, Generating test cases for web services using data perturbation.
SIGSOFT Softw. Eng. Notes, 2004. 29(5): p. 1-10.
[11] Looker, N. and J. Xu. Assessing the Dependability of SOAP RPC-Based Web Services by Fault
Injection. in Object-Oriented Real-Time Dependable Systems, 2003. WORDS 2003 Fall.
The Ninth IEEE International Workshop on. 2003.
[12] Zhang, J. and R.G. Qiu. Fault injection-based test case generation for SOA-oriented software. in 2006
IEEE International Conference on Service Operations and Logistics, and Informatics, SOLI 2006.
2006.
[13] Looker, N., M. Munro, and J. Xu. WS-FIT: A tool for dependability analysis of web services. in
Proceedings - International Computer Software and Applications Conference. 2004. Hong Kong,
China.
[14] F. Bessayah, A. Cavalli, W. Maja, E. Martins, A. Willik Valenti, A Fault Injection Tool for
Testing Web Services Composition, Proc. 5th Testing Academic and Industrial Conference --
practice and research techniques (TAIC PART 2010), Windsor, UK, (LNCS 6303), 2010, pp. 137-
146.
[15] E. Marsden, J.-C. Fabre, J. Arlat, “Dependability of CORBA Systems: Service Characterization by
Fault Injection”, 21st IEEE Int. Symp. on Reliable Distributed Systems (SRDS-2002), (Osaka, Japan),
pp.276-285, IEEE CS Press, 2002.
[16] P. Reinecke, A. P. A. van Moorsel, and K. Wolter. The Fast and the Fair: A Fault-Injection-Driven
Comparison of Restart Oracles for Reliable Web Services. In QEST ’06: Proceedings of the
3rd International Conference on the Quantitative Evaluation of Systems, pages 375–384,
Washington, DC, USA, 2006. IEEE Computer Society.
[17] Khaled Farj, Adel Smeda, “A Methodology for Evaluating Fault Tolerance in Web Service
Applications”, Proceedings of the 15th International Conference on Applied Computer Science (ACS
‘15), p. 188-191, Konya, Turkey, May 20-22, 2015.
[18] Mohammad Alsaeed, Neil A. Speirs, “A Wide Area Network Emulator for CORBA
Applications”, Proceedings of the 10th IEEE International Symposium on Object and Component-
Oriented Real-Time Distributed Computing, p.359-364, May 07-09, 2007.
[19] Chen, Y., Lawless, C., Gillespie, C.S., Wu, J., Boys, R.J., Wilkinson, D.J., 2009, CaliBayes and
BASIS: integrated tools for the calibration, simulation and storage of biological simulation models.
Briefings in Bioinformatics.
[20] Mark E. Crovella and Azer Bestavros, “Self-similarity in World Wide Web traffic: Evidence and
possible causes,” IEEE/ACM Transactions on networking, vol. 5, no. 6, pp. 835-846, Dec. 1997.
[21] C. Fraleigh and S. Moon, et al. “Packet-Level Traffic Measurements from the Sprint IP Backbone”,
In: IEEE Network, Volume 17, Number 6: November 2003.
[22] OASIS. (2010). 'OASIS Web Services Reliable Messaging (WSRM) TC'. [Retrieved: 30 Jun 2010];
Available from: http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsrm.
AUTHORS
Dr. Khaled Farj is a specialist in distributed systems and Web and cloud applications. Khaled
gained a PhD in Cloud Computing and an MSc in Computing Science at Newcastle University (UK),
as well as a Diploma in e-Commerce from UMIST (UK). He is involved in numerous professional and
academic activities related to distributed systems and government systems. He was the Technical
Project Manager for the Libyan Customs IT team, on a project automating all Libyan ports based
on the ASYCUDA application. He then worked as Head of the IT Department at the Ministry of
Higher Education (Libya), and was also assigned as the Ministry Coordinator of the national
e-government long-term strategic project plan. In addition, he is involved in the academic
higher education sector, teaching MSc modules and supervising student projects and research.
His research interests include cloud and Web computing, architectures for distributed Web
applications, enterprise computing, middleware and Web services.
Dr. Adel Smeda is an associate professor who graduated from the Université de Nantes, France.
He is a staff member of the University of Al-Jabal Al-Gharbi, Libya, an adjunct professor at
the Libyan Academy in Tripoli, and an active member of the LINA research centre of the
Université de Nantes. His areas of interest include software architecture, cloud computing,
modelling and e-technologies, and he has published more than 40 articles in these areas.