The document discusses using a Colored Petri Net (CPN) model to generate test sequences for validating web service compositions. It proposes an algorithm that generates a test suite covering all possible paths without redundancy. The CPN model, previously used to verify the composition design, is reused for test design. The technique was applied to an airline reservation system and test sequences were evaluated against three coverage criteria.
Impact of Packet Inter-arrival Time Features for Online Peer-to-Peer (P2P) Cl... (IJECEIAES)
Identification of bandwidth-heavy Internet traffic is important for network administrators to throttle high-bandwidth application traffic. Flow-feature-based classification has previously been proposed as a promising method to identify Internet traffic based on packet statistical features, and the selection of statistical features plays an important role in accurate and timely classification. In this work, we investigate the impact of the packet inter-arrival time feature on online P2P classification in terms of accuracy, Kappa statistic, and time. Simulations were conducted using available traces from the University of Brescia, the University of Aalborg, and the University of Cambridge. Experimental results show that including inter-arrival time (IAT) as an online feature increases simulation time and decreases classification accuracy and the Kappa statistic.
Threshold benchmarking for feature ranking techniques (journalBEEI)
In prediction modeling, the choice of features from the original feature set is crucial for accuracy and model interpretability. Feature ranking techniques rank features by their importance, but there is no consensus on where to cut off the ranked list. Thus, it becomes important to identify a threshold value or range for removing the redundant features. In this work, an empirical study is conducted to identify a threshold benchmark for feature ranking algorithms. Experiments are conducted on the Apache Click dataset with six popularly used ranker techniques and six machine learning techniques, to deduce a relationship between the total number of input features (N) and the threshold range. The area-under-the-curve analysis shows that ≃ 33-50% of the features are necessary and sufficient to yield a reasonable performance measure, with a variance of 2%, in defect prediction models. Further, we find that log2(N) as the ranker threshold value represents the lower limit of the range.
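The reported relationship between N and the cut-off can be sketched as a small helper; the function name and the rounding convention are illustrative, not taken from the paper:

```python
import math

def ranker_threshold_range(n_features):
    """Candidate cut-offs for a ranked feature list: log2(N) as the
    reported lower limit, plus the ~33-50% band the study finds
    necessary and sufficient. Returns (lower, band_low, band_high)."""
    lower = max(1, round(math.log2(n_features)))
    band_low = max(1, round(0.33 * n_features))
    band_high = max(1, round(0.50 * n_features))
    return lower, band_low, band_high
```

For a 64-feature dataset this suggests keeping at least log2(64) = 6 features, with roughly 21-32 expected to suffice for near-peak performance.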
An Adjacent Analysis of the Parallel Programming Model Perspective: A Survey (IRJET Journal)
This document provides an overview and analysis of parallel programming models. It begins with an abstract discussing the growing demand for parallel computing and challenges with existing parallel programming frameworks. It then reviews several relevant studies on parallel programming models and architectures. The document goes on to describe several key parallel programming models in more detail, including the Parallel Random Access Machine (PRAM) model, Unrestricted Message Passing (UMP) model, and Bulk Synchronous Parallel (BSP) model. It discusses aspects of each model like architecture, communication methods, and associated cost models. The overall goal is to compare benefits and limitations of different parallel programming models.
Approximation of regression-based fault minimization for network traffic (TELKOMNIKA JOURNAL)
This research compares three distinct approaches to computer network traffic prediction: the traditional stochastic gradient descent (SGD), which uses a few random samples instead of the complete dataset for each iterative calculation; the gradient descent algorithm (GDA), a well-known optimization approach in deep learning; and the proposed method. The network traffic is computed from the traffic load (data and multimedia) of the computer network nodes via the Internet. SGD is a modest iteration but can converge to suboptimal solutions. GDA is more complicated and can be more accurate than SGD, but its parameters, such as the learning rate, the dataset granularity, and the loss function, are difficult to tune. Network traffic estimation helps improve performance and lower costs for various applications, such as adaptive rate control, load balancing, quality of service (QoS), fair bandwidth allocation, and anomaly detection. The proposed method determines optimal parameter values by simulation, computing the minimum of the specified loss function in each iteration.
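The SGD variant the abstract describes, a few random samples per update instead of the full dataset, can be sketched on a toy linear traffic model; the data, learning rate, and batch size below are illustrative assumptions, not the paper's setup:

```python
import random

def sgd_step(w, b, batch, lr):
    """One stochastic-gradient step on squared loss for y ~ w*x + b,
    computed from a small random batch instead of the full dataset."""
    gw = gb = 0.0
    for x, y in batch:
        err = (w * x + b) - y
        gw += 2 * err * x / len(batch)
        gb += 2 * err / len(batch)
    return w - lr * gw, b - lr * gb

# Synthetic, noise-free "traffic load" samples following y = 3x + 1.
random.seed(0)
data = [(x / 10, 3 * (x / 10) + 1) for x in range(11)]
w = b = 0.0
for _ in range(3000):
    w, b = sgd_step(w, b, random.sample(data, 3), lr=0.2)
```

Because each update sees only three samples, individual steps are noisy, yet the iterates still settle near the true slope and intercept.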
Probability Density Functions of the Packet Length for Computer Networks With... (IJCNCJournal)
The research on Internet traffic classification and identification, with application to the prevention of attacks and intrusions, has increased considerably in the past years. Strategies based on statistical characteristics of the Internet traffic, using parameters such as packet length (size) and inter-arrival time and their probability density functions, are popular. This paper presents a new statistical model for packet length, which shows that it can be modeled using a probability density function that involves a normal or a beta distribution, according to the traffic generated by the users. The proposed functions have parameters that depend on the type of traffic and can be used as part of an Internet traffic classification and identification strategy. The models can be used to compare, simulate, and estimate computer network traffic, as well as to generate synthetic traffic and estimate the packet-processing capacity of Internet routers.
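A minimal sketch of the two candidate densities: a normal PDF evaluated from sample moments, and a method-of-moments Beta fit on lengths rescaled by an assumed 1500-byte MTU. The paper's actual fitting procedure may differ:

```python
import math
import statistics

def normal_pdf(x, mu, sigma):
    """Gaussian density, one of the two candidate packet-length models."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def beta_moments(lengths, lo=0.0, hi=1500.0):
    """Method-of-moments Beta fit on packet lengths rescaled to [0, 1];
    a 1500-byte Ethernet MTU is assumed as the upper bound."""
    xs = [(s - lo) / (hi - lo) for s in lengths]
    m = statistics.fmean(xs)
    v = statistics.pvariance(xs)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common  # (alpha, beta)

# Typical bimodal mix of small control packets and full-size data packets.
lengths = [40, 52, 60, 576, 1400, 1500, 1500, 64, 1500, 52]
alpha, beta = beta_moments(lengths)
```

On such a bimodal mix, both Beta shape parameters come out below 1, giving the U-shaped density characteristic of packet-length data.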
Exploiting 2-Dimensional Source Correlation in Channel Decoding with Paramete... (IJECEIAES)
The document describes a proposed joint source-channel coding (JSCC) system that exploits 2-dimensional source correlation in channel decoding with parameter estimation. The system uses a modified Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm at the decoder to exploit source correlation along the rows and columns of a 2D source. A parameter estimation technique based on the Baum-Welch algorithm is used jointly with the decoder to estimate the source correlation parameters at the receiver, since these parameters are not always known in practice. Simulation results show that the proposed scheme, which performs joint decoding and parameter estimation, comes very close to an ideal 2D JSCC system with perfect knowledge of the source correlation parameters.
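For intuition about what the receiver must estimate, here is the fully observed maximum-likelihood analogue for a first-order binary Markov source; the actual system instead runs Baum-Welch jointly with the decoder's soft outputs:

```python
def estimate_stay_probability(bits):
    """Maximum-likelihood estimate of p = P(b_t == b_{t-1}) for a
    first-order binary Markov source -- the kind of row/column
    correlation parameter that is unknown at the receiver."""
    stays = sum(a == b for a, b in zip(bits, bits[1:]))
    return stays / (len(bits) - 1)
```

A highly correlated sequence such as 000111 yields a stay probability of 0.8, since four of its five transitions repeat the previous bit.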
Comparative analysis of the performance of various active queue management te... (IJECEIAES)
This paper demonstrates the robustness of active queue management techniques to varying load, link capacity and propagation delay in a wireless environment. The performances of four standard controllers used in Transmission Control Protocol/Active Queue Management (TCP/AQM) systems were compared. The active queue management controllers were the Fixed-Parameter Proportional Integral (PI), Random Early Detection (RED), Self-Tuning Regulator (STR) and the Model Predictive Control (MPC). The robustness of the congestion control algorithm of each technique was documented by simulating the varying conditions using MATLAB® and Simulink® software. From the results obtained, the MPC controller gives the best result in terms of response time and controllability in a wireless network with varying link capacity and propagation delay. Thus, the MPC controller is the best bet when adaptive algorithms are to be employed in a wireless network environment. The MPC controller can also be recommended for heterogeneous networks where the network load cannot be estimated.
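Of the four controllers compared, RED has the simplest closed form; a minimal sketch of its drop-probability ramp, with illustrative threshold values rather than the paper's settings:

```python
def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED drop curve: no drops below min_th, forced drop at or
    above max_th, and a linear ramp up to max_p in between."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

Halfway between the thresholds the drop probability is half of max_p, which is what makes RED's response to congestion gradual rather than tail-drop's all-or-nothing behavior.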
This document provides an analysis of video streaming on multiple devices wirelessly. It discusses the research methodology, including the inductive approach used and resources required like computers, wireless transmitters, and surveillance equipment. It describes testing the streaming performance of the Buffer and Rate Optimization for Streaming (BROS) algorithm. The document outlines problems that may be encountered with wireless networks like inability to connect or slow performance. It provides references to support the analysis.
Enhancement of student performance prediction using modified K-nearest neighbor (TELKOMNIKA JOURNAL)
The traditional K-nearest neighbor (KNN) algorithm uses an exhaustive search over the complete training set to predict a single test sample. This procedure slows the system down and consumes more time on huge datasets. The selection of a class for a new sample depends on a simple majority voting system that does not reflect the varying significance of different samples (i.e. it ignores the similarities among samples), and it can also lead to a misclassification problem when two classes tie for the majority. To address these issues, this work adopts a combination of a moment descriptor and KNN to optimize sample selection, based on the observation that classifying the training samples before the search takes place can speed up and improve the predictive performance of the nearest-neighbor method. The proposed method is called fast KNN (FKNN). The experimental results show that FKNN reduces the running time of the original KNN by 75.4% to 90.25% and improves classification accuracy by 20% to 36.3% on three types of student datasets used to predict automatically whether a student will pass or fail the exam.
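For reference, the exhaustive-search, majority-vote baseline that FKNN improves on can be sketched as follows; the moment-descriptor pre-classification itself is not reproduced here, and the toy records are illustrative:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain exhaustive K-NN with simple majority voting -- the baseline
    whose full-training-set search and unweighted vote FKNN is designed
    to improve. `train` is a list of (feature_vector, label) pairs."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: sq_dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2-D student feature vectors with pass/fail labels.
train = [((1.0, 1.0), "pass"), ((1.2, 0.9), "pass"),
         ((5.0, 5.0), "fail"), ((5.1, 4.8), "fail")]
```

Note that every prediction scans and sorts the full training set; this is exactly the cost that grows with dataset size and motivates pre-classifying the samples.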
Improved fuzzy c-means algorithm based on a novel mechanism for the formation... (TELKOMNIKA JOURNAL)
The clustering approach is considered a vital method for many fields such as machine learning, pattern recognition, image processing, information retrieval, data compression, computer graphics, and others. Similarly, it has great significance in wireless sensor networks (WSNs) by organizing the sensor nodes into specific clusters, consequently saving energy and prolonging network lifetime, which depends entirely on the sensors' batteries and is considered a major challenge in WSNs. Fuzzy c-means (FCM) is one of the classification algorithms widely used in the literature for this purpose in WSNs. However, given the random manner in which nodes are deployed, on certain occasions this algorithm is forced to produce unbalanced clusters, which adversely affects the lifetime of the network. To overcome this problem, a new clustering method called FCM-CM has been proposed, improving the FCM algorithm to form balanced clusters under random node deployment. The improvement is made by integrating FCM with a centralized mechanism (CM). The proposed method is evaluated on four new parameters. Simulation results show that the proposed algorithm is superior to FCM, producing balanced clusters and improving the balance of the intra-distances of the clusters, which leads to energy conservation and a prolonged network lifespan.
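The FCM membership update at the heart of the base algorithm can be sketched for 1-D points; the centralized mechanism (CM) that FCM-CM adds to balance the clusters is not reproduced here:

```python
def fcm_memberships(points, centers, m=2.0):
    """One membership-update step of fuzzy c-means for 1-D points:
    u_ij = 1 / sum_k (d_ij / d_kj) ** (2 / (m - 1)), where d_ij is the
    distance from point j to center i and m is the fuzzifier."""
    memberships = []
    for p in points:
        d = [max(abs(p - c), 1e-12) for c in centers]  # avoid divide-by-zero
        row = [1.0 / sum((d[i] / d[k]) ** (2 / (m - 1))
                         for k in range(len(centers)))
               for i in range(len(centers))]
        memberships.append(row)
    return memberships

u = fcm_memberships([0.0, 10.0, 5.0], [0.0, 10.0])
```

A point sitting on a center gets membership near 1 for that cluster, while a point midway between two centers is split 50/50; each row always sums to 1.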
A surrogate-assisted modeling and optimization method for planning communicat... (Power System Operation)
The development of industrial informatization stimulates the implementation of cyber-physical systems (CPS) in distribution networks. As a close integration of the power network infrastructure with the cyber system, the research of design methodologies and tools for CPS has gained widespread interest considering their heterogeneous characteristics. To address the problem of planning the communication system in a distribution network CPS, this paper first proposes an optimization model utilizing topology potential equilibrium, in which the mutual influence of nodes and the spatial distribution of the topological structure are mathematically described. Then, facing a complex optimization problem in binary space with multiple constraints, a novel binary bare bones fireworks algorithm (BBBFA) with a surrogate-assisted model is proposed. In the proposed algorithm, the surrogate model, a back-propagation neural network, replaces the complex constraints by incremental approximation of the nonlinear constraint functions, reducing the difficulty of finding the optimal solution. The communication system planning of the IEEE 39-bus power system, which comprises four terminal units, was optimized. Considering different degrees of heterogeneity, four programs were involved in the planning for practical considerations. The simulation results of the proposed algorithm were compared with other representative methods, demonstrating the effective performance of the proposed method in solving communication system planning optimization problems for distribution networks.
A Path-Oriented Automatic Random Testing Based on Double Constraint Propagation (ijseajournal)
This document summarizes a research paper that proposes a path-oriented automatic random testing method based on double constraint propagation. It begins with an introduction to random testing and discusses its limitations. It then describes an existing path-oriented random testing (PRT) approach and identifies limitations where some invalid inputs may still exist. The document proposes a new double constraint propagation based random testing (DCPRT) method that applies a constraint propagation algorithm twice to more accurately reduce the input domain for a given path. It provides details on the DCPRT algorithm and reports on experimental results comparing DCPRT to PRT on some test programs, finding DCPRT obtained a more accurate input domain.
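The domain-reduction idea behind PRT and DCPRT can be illustrated with a toy interval propagator; the path condition and integer domains below are assumptions for illustration, not taken from the paper. DCPRT's contribution is, roughly, applying such propagation twice so that bounds tightened by the first pass can tighten others in the second (in this tiny example one pass already reaches a fixed point):

```python
def propagate(dom_x, dom_y):
    """One pass of interval constraint propagation for the toy integer
    path condition (x > 5) and (x + y < 20): each constraint narrows
    the variable domains, shrinking the random-input search space."""
    lx, hx = dom_x
    ly, hy = dom_y
    lx = max(lx, 6)            # x > 5 over integers
    hx = min(hx, 19 - ly)      # x + y < 20  =>  x <= 19 - min(y)
    hy = min(hy, 19 - lx)      # symmetrically, y <= 19 - min(x)
    return (lx, hx), (ly, hy)
```

Starting from x, y in [0, 100], one pass shrinks the domain from 101 x 101 candidate pairs to 14 x 14, so random testing draws far fewer invalid inputs for the path.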
ENHANCING COMPUTATIONAL EFFORTS WITH CONSIDERATION OF PROBABILISTIC AVAILABL... (Raja Larik)
This document proposes a Probabilistic Collocation Method (PCM) to improve probabilistic load flow (PLF) computation methods and model network topology uncertainties. PCM uses probability distribution functions to model the impact of uncertainties as a linear function of power injections. It maintains the linear relationship between line flows and power injections. The method is examined using the IEEE 39-bus test system and compared to Monte Carlo simulation, showing significantly reduced computational efforts while maintaining accuracy.
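The linear relationship between line flows and power injections that the method maintains makes uncertainty propagation cheap; a sketch with illustrative PTDF coefficients (not values from the IEEE 39-bus study):

```python
def flow_variance(ptdf_row, injection_variances):
    """Under the linear model f = sum_i PTDF_i * P_i, independent
    injection uncertainties propagate to a line flow as
    Var(f) = sum_i PTDF_i**2 * Var(P_i) -- no Monte Carlo needed."""
    return sum(s * s * v for s, v in zip(ptdf_row, injection_variances))
```

This closed form is why a collocation-style method can match Monte Carlo accuracy at a fraction of the computational effort: the flow statistics follow directly from the injection statistics.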
The document discusses using a machine learning algorithm called Multi Expression Programming (MEP) to develop a computational model for predicting the compressive strength of carbon fiber-reinforced polymer (CFRP) confined concrete. The model will be based on an extensive database of 828 experimental specimens, incorporating parameters like specimen diameter, height, unconfined concrete strength, CFRP layer thickness, and elastic modulus. MEP is described as an advanced evolutionary algorithm that can accurately model problems with unspecified complexities better than other techniques. The validation and performance of the proposed MEP model will be assessed by comparing predictions to experimental data and other existing strength models.
Analysis of non-functional aspects like performance and reliability is crucial for the success of dynamic distributed systems that are self-adaptive. With the success of the Internet and mobile technology, properties like the reliability of connections, available bandwidth, and computing resources become an even greater concern. Non-functional requirements are often difficult to capture, measure, and predict; therefore, stochastic methods are required to address these aspects. For the same purpose, the architecture of dynamic distributed systems, in particular P2P networks, is viewed as a graph and modeled by graph transformation.
Model based test case prioritization using neural network classification (cseij)
Model-based testing of real-life software systems often requires a large number of tests, not all of which can be run exhaustively due to time and cost constraints. Thus, it is necessary to prioritize the test cases in accordance with the importance the tester perceives. In this paper, this problem is addressed by improving our previous study: a classification approach is applied to its results, in which a functional relationship is established between test case prioritization group membership and two attributes, the importance index and the frequency, for all events belonging to a given group. For classification purposes, a neural network (NN) is preferred, and the data set obtained from our study for all test cases is classified using a multilayer perceptron (MLP) NN. The classification results for a commercial test prioritization application show high classification accuracies of about 96%, and acceptable test prioritization performance is achieved.
Optimizing Effort Parameter of COCOMO II Using Particle Swarm Optimization Me... (TELKOMNIKA JOURNAL)
Estimating the effort and cost of software is an important activity for software project managers. A poor estimate (an overestimate or underestimate) results in poor software project management. To handle this problem, many researchers have proposed various models for estimating software cost. Constructive Cost Model II (COCOMO II) is one of the best known and most widely used models for estimating software costs. To estimate the cost of a software project, the COCOMO II model uses software size, cost drivers, and scale factors as inputs. However, this model is still lacking in terms of accuracy. To improve the accuracy of the COCOMO II model, this study examines the effect of the cost factors and scale factors in improving the accuracy of effort estimation. In this study, we use Particle Swarm Optimization (PSO) to optimize the parameters of the COCOMO II model. The proposed method is implemented on the Turkish Software Industry dataset, which has 12 data items. The method handles improper and uncertain inputs efficiently and improves the reliability of software effort estimation. The experiments yield an MMRE of 34.1939%, indicating high accuracy and a large improvement over the compared errors of 698.9461% and 104.876%.
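For context, the COCOMO II effort equation whose constants and drivers a PSO search would tune can be written out directly; 2.94 and 0.91 are the published nominal values of the calibrated constants A and B:

```python
def cocomo_effort(size_kloc, A=2.94, B=0.91,
                  scale_factors=(), effort_multipliers=()):
    """COCOMO II post-architecture effort in person-months:
    PM = A * Size**E * prod(EM), with E = B + 0.01 * sum(SF).
    A and B are the calibrated constants an optimizer such as PSO tunes."""
    exponent = B + 0.01 * sum(scale_factors)
    pm = A * size_kloc ** exponent
    for em in effort_multipliers:
        pm *= em
    return pm
```

A 10 KLOC project with nominal drivers comes out at roughly 24 person-months; PSO searches over A, B (and potentially the multipliers) to minimize an error measure such as MMRE against historical projects.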
PAGE: A Partition Aware Engine for Parallel Graph Computation (1crore projects)
This document provides 50 questions and answers on advanced testing techniques for the ISTQB CTAL certification. It discusses topics like conditional testing, expression testing, domain testing, perturbation testing, fault sensitivity testing, propagation oriented testing including path testing and compiler-based testing, data flow testing, and mutation testing. The full document provides detailed explanations of each testing technique.
Text Detection and Recognition: A Review (IRJET Journal)
This document reviews different approaches for text detection and recognition from images. It discusses two main methods: stepwise methods that separate detection and recognition into distinct stages, and integrated methods that share information between stages. The key stages are discussed as text detection and localization, classification, segmentation, and text recognition. A variety of approaches are analyzed for each stage, including their advantages and disadvantages. Finally, applications of text detection and recognition technologies are mentioned.
Improving K-NN Internet Traffic Classification Using Clustering and Principle... (journalBEEI)
K-Nearest Neighbour (K-NN) is a popular classification algorithm, and in this research K-NN is used to classify internet traffic. K-NN is appropriate for huge amounts of data and gives accurate classification, but it has a disadvantage in its computation process, because the algorithm calculates the distance to every record in the dataset. Clustering is one solution to this weakness: a clustering step performed before K-NN classification groups data with the same characteristics without requiring high computing time. Fuzzy C-Means is the clustering algorithm used in this research; it does not need the number of clusters to be fixed in advance, as clusters form naturally from the input dataset. However, Fuzzy C-Means has a weakness: its clustering results frequently differ between runs even when the input dataset is the same, because its initial dataset is less than optimal. To optimize the initial dataset, a feature selection algorithm is needed. Feature selection is a method to produce an optimal initial dataset for Fuzzy C-Means; the algorithm used in this research is Principal Component Analysis (PCA). PCA reduces non-significant attributes or features to create an optimal dataset and can improve the performance of the clustering and classification algorithms. The result of this research is that the combined classification, clustering, and feature selection method successfully modeled an internet traffic classification approach with higher accuracy and faster performance.
This document discusses model-based calibration techniques used to develop an engine calibration to meet emissions standards. It focuses on using design of experiments methods and statistical modeling to optimize engine parameters. Specifically, it describes:
1) Using a two-stage regression approach to separate variables like spark timing into a "local model" and others into a "global model" to better characterize responses.
2) How screening experiments can help select appropriate variables and ranges for the design of experiments to avoid unstable operating points.
3) Techniques for modeling experimental data through "local models" of individual variables and a "global model" that combines local models to reproduce responses for any combination of variables.
Hybrid multi objectives genetic algorithms and immigrants scheme for dynamic ... (khalil IBRAHIM)
This paper presents the main concepts of intelligent optimization techniques, artificial neural networks, and new genetic algorithms to solve multi-objective multicast routing problems built on the shortest path (SP) problem used in network addressing, improving addressing processes in wireless communications through multi-objective optimization. The most important characteristic of mobile wireless networks is topology dynamics: because the network topology changes over time, the shortest path routing problem (SPRP) in mobile ad hoc networks (MANETs) turns out to be a dynamic optimization problem [13]. The problems addressed by the hybrid immigrants multi-objective genetic algorithm (HIMOGA) are dynamic in the real world, with changing objective functions, constraints, and parameters. Dynamic optimization problems (DOPs) pose a big challenge to evolutionary multi-objective optimization, since any environmental change may affect the objective vector, constraints, and parameters; HIMOGA's optimization goal is to track the moving parameters and obtain a sequence of approximate solutions over time. Quality of service (QoS) guarantees support for all data traffic and maximizes network utilization; QoS-based multicast routing offers significant challenges and increases the need for an efficient multicast routing protocol that can maintain multicast routes while satisfying QoS constraints. The authors propose to use HIMOGA and an SP algorithm to solve the multicast problem for next-generation wireless networks, with an immigrants scheme to obtain high-quality solutions after each environmental change while satisfying all objectives.
Improving Posture Accuracy of Non-Holonomic Mobile Robot System with Variable...TELKOMNIKA JOURNAL
This paper presents a method to reduce the imprecision and inaccuracy that tend to affect the posture of a non-holonomic mobile robot, using adaptive tuning of the universe of discourse. The primary objective of the study is to drive the posture errors in x, y, and θ towards zero. For each step of tuning the fuzzy domain, about 20% of imprecision and inaccuracy was added automatically into the variable universe fuzzy system, while the control input was bounded via a scaling gain. The simulation results showed that tuning the universe fuzzy parameters improves the system's response time and steady-state error through better control of inaccuracy. The universe fuzzy input domain [-4,4] and output domain [0,6] performed well in approaching zero values: the steady-state error was about 1% for the x(t) position, 0.02% for the y(t) position, and 0.16% for the θ(t) orientation, while the posture error relative to the given reference was about 0.0002%.
K2 Algorithm-based Text Detection with An Adaptive Classifier ThresholdCSCJournals
In natural scene images, text detection is a challenging research area for diverse content-based image analysis tasks. In this paper, Bayesian network scores are used to classify candidate character regions by computing posterior probabilities, and the posterior probabilities are used to define an adaptive threshold for accurate text detection in scene images. Candidate character regions are first extracted as maximally stable extremal regions. K2 algorithm-based Bayesian network scores are learned by evaluating dependencies among the features of a given candidate character region, and a Bayesian logistic regression classifier is trained to compute the posterior probabilities that define the adaptive classifier threshold. Candidate character regions below the adaptive classifier threshold are discarded as non-character regions. Finally, text regions are detected using an effective text localization scheme based on geometric features. The entire system is evaluated on the ICDAR 2013 competition database, and the experimental results show performance (precision, recall, and harmonic mean) competitive with recently published algorithms.
Spectral opportunity selection based on the hybrid algorithm AHP-ELECTRETELKOMNIKA JOURNAL
Due to an ever-growing demand for spectrum and the fast-paced development of wireless applications, technologies such as cognitive radio enable the efficient use of the spectrum. The objective of the present article is to design an algorithm capable of choosing the best channel for data transmission. It uses quantitative methods that can modify behavior by changing quality parameters in the channel. To achieve this task, a hybrid decision-making algorithm is designed that combines analytical hierarchy process (AHP) algorithms and adjusts the weights of each channel parameter using a priority table. The Elimination Et Choix Traduisant la Realité (ELECTRE) algorithm processes the information from each channel through a weight matrix and then delivers the most favorable result for the transmitted data. The results reveal that the hybrid AHP-ELECTRE algorithm performs well, improving the throughput rate by 14% compared to similar alternatives.
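The AHP step described above derives a weight for each channel parameter from a pairwise-comparison matrix; a common way to do this is the row geometric-mean approximation, sketched below. The 3x3 matrix and the three criterion names are made-up placeholders, not values from the article.

```python
import math

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via row geometric means."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]   # geometric mean of each row
    total = sum(geo)
    return [g / total for g in geo]                           # normalize to sum to 1

# Hypothetical comparison of three channel criteria (e.g. SNR, bandwidth, occupancy);
# entry [i][j] says how much more important criterion i is than criterion j.
matrix = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(matrix)
```

The resulting weight vector would then feed the ELECTRE concordance/discordance comparison of candidate channels.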
IDENTIFICATION AND INVESTIGATION OF THE USER SESSION FOR LAN CONNECTIVITY VIA...ijcseit
This paper presents technical discussions on the identification and analysis of LAN user sessions. Identifying a user session is non-trivial: classical approaches rely on threshold-based mechanisms, which are very sensitive to the chosen threshold value, and that value may be difficult to set correctly. We instead use clustering techniques to define a novel methodology that identifies LAN user sessions without requiring an a priori definition of threshold values. We describe the clustering-based approach in detail, discuss its advantages and drawbacks, and apply it to real traffic traces. The methodology is also applied to artificially generated traces to evaluate its benefits over traditional threshold-based approaches. Finally, we analyze the characteristics and statistical properties of the user sessions extracted from real traces by the clustering methodology.
Using PageRank Algorithm to Improve Coupling MetricsIDES Editor
Existing coupling metrics only count method invocations and do not consider the weight of the methods, so they cannot measure coupling accurately. In this paper, we measure the weight of methods using the PageRank algorithm and propose a new approach that improves coupling metrics using these weights. We validate the proposed approach by applying it to several open-source projects, measuring several coupling metrics with both the existing approach and the proposed one. The correlation between change-proneness and the improved coupling metrics was significantly higher than with the existing coupling metrics. Hence, our improved coupling metrics measure software coupling more accurately.
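The method-weighting idea can be illustrated with a plain power-iteration PageRank over a small method-call graph. The graph, damping factor, and iteration count below are illustrative assumptions, not the paper's measurement setup.

```python
def pagerank(call_graph, damping=0.85, iters=100):
    """call_graph: {method: [methods it invokes]}; returns a weight per method."""
    nodes = list(call_graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in call_graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share                # each callee gets an equal share
            else:
                for w in nodes:                    # dangling method: spread rank uniformly
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# hypothetical call graph: "c" is invoked by three methods, so it ends up heaviest
calls = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
weights = pagerank(calls)
```

A coupling metric would then weight each invocation of a method by its rank instead of counting every call as 1.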
Building a new CTL model checker using Web Servicesinfopapers
Florin Stoica, Laura Stoica, Building a new CTL model checker using Web Services, Proceedings of the 21st International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2013), Split-Primosten, Croatia, 18-20 September 2013, pp. 285-289.
DOI=10.1109/SoftCOM.2013.6671858 http://dx.doi.org/10.1109/SoftCOM.2013.6671858
IMMUNE-INSPIRED METHOD FOR SELECTING THE OPTIMAL SOLUTION IN SEMANTIC WEB SE...dannyijwest
The increasing interest in developing efficient and effective optimization techniques has led researchers to turn their attention towards biology. Biology offers many clues for designing novel optimization techniques: such approaches exhibit self-organizing capabilities and can reach promising solutions without a central coordinator. In this paper we address the problem of dynamic web service composition using the clonal selection algorithm. To assess the optimality of a given composition, we use the QoS attributes of the services involved in the workflow as well as the semantic similarity between these components. The experimental evaluation shows that the proposed approach outperforms other approaches such as the genetic algorithm.
Enhancement of student performance prediction using modified K-nearest neighborTELKOMNIKA JOURNAL
The traditional K-nearest neighbor (KNN) algorithm exhaustively searches the complete training set to predict a single test sample, which can slow the system considerably on large datasets. Class selection for a new sample relies on simple majority voting, which does not reflect the varying significance of different samples (i.e., it ignores similarities among samples) and can cause misclassification when two classes tie for the majority. To address these issues, this work combines a moment descriptor with KNN to optimize sample selection, based on the observation that classifying the training samples before the search takes place can both speed up the nearest-neighbor search and improve its predictive performance. The proposed method is called fast KNN (FKNN). Experimental results on three student datasets, used to predict automatically whether a student will pass or fail an exam, show that FKNN reduces the running time of the original KNN by 75.4% to 90.25% and improves classification accuracy by 20% to 36.3%.
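For reference, the baseline behaviour the paper improves on, exhaustive distance search plus simple majority voting, can be sketched as follows; the toy feature vectors and pass/fail labels are invented for illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain exhaustive KNN with simple majority voting.
    train: list of (feature_vector, label) pairs."""
    # exhaustive search: distance to every training sample, then keep the k nearest
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# invented toy data: two exam-outcome classes in a 2-D feature space
train = [((1, 1), "pass"), ((1, 2), "pass"), ((8, 9), "fail"), ((9, 8), "fail")]
result = knn_predict(train, (2, 1))   # nearest three neighbours vote 2-1 for "pass"
```

Both weaknesses the paper targets are visible here: the `sorted` over the whole training set is the exhaustive search, and `most_common` is the unweighted majority vote.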
Improved fuzzy c-means algorithm based on a novel mechanism for the formation...TELKOMNIKA JOURNAL
Clustering is a vital method in many fields such as machine learning, pattern recognition, image processing, information retrieval, data compression, and computer graphics. It is similarly significant in wireless sensor networks (WSNs), where organizing the sensor nodes into clusters saves energy and prolongs the network lifetime, which depends entirely on the sensors' batteries and is a major challenge in WSNs. Fuzzy c-means (FCM) is a classification algorithm widely used in the literature for this purpose in WSNs. However, because nodes are deployed randomly, the algorithm sometimes produces unbalanced clusters, which adversely affects the network lifetime. To overcome this problem, a new clustering method called FCM-CM has been proposed, which improves the FCM algorithm to form balanced clusters under random node deployment by integrating FCM with a centralized mechanism (CM). The proposed method is evaluated on four new parameters. Simulation results show that the proposed algorithm is superior to FCM: it produces balanced clusters and improves the balance of the intra-cluster distances, leading to energy conservation and a prolonged network lifespan.
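The standard FCM iteration the paper builds on alternates two updates: fuzzy memberships computed from distances to the centres, and centres recomputed as membership-weighted means. Below is a minimal sketch on made-up 2-D points; it is the plain algorithm, not the paper's WSN setup or its FCM-CM mechanism.

```python
import math

def fcm(points, m=2.0, iters=50):
    """Minimal two-cluster fuzzy c-means on 2-D points."""
    centres = [points[0], points[-1]]      # deterministic initialization for the sketch
    c = len(centres)
    u = [[0.0] * len(points) for _ in range(c)]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        for j, p in enumerate(points):
            d = [max(math.dist(p, ctr), 1e-12) for ctr in centres]
            for i in range(c):
                u[i][j] = 1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
        # centre update: membership-weighted mean of the points
        for i in range(c):
            w = [u[i][j] ** m for j in range(len(points))]
            total = sum(w)
            centres[i] = tuple(
                sum(wj * p[dim] for wj, p in zip(w, points)) / total for dim in range(2)
            )
    return centres, u

pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centres, u = fcm(pts)
```

A balancing mechanism such as the paper's CM would intervene in the second step, constraining how many nodes each centre can attract.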
A surrogate-assisted modeling and optimization method for planning communicat...Power System Operation
The development of industrial informatization stimulates the implementation of cyber-physical systems (CPS) in distribution networks. As a close integration of the power network infrastructure with a cyber system, research into design methodologies and tools for CPS has gained widespread interest, given their heterogeneous characteristics. To address the problem of planning the communication system in a distribution-network CPS, this paper first proposes an optimization model based on topology potential equilibrium, mathematically describing the mutual influence of nodes and the spatial distribution of the topological structure. Then, to handle the resulting complex optimization problem in binary space with multiple constraints, a novel binary bare bones fireworks algorithm (BBBFA) with a surrogate-assisted model is proposed. In the proposed algorithm, the surrogate model, a back-propagation neural network, replaces the complex constraints by incrementally approximating the nonlinear constraint functions, reducing the difficulty of finding the optimal solution. The communication system planning of the IEEE 39-bus power system, comprising four terminal units, was optimized; considering different degrees of heterogeneity, four planning programs were included for practical consideration. The simulation results of the proposed algorithm were compared with other representative methods, demonstrating its effectiveness in solving communication system planning optimization problems for distribution networks.
A Path-Oriented Automatic Random Testing Based on Double Constraint Propagationijseajournal
This document summarizes a research paper that proposes a path-oriented automatic random testing method based on double constraint propagation. It begins with an introduction to random testing and discusses its limitations. It then describes an existing path-oriented random testing (PRT) approach and identifies limitations where some invalid inputs may still exist. The document proposes a new double constraint propagation based random testing (DCPRT) method that applies a constraint propagation algorithm twice to more accurately reduce the input domain for a given path. It provides details on the DCPRT algorithm and reports on experimental results comparing DCPRT to PRT on some test programs, finding DCPRT obtained a more accurate input domain.
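The core idea, shrinking an input domain by repeatedly propagating path constraints over intervals, can be sketched on a toy path condition. The constraints (x + y == 10, x <= 4, y <= 8 over integer domains) are invented for illustration and are not the DCPRT algorithm itself.

```python
def propagate(dom_x, dom_y):
    """One propagation pass for the toy path condition:
    x + y == 10,  x <= 4,  y <= 8  (domains are closed integer intervals)."""
    x_lo, x_hi = dom_x
    y_lo, y_hi = dom_y
    x_hi = min(x_hi, 4)                                   # from x <= 4
    y_hi = min(y_hi, 8)                                   # from y <= 8
    # from x + y == 10:  x = 10 - y  and  y = 10 - x
    x_lo, x_hi = max(x_lo, 10 - y_hi), min(x_hi, 10 - y_lo)
    y_lo, y_hi = max(y_lo, 10 - x_hi), min(y_hi, 10 - x_lo)
    return (x_lo, x_hi), (y_lo, y_hi)

dom_x, dom_y = (0, 10), (0, 10)
dom_x, dom_y = propagate(dom_x, dom_y)   # first pass: x -> (2, 4), y -> (6, 8)
dom_x, dom_y = propagate(dom_x, dom_y)   # running again confirms the fixpoint
```

Random inputs drawn uniformly from the reduced intervals are then far more likely to satisfy the path condition than inputs drawn from the original [0, 10] domains.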
ENHANCING COMPUTATIONAL EFFORTS WITH CONSIDERATION OF PROBABILISTIC AVAILABL...Raja Larik
This document proposes a Probabilistic Collocation Method (PCM) to improve probabilistic load flow (PLF) computation methods and model network topology uncertainties. PCM uses probability distribution functions to model the impact of uncertainties as a linear function of power injections. It maintains the linear relationship between line flows and power injections. The method is examined using the IEEE 39-bus test system and compared to Monte Carlo simulation, showing significantly reduced computational efforts while maintaining accuracy.
The document discusses using a machine learning algorithm called Multi Expression Programming (MEP) to develop a computational model for predicting the compressive strength of carbon fiber-reinforced polymer (CFRP) confined concrete. The model will be based on an extensive database of 828 experimental specimens, incorporating parameters like specimen diameter, height, unconfined concrete strength, CFRP layer thickness, and elastic modulus. MEP is described as an advanced evolutionary algorithm that can accurately model problems with unspecified complexities better than other techniques. The validation and performance of the proposed MEP model will be assessed by comparing predictions to experimental data and other existing strength models.
Analysis of non-functional aspects like performance and reliability is crucial for the success of dynamic distributed systems that are self-adaptive. With the success of the Internet and mobile technology, properties like connection reliability, available bandwidth, and computing resources become an even greater concern. Non-functional requirements are often difficult to capture, measure, and predict, so stochastic methods are required to address these aspects. For this purpose, the architecture of dynamic distributed systems, in particular P2P networks, is viewed as a graph and modeled by graph transformation.
Model based test case prioritization using neural network classificationcseij
Model-based testing of real-life software systems often requires a large number of tests, which cannot all be run exhaustively due to time and cost constraints. It is therefore necessary to prioritize the test cases according to the importance the tester perceives. In this paper, this problem is addressed by extending our previous study: a classification approach is applied to its results, establishing a functional relationship between test case prioritization group membership and two attributes, the importance index and the frequency, for all events belonging to a given group. For classification, a neural network (NN) is chosen, and a data set obtained from our study for all test cases is classified using a multilayer perceptron (MLP) NN. The classification results for a commercial test prioritization application show high classification accuracies of about 96%, and acceptable test prioritization performance is achieved.
Optimizing Effort Parameter of COCOMO II Using Particle Swarm Optimization Me...TELKOMNIKA JOURNAL
Estimating the effort and cost of software is an important activity for software project managers, and a poor estimate (overestimate or underestimate) results in poor software project management. To handle this problem, many researchers have proposed models for estimating software cost. Constructive Cost Model II (COCOMO II) is one of the best known and most widely used models; it takes software size, cost drivers, and scale factors as inputs, but it is still lacking in accuracy. To improve the accuracy of the COCOMO II model, this study examines the effect of the cost factors and scale factors on effort estimation accuracy. We use Particle Swarm Optimization (PSO) to optimize the parameters of the COCOMO II model. The proposed method is implemented on the Turkish Software Industry dataset, which has 12 data items; it handles improper and uncertain inputs efficiently and improves the reliability of software effort estimation. The experiments yielded an MMRE of 34.1939%, indicating higher accuracy and a significant reduction in error compared with 698.9461% and 104.876%.
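A minimal PSO loop of the kind used to tune such parameters is sketched below; the two-dimensional toy error function standing in for MMRE, and all parameter values, are illustrative assumptions rather than the study's actual COCOMO II setup.

```python
import random

def pso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization minimizing `objective` over [-10, 10]^dim."""
    random.seed(42)                                        # reproducible toy run
    pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                            # per-particle best positions
    gbest = min(pbest, key=objective)[:]                   # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # pull to own best
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # pull to swarm best
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# hypothetical stand-in for an estimation-error measure, minimized at (3, 5)
error = lambda p: (p[0] - 3) ** 2 + (p[1] - 5) ** 2
best = pso(error)
```

In the study's setting, each particle position would encode COCOMO II multipliers and the objective would be the MMRE over the dataset.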
PAGE: A Partition Aware Engine for Parallel Graph Computation1crore projects
This document provides 50 questions and answers on advanced testing techniques for the ISTQB CTAL certification. It discusses topics like conditional testing, expression testing, domain testing, perturbation testing, fault sensitivity testing, propagation oriented testing including path testing and compiler-based testing, data flow testing, and mutation testing. The full document provides detailed explanations of each testing technique.
Text Detection and Recognition: A ReviewIRJET Journal
This document reviews different approaches for text detection and recognition from images. It discusses two main methods: stepwise methods that separate detection and recognition into distinct stages, and integrated methods that share information between stages. The key stages are discussed as text detection and localization, classification, segmentation, and text recognition. A variety of approaches are analyzed for each stage, including their advantages and disadvantages. Finally, applications of text detection and recognition technologies are mentioned.
Improving K-NN Internet Traffic Classification Using Clustering and Principle...journalBEEI
K-Nearest Neighbour (K-NN) is a popular classification algorithm; in this research it is used to classify internet traffic, since it suits large amounts of data and yields accurate classification. However, K-NN is computationally expensive because it calculates the distance to every sample in the dataset. Clustering is one solution to this weakness: performed before K-NN classification, it groups data with the same characteristics without requiring high computing time. Fuzzy C-Means is the clustering algorithm used in this research; it does not require the number of clusters to be fixed in advance, as clusters form naturally from the input dataset. A weakness of Fuzzy C-Means is that its clustering results frequently differ between runs on the same dataset, because the initial dataset it starts from is suboptimal; optimizing the initial dataset requires a feature selection algorithm. Feature selection produces an optimal initial dataset for Fuzzy C-Means; the algorithm used in this research is Principal Component Analysis (PCA), which removes insignificant attributes or features to create an optimal dataset and can improve the performance of the clustering and classification algorithms. The result of this research is a combined classification, clustering, and feature selection method that successfully models internet traffic classification with higher accuracy and faster performance.
TRANSFORMING SOFTWARE REQUIREMENTS INTO TEST CASES VIA MODEL TRANSFORMATIONijseajournal
Executable test cases originate at the onset of testing as abstract requirements that represent system
behavior. Their manual development is time-consuming, susceptible to errors, and expensive. Translating
system requirements into behavioral models and then transforming them into a scripting language has the
potential to automate their conversion into executable tests. Ideally, an effective testing process should
start as early as possible, refine the use cases with ample details, and facilitate the creation of test cases.
We propose a methodology that enables automation in converting functional requirements into executable
test cases via model transformation. The proposed testing process starts with capturing system behavior in
the form of visual use cases, using a domain-specific language, defining transformation rules, and
ultimately transforming the use cases into executable tests.
JAVA 2013 IEEE DATAMINING PROJECT Distributed web systems performance forecas...IEEEGLOBALSOFTTECHNOLOGIES
The document discusses using the Turning Bands (TB) geostatistical simulation method to predict the performance of distributed web systems. Real-life data on download times was collected from over 60 web servers monitored from different locations worldwide. The TB method was applied to this data to generate spatial-temporal predictions of web performance. The results showed the TB method provided good quality forecasts, especially for European servers monitored from Poland. The study aims to develop a robust spatio-temporal prediction algorithm using the TB method to efficiently forecast client-perceived web performance.
PREDICTING PERFORMANCE OF WEB SERVICES USING SMTQA cscpconf
A web service is an interface that implements business logic. Performance is an important quality aspect of web services because of their distributed nature, and predicting it during the early stages of software development is significant. In this paper we model a web service using Unified Modeling Language use case and sequence diagrams. We obtain the performance metrics by simulating the web service model with the Simulation of Multi-Tier Queuing Architecture (SMTQA) tool, and we identify the bottleneck resources.
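As a rough analytical counterpart to such a simulation, each tier of a multi-tier service can be approximated as an M/M/1 queue: utilization is U = lambda * S and predicted response time is R = S / (1 - U), and the tier with the highest utilization is the bottleneck. The tier names and service times below are illustrative, not taken from the paper.

```python
# Back-of-the-envelope per-tier performance prediction using the M/M/1
# approximation: U = arrival_rate * service_time, R = S / (1 - U).

def tier_metrics(arrival_rate, tiers):
    """tiers: {name: service_time_seconds}; returns {name: (utilization, response_time)}."""
    out = {}
    for name, s in tiers.items():
        u = arrival_rate * s
        if u >= 1.0:
            raise ValueError(f"tier {name!r} is saturated (U={u:.2f})")
        out[name] = (u, s / (1.0 - u))
    return out

tiers = {"web": 0.005, "app": 0.020, "db": 0.040}        # service times in seconds
metrics = tier_metrics(arrival_rate=20.0, tiers=tiers)   # 20 requests/second
bottleneck = max(metrics, key=lambda t: metrics[t][0])   # highest utilization
total_response = sum(r for _, r in metrics.values())
print(bottleneck, round(total_response, 4))
```

Here the database tier, at 80% utilization, dominates the predicted end-to-end response time, which is the kind of bottleneck finding the abstract mentions.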
WEB SERVICE SELECTION BASED ON RANKING OF QOS USING ASSOCIATIVE CLASSIFICATIONijwscjournal
With the explosive growth in the number of services published over the Internet, it is difficult to select satisfactory web services among candidates that provide similar functionality. Quality of Service (QoS) is considered the most important non-functional criterion for service selection, but it is no longer the only criterion used to rank web services against a user's preferences. The similarity measure (outputs-inputs similarity) between concepts, based on ontology, in an interconnected network of semantic web services involved in a composition can be used as a distinguishing criterion to estimate the semantic quality of the selected services for the composite service. Coupling this semantic similarity, as the functional aspect, with quality of service allows us to further constrain and select services for valid composite services.
In this paper, we present an overall service selection and ranking framework which first classifies candidate web services into different QoS levels with respect to the user's QoS requirements and preferences using an associative classification algorithm, and then ranks the most qualified candidate services based on their functional quality through semantic matching. The experimental results show that the proposed framework can satisfy service requesters' non-functional requirements.
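A minimal sketch of that two-stage selection: simple threshold rules stand in for the associative-classification step, and a precomputed numeric score stands in for ontology-based semantic matching. All service names and numbers are illustrative, not from the paper.

```python
# Stage 1: classify candidates into QoS levels (1 = best).
# Stage 2: rank the most qualified level by output-input semantic similarity.

candidates = [
    {"name": "FlightsA", "response_ms": 120, "availability": 0.999, "similarity": 0.72},
    {"name": "FlightsB", "response_ms": 450, "availability": 0.990, "similarity": 0.91},
    {"name": "FlightsC", "response_ms": 140, "availability": 0.998, "similarity": 0.88},
    {"name": "FlightsD", "response_ms": 900, "availability": 0.950, "similarity": 0.95},
]

def qos_level(svc):
    """Class-association-style rules mapping QoS attributes to a level."""
    if svc["response_ms"] <= 200 and svc["availability"] >= 0.995:
        return 1
    if svc["response_ms"] <= 500 and svc["availability"] >= 0.99:
        return 2
    return 3

def select(candidates):
    best_level = min(qos_level(s) for s in candidates)
    qualified = [s for s in candidates if qos_level(s) == best_level]
    return sorted(qualified, key=lambda s: s["similarity"], reverse=True)

ranked = select(candidates)
print([s["name"] for s in ranked])
```

Note how FlightsD, despite the highest semantic similarity, never reaches the ranking stage: it fails the non-functional filter first, which is the point of classifying before ranking.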
RESEARCH ON DISTRIBUTED SOFTWARE TESTING PLATFORM BASED ON CLOUD RESOURCEijcses
In order to solve the low efficiency of large-scale distributed software testing, CBDSTP (Cloud-Based Distributed Software Testing Platform) is put forward. The platform provides continuous integration and test automation for large software systems: it makes full use of resources on cloud clients, obtains test results in the real environment, and allocates testing jobs reasonably, in order to address web application configuration testing, compatibility testing, and distributed testing problems while reducing costs and improving efficiency. MySQL testing on a prototype system verifies the platform architecture and the effectiveness of job allocation.
Laura Florentina Stoica, Florian Mircea Boian, Florin Stoica, A Distributed CTL Model Checker, Proceeding of 10th International Conference on e-Business, ICE-B 2013, Reykjavik Iceland, paper 33, 29-31 July, pp. 379-386, ISBN: 978-989-8565-72-3, 2013
Final Year IEEE Project 2013-2014 - Web Services Project Title and Abstractelysiumtechnologies
This document provides contact and location information for Elysium Technologies Private Limited, an IT company with 13 years of experience and over 250 developers located across multiple branches in India. It lists their services such as automated services, 24/7 help desk support, and ticketing & appointment systems. The company has experience in multiple languages and technologies.
The Indo American Journal of Life Sciences and Bio Technology is a scholarly publication dedicated to advancing research and knowledge at the intersection of life sciences and biotechnology. With a focus on fostering collaboration between Indian and American scientific communities, the journal serves as a platform for high-quality articles, reviews, and original research contributions. Covering a broad spectrum of topics, from molecular biology to bioprocessing, the journal facilitates the exchange of cutting-edge information and promotes innovation in these crucial fields. Published regularly, it provides a valuable resource for researchers, academicians, and professionals seeking to stay abreast of the latest developments in life sciences and biotechnology.
This document discusses methods for testing web services and software-as-a-service applications. It proposes an architecture for dynamic test reconfiguration that can adapt to changes in web service operations, arguments, and compositions. The architecture includes layers for service interfaces, system operations, service management, and resource access. It also reviews several existing approaches to web service testing, including collaborative testing using multiple test partners, adaptive testing frameworks, and WSDL-based automated test data generation. The proposed system would implement runtime testing of atomic and composite web services to evaluate cloud-based applications with third-party services.
ISSN 2347-2243
The Indo-American Journal of Life Sciences and Biotechnology is an international online journal in English, published quarterly. The journal publishes manuscripts relating to botany, zoology, marine biology, health and nutrition, cell biology, neurobiology, and biochemistry. All articles published in this journal represent the opinion of the authors and do not reflect the official policy of the journal.
All submitted research articles undergo immediate rapid screening by the editors, in consultation with the Editorial Board or others working in the field as appropriate, to ensure they are suitable for the journal; this expedites the delivery process.
Similar to "Test sequences for web service composition using CPN model"
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive-type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces and defines concepts such as convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The stated goal of the paper is to obtain a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school childrenAlexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in nigerian banksAlexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in punjabAlexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of taylor’s and fayol’s management approaches for managing market...Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incrementalAlexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
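The prefix-projection idea shared by FreeSpan and PrefixSpan can be sketched compactly. This is a simplified version: sequences are plain item lists (real PrefixSpan also handles itemsets within sequence elements), and projection uses the first occurrence of each item.

```python
# A compact sketch of PrefixSpan-style mining: grow frequent sequential
# patterns by recursively projecting the database on each frequent item,
# so no candidate generation is needed.

def prefixspan(db, min_support):
    def frequent_items(projected):
        counts = {}
        for seq in projected:
            for item in set(seq):       # count each item once per sequence
                counts[item] = counts.get(item, 0) + 1
        return {i: c for i, c in counts.items() if c >= min_support}

    def mine(prefix, projected, results):
        for item, support in sorted(frequent_items(projected).items()):
            pattern = prefix + [item]
            results.append((pattern, support))
            # project each sequence on the suffix after the first `item`
            suffixes = [seq[seq.index(item) + 1:] for seq in projected if item in seq]
            mine(pattern, [s for s in suffixes if s], results)

    results = []
    mine([], db, results)
    return results

db = [["a", "b", "c"], ["a", "c", "b"], ["a", "b"], ["c", "a", "b"]]
print(prefixspan(db, min_support=3))
```

Because every projected database only contains suffixes of sequences that already support the current prefix, each recursive call mines a strictly smaller search space, which is where the algorithm's efficiency comes from.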
A survey on live virtual machine migrations and its techniquesAlexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in hadoop and mongo dbAlexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
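For reference, the classic Altman (1968) Z-score combines five financial ratios. The coefficients below are the original public-manufacturing-firm model; the study may apply a variant for logistics firms, and the example figures are hypothetical.

```python
# Altman Z-score: Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5, where the
# Xi are financial ratios over total assets (or total liabilities for X4).
# Z > 2.99 is the "safe" zone, Z < 1.81 the "distress" zone, in between "grey".

def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_liabilities, sales, total_assets):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    return "safe" if z > 2.99 else "distress" if z < 1.81 else "grey"

# Illustrative figures (in millions) for a hypothetical logistics firm.
z = altman_z(working_capital=120, retained_earnings=300, ebit=90,
             market_equity=800, total_liabilities=400, sales=950,
             total_assets=1000)
print(round(z, 3), zone(z))
```

The study's finding that average Z-scores stayed in the healthy range means most sampled firms landed above the 2.99 threshold throughout the period.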
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
What do a Lego brick and the XZ backdoor have in common?Speck&Tech
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might be that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the case of the XZ backdoor have much more in common than that.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: Advocate for free software and for standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. Previously she worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko, she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on:
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security-analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean, optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Computer Engineering and Intelligent Systems www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol. 3, No. 6, 2012
Test Sequences for Web Service Composition using CPN model
Poonkavithai Kalamegam* and Dr. Zayaraz Godandapani
Dept of CSE, Pondicherry Engineering College, Pondicherry-605014, India
* E-mail of the corresponding author: poonks2012@gmail.com
Abstract
Web service composition is the most mature and effective way to realize the rapidly changing requirements of business
in service-oriented solutions. Testing compositions of web services is complex due to their distributed nature and
asynchronous behaviour. Colored Petri Nets (CPNs) provide a framework for the design, specification, validation
and verification of systems. In this paper, the CPN model used to verify the composition design is reused for test
design. We propose an on-the-fly algorithm that generates a test suite covering all possible paths without
redundancy. Prioritization of test sequences, test suite size and redundancy reduction are also addressed. The
proposed technique was applied to an airline reservation system, and the generated test sequences were evaluated
against three coverage criteria: Decision Coverage, Input Output Coverage and Transition Coverage.
Keywords: CPN, MBT, web service composition testing, test case generation
1. Introduction
A Web Service is a distributed, message-oriented, interacting component. Web Services are the most active and widely
adopted implementation of SOA, a design pattern composed of loosely coupled, discoverable, reusable,
interoperable, platform-agnostic services that follow a well-defined standard [1]. The success of any deployment in an
enterprise depends on the quality assurance process undertaken. Web service composition testing is quite different
from traditional testing: it requires its own type of test architecture and tester skills. To test any composition, all the
web services need to be tested in isolation, along with the common use cases where the services are interdependent.
Services can be composed following two complementary views: choreography and orchestration. In
orchestration, a central element controls the business logic and the execution order of the interactions. In choreography,
interactions may involve multiple parties and multiple sources, but each element of the process is autonomous and
controls its own agenda. Hence service composition testing can be classified into choreography-based and
orchestration-based testing.
Software testing is one of the most crucial phases in any SDLC to assure the quality of software, and the creation of test
cases consumes a major share of the effort allocated for testing. Model Based Testing (MBT) is a black-box testing
technique that uses behavioural models of the system to automate the test case design process. Colored Petri Nets
(CPNs) provide a framework for the construction and analysis of distributed and concurrent systems. A CPN model of
a system describes the states the system may be in and the transitions between those states. CPNs have been
applied in a wide range of application areas, particularly software system design and business process
re-engineering. Extending the usage of Colored Petri Nets beyond the system design phase, particularly into the test design
phase, proves very effective in terms of greater test coverage and reduced test effort.
In this paper a novel idea of applying an MBT technique to the CPN model used for verifying the web service
composition is presented. The paper is organized as follows. In the next section, we discuss the motivation for our
approach. In Section 3, we introduce the usage of CPN models in web service composition. In Section 4, the approach
to automate test case generation for testing web service composition is described in detail. We present results and
discussion in Section 5, followed by conclusions in the last section.
2. Motivation
MBT provides a solid foundation for automating the test design process by generating the test cases from the
business requirements that are formally represented by a model. UML (Unified Modeling Language), FSM (finite
state machines) and CPN are widely used modeling mechanisms to specify, analyze and simulate software
requirements, for test automation and for model-based software testing. CPN is a mathematical model and hence has
better formal capabilities to specify and analyze even complicated behaviours of a system. Moreover, a CPN model
can be simulated dynamically, directed by the data-dependent control flow of system behaviours, and is better suited
to the modeling, analysis and validation of the accuracy of the system's functional models. A few approaches exist to
derive test cases from a CPN model. A simple approach to generate test cases using the state space is proposed
in [2]. The advantage of this approach is that the correctness of the CPN-based specification can be validated by
simulation tools, and the state space can also be generated by state space tools. An efficient approach for building a
conformance test suite using the PN-ioco relation is proposed in [3]. U. Farooq, C. P. Lam and H. Li [4]
proposed a method to convert activity diagram (AD) activities to a CPN model and apply a Random Walk Algorithm to create test
sequences. Harumi Watanabe and Tomohiro Kudoh [5] proposed two techniques applicable to concurrent
systems: one uses a CPN-Tree and the other a Colored Petri Net Graph (CP-graph). The CP-graph is treated as an
FSM, and existing FSM-based test case generation methods are applied; in the CPN-Tree method, reachability trees
reduced by the equivalent-marking technique are used to keep the test suite to a practical length.
There are several methods for automatic test case generation using the BPEL structure of the composite web service.
MBT [6] can be used along with Symbolic Execution, Model Checking and Petri Nets for testing and verification of
web service compositions; in this paper we limit our discussion to test case generation. Model checking techniques are
used in [7, 8]. Timed Extended Finite State Machines (TEFSM) are widely used due to existing tool support
[9-13]: the BPEL specification is transformed into a TEFSM model, which is then used for test case generation.
Control Flow Graphs (CFGs) are used to represent BPEL processes. In [14] the test cases are generated using
graph search algorithms and the test data using path analysis (constraint solving). In [15], an automated
test data generation framework is proposed that extends the CFG to represent BPEL activities as edges. Another
extended CFG [16], the BPEL Flow Graph (BFG), contains both the control and data flow of a BPEL process and is
used to generate test data and semantic information such as dead paths. Hou et al. [17] suggest that BPEL
processes can be modelled as a Message Sequence Graph (MSG) from which test cases are generated. Guangquan et
al. [18] propose the use of UML 2.0 Activity Diagrams to model the BPEL process, with a depth-first search method
combined with test coverage criteria to generate test cases. Tarhini et al. [19, 20] proposed two abstract
models to represent the system under test: the Task Precedence Graph (TPG), which models the interaction between
services, and the Timed Labelled Transition System (TLTS), which models the internal behaviour of the
participating web services.
Most existing approaches take only the BPEL specification into account when creating test cases, and the
rich data information available in the WSDL schema is not utilized effectively. On one hand, CPN models have
been used in test case generation, but not in web service composition testing; on the other hand, there are various
approaches to verifying web compositions using CPN models. We aim to bring CPN models into MBT techniques to
create cost-effective and efficient test sequences for validating any composition. In this paper we propose an
algorithm to generate test sequences from the CPN model used for verifying the design of the web service
composition.
3. Need for Using CPN
The CPN model provides a formal description of the web service composition. However, more than one
implementation may be derived from the same specification, and these implementations may not be compatible,
mainly due to incorrect implementation of the composition. Hence every implementation needs to be tested for
conformance to the business requirements. Testing can be carried out using test sequences generated from the
business requirements. Control flow testing focuses on the transfer of control, while data flow testing focuses on the
definitions of data and their subsequent use. The execution of a service is driven by the received and manipulated
data; hence validation of data flow is critical in a composition. The data flow pertains to the service messages, namely
the request and response messages through which data is exchanged among the participating services to accomplish a
business goal. Moreover, the semantics of data structures in web service composition are complex; for example, service
messages are XML documents and service functions are XPath expressions. CPN is one model that combines data
and control flow with the behavioural aspects of a given business requirement. Control flow coverage is defined in terms
of transitions fired, while data flow coverage is defined in terms of tokens used. The web service composition is
modelled and analysed for the accuracy of the functional design. Hence generating test sequences from such a model
is bound to produce completely feasible, effective and efficient test sequences for practical test executions. It also
gains the highest effectiveness by combining the two complementary flows.
CPN has been widely used for the verification of BPEL compositions; many approaches deal with converting BPEL
constructs into a CPN model. A data-driven approach to composing web services using CPNs is given in [21]. When a new
business requirement, given with input/output details, needs to be implemented, the existing service portfolio is
checked for reusability. The data relations in the business and service domains are utilized to create a complete and
coherent data model. A service net is created with all the possible composition candidates from the given service
portfolio, and it is then reduced with respect to the given business requirement. In [22] a method to create a mapping
between a WS-BPEL process and a CPN model is proposed; the web service composition is validated by analyzing its
reachability tree. CPN [23] provides an effective means to simulate, analyze and verify the correctness of web
service composition. In this paper the input to the test suite generation algorithm is the data-driven model that
considers both the business and implementation domains.
3.1 CPN Model
In this paper the CPN model for the composite web service (Service-Net) is defined as a tuple < Σ, I, P, T, A, C, O >,
where
• Σ is a non-empty color set and represents the data types of the participating web services;
• I is the set of input places derived from the business requirement and represents the initial input to the
composite web service;
• P is a finite set of places, inclusive of I and O, and represents the states of the atomic web services;
• T is a finite set of transitions and represents the operations of the atomic web services;
• A is a finite set of arcs;
• C is a color function from P into Σ. C is injective, i.e., C(p1) = C(p2) ⇒ p1 = p2;
• O is the set of output places derived from the business requirement and represents the final output of the
composite web service.
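For illustration, the Service-Net tuple above can be encoded as a plain data structure with a well-formedness check. This is a hypothetical sketch (the class and field names are ours, not from any CPN tool), covering only the subset constraints on I and O and the injectivity of C:

```python
from dataclasses import dataclass

@dataclass
class ServiceNet:
    """Illustrative encoding of the tuple <Sigma, I, P, T, A, C, O>."""
    sigma: set          # Sigma: color set (data types of the services)
    inputs: set         # I: input places (initial input to the composition)
    places: set         # P: all places, including I and O
    transitions: set    # T: operations of the atomic web services
    arcs: set           # A: (source, target) pairs
    colors: dict        # C: place -> color in Sigma
    outputs: set        # O: output places (final output)

    def is_well_formed(self) -> bool:
        # I and O must be subsets of P, and C must map P into Sigma
        # injectively, i.e. C(p1) == C(p2) implies p1 == p2.
        subset_ok = self.inputs <= self.places and self.outputs <= self.places
        injective = len(set(self.colors.values())) == len(self.colors)
        into_sigma = set(self.colors.values()) <= self.sigma
        return subset_ok and injective and into_sigma
```

A net that assigns the same color to two places, for example, would fail the injectivity check.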
3.2 Case Study
A composite service implements a business process that accomplishes a specific organizational goal through a
coordinated set of tasks performed by humans or software. We take a simple case study, the 'Plan for Travel' process,
which consists of three web services: a Traveller, a Travel Agent and an Airline Reservation System. A person who
wants to travel by air first proposes an itinerary and places an order for the trip; he can change or cancel the itinerary. He
reserves, books and then receives the ticket. The Traveller web service helps the person get tickets for the proposed
itinerary. The Travel Agent web service receives the order, checks the availability of seats, reserves and books the seat, and
sends a statement to the person; it also acts upon timeout scenarios and change and cancellation requests. The
Airline Reservation System web service verifies seat availability, books the tickets and sends them to the person. It also
handles cancellation requests. This case study has been used in both the WSCI and BPEL4WS composition
languages [22]. Figure 1 gives the composition model of the three web services, and this model is used for
creating test sequences.
4. Algorithm
The unique input output pairs (UIO-Pairs) and the CPN model are the inputs to the algorithm. The business
requirements are represented by business processes consisting of abstract activities with input/output data. The input
and output data are mapped to form UIO-Pairs. At the highest level, the UIO-Pairs for the ticket booking business
process would be input: proposed itinerary, and outputs: booking succeeded or failed. The unique pairs can be easily
derived from the pre-conditions and post-conditions specified in the business requirements. Moreover, the pairs form
a limited, finite set, so the time taken to create the UIO-Pairs is minimal. Table 1 gives the
exhaustive set for the case study under consideration.
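As a concrete illustration, the UIO-Pairs of Table 1 could be encoded as prioritized records and ordered so that the most influencing pair is processed first, as the algorithm below requires. The field names and the numeric priorities are assumptions made for this sketch:

```python
from collections import namedtuple

# A UIO-Pair maps a set of business inputs to the expected outputs.
# Lower priority number = more business-critical scenario (an assumption).
UIOPair = namedtuple("UIOPair", ["inputs", "outputs", "priority"])

uio_pairs = [
    UIOPair(("Traveller details", "Proposed Itinerary"),
            ("Success", "Tickets Booked"), 1),
    UIOPair(("Cancel Itinerary", "Reserved Seat Details"),
            ("Failure notification",), 2),
    UIOPair(("Cancel Reservation", "Reserved Itinerary"),
            ("Failure notification",), 3),
    UIOPair(("Airline Reservation Failure", "OverTime"),
            ("Failure notification",), 4),
]

# The generation algorithm takes the most influencing pair first,
# so the worklist is ordered by priority.
ordered = sorted(uio_pairs, key=lambda p: p.priority)
```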
The CPN model is generated as a service net using the WSDLs in the service portfolio and the business requirements.
The approach to build the service nets is presented in [21]. Well-designed test data helps identify critical flaws in the
functionality. The test data for each step in the sequence can be generated from the WSDL documents that are used to
create the CPN model. The valid input space of a web service is the subset of the input space satisfying the
precondition of the web service. The proposed algorithm takes the most influencing pair first and starts creating test
sequences. The table of unique inputs and outputs can also be used to roughly estimate the test suite size: the number
of test sequences is directly related to the number of UIO-Pairs, although if there are conditional branches based on
computation rather than on user inputs, the number of test sequences also depends on the number of branches. The test
sequences can be prioritised by prioritising the UIO-Pairs, and the test suite size can be reduced by generating
sequences only for the UIO-Pairs that are business-critical. In Table 1 the first row is the most critical scenario, in which the
end-user of the system is satisfied by receiving the tickets for the proposed itinerary. The fourth row pertains to
reservation cancellation due to overtime; such situations are very rare, so the priority for this row is low, and
therefore so is the priority of the test sequence generated from that UIO-Pair. The definition of a test sequence and the
evaluation of design coverage relate to the generation technique used; thus it is important to define the coverage criteria and
concepts followed in this paper. The test data is the set of inputs used in a test step during execution. A
test sequence is a set of test steps that trigger a sequence of tasks or operations to accomplish a logical flow of events
in the business process. The focus of the proposed technique is to validate the behavioural correctness of the system
using the generated test sequences: a test sequence validates the functional correctness and dependencies of the
operations. A test suite is a collection of test sequences.
Figure 2 gives the test sequence generation process for the web service composition. The proposed algorithm will
result in high coverage with minimal effort. The algorithm is analysed using the following coverage criteria.
4.1 Complete Sequence Coverage (CSCov)
The meaningful sequences are obtained by traversing the CPN model for all the UIO-Pairs and all the decision points. In
the CPN model a decision point is represented by a place with multiple outgoing arcs.
CSCov = (No. of UIO-Pairs and decision points used) / (Total No. of UIO-Pairs and decision points) --- (1)
4.2 Decision Coverage (DCov)
The test suite TS fulfils 100% decision coverage if there is at least one test sequence for every decision point in the
CPN model.
DCov = (No. of decision points used in sequences) / (Total No. of decision points) --- (2)
4.3 Input Output Coverage (IOCov)
The test suite TS satisfies Input Output coverage if there is at least one test sequence for every unique input output
pair of the business requirement/process.
IOCov = (No. of UIO-Pairs used in sequences) / (Total No. of UIO-Pairs) --- (3)
4.4 Transition Coverage (TCov)
The test suite TS satisfies this coverage if there exist test sequences such that every transition is traversed at least once
in some sequence.
TCov = (No. of transitions used in the test sequences) / (Total No. of transitions in the CPN Model) --- (4)
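All four ratios (1)-(4) reduce to the same set-based fraction: elements covered by the test suite over elements in the model. A minimal sketch (the function and argument names are ours) might look like:

```python
def coverage(used: set, total: set) -> float:
    """Generic coverage ratio: covered elements over total elements."""
    return len(used & total) / len(total) if total else 0.0

def cs_cov(used_pairs_and_dps, all_pairs_and_dps):
    return coverage(used_pairs_and_dps, all_pairs_and_dps)       # (1)

def d_cov(used_decision_points, all_decision_points):
    return coverage(used_decision_points, all_decision_points)   # (2)

def io_cov(used_uio_pairs, all_uio_pairs):
    return coverage(used_uio_pairs, all_uio_pairs)                # (3)

def t_cov(fired_transitions, all_transitions):
    return coverage(fired_transitions, all_transitions)          # (4)
```

For example, a suite that exercises one of two decision points yields a DCov of 0.5.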
Algorithm CTS-G (CPN-based Test Sequence Generation)
Begin
    Initialize the CPN Model
    For each UIO-Pair
        Let IP = input set of top UIO-Pair
        Let OP = output set of top UIO-Pair
        /* Reduce redundant traversal to decision point */
        For each DP
            If (IP part of DP) Then
                Copy test steps till DP
                T = Transitions after DP
                Exit For
            End If
        Next For
        Enable T that satisfy IP and pre-conditions
        While (P Not OP) Do
            /* Places specify the inputs needed for firing a transition */
            Choose an enabled transition (T) that influences OP
            Fire (T)
            /* Create test step for the test sequence relevant to UIO-Pair */
            Record the Places connected to T as input
            Record traversal in the test sequence
            Record resulting Places and Arc expressions as output
            If (T is web service operation) Then
                /* Automatically update traceability matrix (TM) */
                Update TM
            End If
            Analyse resulting place P
            If (P has multiple arcs) Then
                /* Decision points are Places with multiple arcs */
                Save decision points; DP = DP + P
                If (this is first decision point) Then
                    Path_Start = Initial Transition
                Else
                    Path_Start = Previous DP
                End If
                Path_End = Current DP
                Save test steps to Place along with pre-conditions
            End If
        End Do
        Remove UIO-Pair
        Calculate TCov, DCov
        Update test sequence with post-conditions and coverage
        Initialize CPN to decision point that is yet to be covered
    Next For
End
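The path-extraction core of CTS-G can be illustrated with a simplified, self-contained sketch: given a net encoded as a set of arcs, enumerate every path from an input place to an output place and record the decision points (nodes with multiple outgoing arcs) met along the way. This is an assumption-laden re-implementation of only the traversal step; the graph encoding and function names are ours, and the real algorithm additionally handles token data, pre-conditions, prioritization and the traceability matrix:

```python
def generate_sequences(arcs, start, goal):
    """Enumerate all acyclic paths from start to goal, tagging each with
    the decision points (multi-out-arc nodes) encountered on the way."""
    succ = {}
    for src, dst in arcs:                   # adjacency map of the net
        succ.setdefault(src, []).append(dst)

    sequences = []

    def walk(node, path, decisions):
        if node == goal:
            sequences.append((tuple(path), frozenset(decisions)))
            return
        nexts = succ.get(node, [])
        if len(nexts) > 1:                  # node with multiple out-arcs
            decisions = decisions | {node}  # is a decision point
        for nxt in nexts:
            if nxt not in path:             # avoid re-traversing a cycle
                walk(nxt, path + [nxt], decisions)

    walk(start, [start], frozenset())
    return sequences

# Toy net loosely modelled on the case study: after the availability
# check, the flow branches into booking success or failure.
arcs = [("order", "check"), ("check", "avail"), ("avail", "book"),
        ("avail", "fail"), ("book", "done"), ("fail", "done")]
seqs = generate_sequences(arcs, "order", "done")
```

On this toy net the traversal yields two sequences, one through "book" and one through "fail", and both record "avail" as the decision point, which is exactly the information the coverage criteria above consume.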
In the CPN model for web service composition, a test sequence is any path from one of the initial states to one of the
final states. In the case study, the 'Change Itinerary' flow takes the Proposed Itinerary as input, but the output
depends on the 'Check Seat Availability' operation. In this paper, external input and output pairs are the focus, and the
sequences that are missed are merged when the transition coverage is calculated. In the algorithm, each Place
represents the data type of a web service operation and each Transition represents a web service operation; intermediate
transitions and places are not recorded as part of test sequences. The generated test sequences can be used for black-box
testing of web service composition. Moreover, traceability is created to the business requirements and the web
services; to the best of our knowledge, traceability has not been considered in existing approaches. The gap
between the business domain and the implementation domain is also bridged in this approach.
5. Results and Discussion
In traditional MBT, models are created using requirements or specifications only. However, the existing
model-based test case generation approaches generate models from executable code: the BPEL code. Hence the
model created reflects the behaviour of the executable code rather than the expected behaviour of the system, and
using such a model for test case generation leads to validation of the BPEL code and not of the business requirements
that specify the actual system. In the proposed approach the CPN model used is a data-driven model that bridges
the gap between the service and business domains. Such a CPN model is already validated for the accuracy of the
system's functional design, so generating test cases from it is bound to produce effective and efficient test
cases. The algorithm also inherits another benefit by default from using a CPN model to derive test sequences: CPN
tools have been used exhaustively in the verification of web service compositions, be it choreography or
orchestration, and BPEL and WSCI code constructs have been transformed into CPN models to check the
reachability and soundness of the composition. Table 2 presents an analysis comparing the approaches that exist
for automatic generation of test sequences from a CPN model; Traceability refers to the traceability of the test
sequences, and Usage refers to the domain or applications that use the generated test sequences. In future work, the test suite
length and the algorithm complexity will also be analyzed. Another advantage of MBT is that the generated test
sequences aim at providing maximum testing coverage of the system under test. Generally, the
coverage-based adequacy metric relates to the model; the data-driven CPN model is created from the
business requirements and WSDLs, hence requirement coverage and web service coverage are inherited by
default if the whole model is covered. Table 3 presents the test sequence for the reservation timeout scenario. The
traveller enters the proposed itinerary and his personal details to order tickets for the trip. The airline reservation
system checks and verifies seat availability for the itinerary received from the travel agent. The reserved seat
details are sent to the traveller, or a timeout may occur, in which case the system enters the failure state
after notifying that time is the reason for failure. Here the UIO-Pair is row 3 of Table 1.
6. Conclusion
Testing is the most critical and expensive phase of the software development life cycle. The generation of test sequences
or cases is the most challenging part of the testing phase, as an efficient test design can detect a greater number of faults;
moreover, around 40% of software testing cost is spent on test design. In this paper we have presented an approach to
reduce that cost by automating the generation of test sequences for web service composition. We first
analysed the existing approaches for generating test cases from CPN models, then analysed the usage of CPN in
web service composition, and finally used the data-driven CPN model employed for verifying the design of the
composition, together with the UIO-Pairs created from the business process requirements, as input to create test sequences. As
opportunities for future work, on-the-fly test sequence prioritization and automatic tracing back to business
requirements can be taken up. Moreover, the decision points, which are places with multiple arcs, can be used to
reduce redundancy in test sequences.
References
[1] Thomas Erl (2005), Service-Oriented Architecture (SOA): Concepts, Technology, and Design. Prentice Hall PTR.
[2] Lizhi Cai, Juan Zhang, Zhenyu Liu (2011), A CPN-based Software Testing Approach, Journal of Software, Vol. 6, No. 3, pp. 468-474.
[3] Jing Liu, Xinming Ye, Jun Li (2011), Colored Petri Nets Model based Conformance Test Generation, IEEE Xplore, pp. 967-970.
[4] U. Farooq, C. P. Lam and H. Li (2008), Towards Automated Test Sequence Generation, 19th Australian Conference on Software Engineering, IEEE Computer Society.
[5] H. Watanabe and T. Kudoh (1995), Test Suite Generation Methods for Concurrent Systems based on Coloured Petri Nets, 2nd Asia-Pacific Software Engineering Conference (APSEC 1995), Brisbane, Australia, pp. 242-251.
[6] Mustafa Bozkurt, Mark Harman and Youssef Hassoun (2009), Testing & Verification in Service-Oriented Architecture: A Survey, Software Testing, Verification and Reliability, Wiley InterScience, pp. 1-67.
[7] Jose Garcia-Fanjul, Javier Tuya, Claudio de la Riva (2006), Generating Test Cases Specifications for BPEL Compositions of Web Services Using SPIN, International Workshop on Web Services Modeling and Testing, pp. 83-94.
[8] Y. Zheng, J. Zhou, P. Krause (2007), A Model Checking based Test Case Generation Framework for Web Services, International Conference on Information Technology.
[9] Y. Zheng, P. Krause, Automata Semantics and Analysis of BPEL, International Conference on Digital Ecosystems and Technologies.
[10] X. Fu, T. Bultan, J. Su (2004), Analysis of Interacting BPEL Web Services, International Conference on World Wide Web, May 17-22, New York, USA.
[11] M. Lallali, F. Zaidi, A. Cavalli (2008), Transforming BPEL into Intermediate Format Language for Web Services Composition Testing, 4th IEEE International Conference on Next Generation Web Services Practices.
[12] M. Lallali, F. Zaidi, A. Cavalli, Iksoon Hwang (2008), Automatic Timed Test Case Generation for Web Services Composition, Sixth European Conference on Web Services, Dublin, Ireland, Nov 12-14.
[13] Tien-Dung Cao, Patrick Felix, Richard Castanet and Ismail Berrada (2009), Testing Web Services Composition using the TGSE Tool, 2009 Congress on Services-I, IEEE Computer Society, pp. 187-194.
[14] A. T. Endo, A. S. Simão, S. R. S. Souza, and P. S. L. Souza (2008), Web services composition testing: A strategy based on structural testing of parallel programs, TAIC-PART '08: Proceedings of the Testing: Academic & Industrial Conference - Practice and Research Techniques, Windsor, UK: IEEE Computer Society, pp. 3-12.
[15] J. Yan, Z. Li, Y. Yuan, W. Sun, and J. Zhang (2006), BPEL4WS unit testing: Test case generation using a concurrent path analysis approach, ISSRE '06: Proceedings of the 17th International Symposium on Software Reliability Engineering, Raleigh, NC, USA: IEEE Computer Society, pp. 75-84.
[16] Y. Yuan, Z. Li, and W. Sun (2006), A graph-search based approach to BPEL4WS test generation, ICSEA '06: Proceedings of the International Conference on Software Engineering Advances, Tahiti, French Polynesia: IEEE Computer Society, p. 14.
[17] S. S. Hou, L. Zhang, Q. Lan, H. Mei, and J. S. Sun (2009), Generating effective test sequences for BPEL testing, QSIC 2009: Proceedings of the 9th International Conference on Quality Software, Jeju, Korea: IEEE Computer Society Press.
[18] Z. Guangquan, R. Mei, and Z. Jun (2007), A business process of web services testing method based on UML 2.0 activity diagram, IITA '07: Proceedings of the Workshop on Intelligent Information Technology Application, Nanchang, China: IEEE Computer Society, pp. 59-65.
[19] A. Tarhini, H. Fouchal, and N. Mansour (2006), Regression testing web services-based applications, Proceedings of the 4th ACS/IEEE International Conference on Computer Systems and Applications, Sharjah, UAE: IEEE Computer Society, pp. 163-170.
[20] A. Tarhini, H. Fouchal, and N. Mansour (2005), A simple approach for testing web service based applications, Proceedings of the 5th International Workshop on Innovative Internet Community Systems (IICS 2005), ser. Lecture Notes in Computer Science, vol. 3908, Paris, France: Springer, pp. 134-146.
[21] Wei Tan, Yushun Fan, MengChu Zhou and Zhong Tian (2010), Data-Driven Service Composition in Enterprise SOA Solutions: A Petri Net Approach, IEEE Transactions, pp. 686-695.
[22] Xinguo Deng, Ziyu Lin, Weiqing Cheng, Ruliang Xiao, Ling Li, Lina Fang (2007), Modeling and Verifying Web Service Composition Using Colored Petri Nets Based On WSCI, IEEE, pp. 1863-1867.
Poonkavithai Kalamegam received B.Tech degree in Computer Science and Engineering
from Pondicherry University and M.E. degree in Computer Science from Anna University.
Her research interest includes Service Oriented Architecture, Model Based Testing, and Web
Service Composition Testing. She has around 10 years of industry experience in functional
testing of banking domain applications. She has been involved in all phases of testing,
starting from estimation for testing phase to closure with test summary report. She has
worked in Cognizant Technology Solutions for 6 years. JP Morgan Chase Bank, Dutsche
Bank and Boeing Financials are some of the clients she has worked for. She is currently pursuing PhD degree in
the area Web Service Composition Testing in Pondicherry Engineering College.
Dr. G. Zayaraz is currently working as Associate Professor in Computer Science & Engineering
at Pondicherry Engineering College, Puducherry, India. He received his Bachelor's, Master's, and
Doctorate degrees in Computer Science & Engineering from Pondicherry University. He has
published more than thirty-five research papers in reputed international journals and
conferences. His areas of specialization include Software Architecture and Information Security.
He is a reviewer/editorial member for several reputed international journals and conferences
and a Life Member of CSE and ISTE.
Figure 1. CPN Model for web service composition
Figure 2. Test Suite generation process (components: Business Requirements, WSDLs, CPN Model Generator,
UIO-Pairs, Test Data, Data Driven CTS, Test Sequences, Evaluation)
Table 1. UIO-Pairs

SNo  Input                                      Output
1    Traveller details, Proposed Itinerary      Success (Tickets Booked)
2    Cancel Itinerary, Reserved Seat Details    Failure notification (Cancel Itinerary)
3    Cancel Reservation, Reserved Itinerary     Failure notification (Cancel Reservation)
4    AirLine Reservation Failure, OverTime      Failure notification (Timeout notification)
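The UIO-pairs above act as state-identification oracles when generated test sequences are evaluated. A minimal sketch, with hypothetical names not taken from the paper, of encoding them as a lookup so an observed output can be checked against the expected one:

```python
# Hypothetical sketch: the UIO-pairs of Table 1 as a lookup table.
# Each unique input pair maps to the single expected output.
UIO_PAIRS = {
    ("Traveller details", "Proposed Itinerary"): "Success (Tickets Booked)",
    ("Cancel Itinerary", "Reserved Seat Details"): "Failure notification (Cancel Itinerary)",
    ("Cancel Reservation", "Reserved Itinerary"): "Failure notification (Cancel Reservation)",
    ("AirLine Reservation Failure", "OverTime"): "Failure notification (Timeout notification)",
}

def check_uio(inputs, observed_output):
    """Return True if the observed output matches the UIO-pair expectation."""
    expected = UIO_PAIRS.get(tuple(inputs))
    return expected is not None and expected == observed_output
```

During evaluation, each step of a generated test sequence would supply its input pair and the composition's observed response; a mismatch flags the step as a failed verdict.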
Table 2. Comparison of existing CPN-based approaches to test sequence generation

Reference                               Traceability                    Usage        Redundancy   Test Priority
Lizhi Cai, J. Zhang, Zhenyu Liu [2]     To model                        Generic      Not Handled  Not Handled
Jing Liu, Xinming Ye, Jun Li [3]        To model                        GUI Apps     Not Handled  Not Handled
U. Farooq, C. P. Lam and H. Li [4]      To AD and model                 SOA          Limited      Not Handled
H. Watanabe and T. Kudoh [5]            To model                        Concurrency  Not Handled  Not Handled
Our Work                                To Bus. Req. and Web Services   SOA          Handled      Handled