A Web service is a software system designed to support interoperable machine-to-machine interaction over a network. Web services provide a standard means of interoperation between different software applications running on a variety of platforms and/or frameworks. One of their main advantages is the ability to integrate with other services through web service composition and thereby realize the required functionality. This paper presents a new paradigm for dynamic web service composition that pairs network analysis with backtracking. An algorithm called "Zeittafel" for selecting and scheduling the services to be composed is also presented. The proposed system achieves a higher job success rate than the existing methodology.
Effective and Efficient Job Scheduling in Grid Computing - Aditya Kokadwar
The integration of remote and diverse resources and the increasing computational needs of Grand Challenge problems, combined with the rapid growth of the internet and communication technologies, have led to the development of global computational grids. Grid computing is a prevailing technology that unites underutilized resources to support the sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential to achieve the promised capacity of grids. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate resources at minimum cost, merging this with a conventional job grouping strategy to provide better and more efficient job scheduling that benefits both the user and the resource broker. The proposed approach employs a dynamic cost-based job scheduling algorithm to map jobs efficiently to available grid resources. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
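To make the grouping-plus-cost idea concrete, here is a minimal Python sketch of coarse-grained scheduling: fine-grained jobs are merged up to a granularity limit, and each group is sent to the resource with the lowest estimated cost. The job lengths, resource table, and function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of cost-based scheduling with job grouping:
# small jobs are batched up to a granularity limit, then each group
# is assigned to the cheapest resource for that group.

def group_jobs(job_lengths, granularity_mi):
    """Merge fine-grained jobs into groups no larger than granularity_mi."""
    groups, current = [], []
    for length in sorted(job_lengths):
        if sum(current) + length > granularity_mi and current:
            groups.append(current)
            current = []
        current.append(length)
    if current:
        groups.append(current)
    return groups

def allocate(groups, resources):
    """Assign each group to the resource with minimum estimated cost."""
    schedule = []
    for g in groups:
        total_mi = sum(g)
        # cost = execution time (MI / MIPS) * price per second
        best = min(resources, key=lambda r: (total_mi / r["mips"]) * r["cost"])
        schedule.append((g, best["name"]))
    return schedule

resources = [{"name": "R1", "mips": 500, "cost": 0.9},
             {"name": "R2", "mips": 200, "cost": 0.3}]
print(allocate(group_jobs([40, 120, 60, 300, 90], granularity_mi=250), resources))
```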
This document proposes ANGEL, an agent-based scheduling algorithm for real-time tasks in virtualized cloud environments. It employs a bidirectional announcement-bidding mechanism between agents to allocate tasks and dynamically provision resources. The mechanism consists of three phases: basic matching, forward announcement-bidding, and backward announcement-bidding. ANGEL also dynamically adds virtual machines to improve schedulability. Extensive experiments show ANGEL efficiently solves real-time task scheduling in clouds.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID - ijgca
Grid computing is an accumulation of heterogeneous, dynamic, geographically distributed resources from multiple administrative domains that can be utilized to reach a common goal. Developing resource-provisioning-based scheduling in large-scale distributed environments such as grids brings new requirements and challenges that do not arise in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it, and the environment alone cannot adequately fulfill user requirements. Satisfying users while provisioning resources can increase the benefit to resource suppliers. Resource scheduling has to satisfy multiple constraints specified by the user, and selecting a resource that satisfies all of them is a tedious process. This problem is addressed with a particle swarm optimization (PSO) based heuristic scheduling algorithm that attempts to select the most suitable resource from the set of available resources. The primary parameters used in this work to select the most suitable resource are makespan and cost. Experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
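A compact sketch of the kind of PSO-based mapping this abstract describes, with a fitness that weights makespan and cost; the particle encoding, weights, and all data below are illustrative assumptions rather than the paper's algorithm.

```python
import random

# Hypothetical sketch: each particle position maps task i -> resource index,
# and fitness combines makespan and total cost, as in the abstract.

TASKS = [200, 500, 120, 300]                   # task lengths (MI), assumed
RES = [(400, 0.8), (250, 0.4), (100, 0.1)]     # (MIPS, cost/sec), assumed

def fitness(mapping, w_time=0.7, w_cost=0.3):
    finish = [0.0] * len(RES)
    cost = 0.0
    for task, r in zip(TASKS, mapping):
        t = task / RES[r][0]
        finish[r] += t
        cost += t * RES[r][1]
    return w_time * max(finish) + w_cost * cost    # weighted makespan + cost

def pso(n_particles=20, iters=100):
    dim, n_res = len(TASKS), len(RES)
    pos = [[random.uniform(0, n_res - 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: fitness([round(x) for x in p]))
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - p[d])
                             + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vel[i][d], 0), n_res - 1)
            if fitness([round(x) for x in p]) < fitness([round(x) for x in pbest[i]]):
                pbest[i] = p[:]
        gbest = min(pbest + [gbest], key=lambda p: fitness([round(x) for x in p]))
    return [round(x) for x in gbest], fitness([round(x) for x in gbest])

print(pso())
```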
AN ADAPTIVE APPROACH FOR DYNAMIC RECOVERY DECISIONS IN WEB SERVICE COMPOSITIO... - ijwscjournal
Service Oriented Architecture facilitates automatic execution and composition of web services in a distributed environment. Service composition in a heterogeneous environment may suffer from various kinds of service failures, which interrupt the execution of composite web services and lead to complete system failure. Dynamic recovery decisions for failed services depend on the non-functional attributes of the services. In recent years, various methodologies have been presented that provide recovery decisions based on time-related QoS (Quality of Service) factors. These QoS attributes can be categorized further; our paper categorizes them as space and time. In this paper, we propose an affinity model to quantify location affinity for the composition of web services. Furthermore, we suggest a replication mechanism and an algorithm for taking recovery decisions based on time- and space-based QoS parameters and the user's service usage patterns.
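As a rough illustration of a time-plus-space recovery decision, the sketch below scores candidate replicas of a failed service by weighted response time and location affinity; the weights, attributes, and selection rule are hypothetical, not the paper's affinity model.

```python
# Hypothetical scoring of replacement candidates for a failed service,
# combining time QoS (response time) and space QoS (location affinity).

def affinity_score(candidate, failed, w_time=0.6, w_space=0.4):
    time_score = 1.0 / (1.0 + candidate["resp_ms"])            # faster is better
    space_score = 1.0 if candidate["region"] == failed["region"] else 0.3
    return w_time * time_score + w_space * space_score

failed = {"name": "PaymentSvc", "region": "eu-west"}           # illustrative names
replicas = [
    {"name": "PaymentSvc-A", "resp_ms": 120, "region": "eu-west"},
    {"name": "PaymentSvc-B", "resp_ms": 60,  "region": "us-east"},
]
best = max(replicas, key=lambda c: affinity_score(c, failed))
print("recover with:", best["name"])
```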
Recently, with the increasing development of distributed computer systems (DCSs) in networked industrial and manufacturing applications on the World Wide Web (WWW) platform, including service-oriented architectures and QoS-aware Web of Things systems, it has become important to predict Web performance. In this paper, we present Web performance prediction over time by forecasting the download time of a Web resource using the Efficient Turning Bands geostatistical simulation method. Real-life data for the research were obtained from our own website, named "Distributed forecasting system", by generating log files from the website and monitoring a group of Web clients on a connected LAN. We apply a spatio-temporal prediction method to the download time of a particular file: forecasts are first computed with the standard Turning Bands (TB) method, and to improve accuracy further, the Efficient Turning Bands method additionally employs a Naive Bayes algorithm; the results of the two methods are then compared. The Efficient Turning Bands method shows good forecasting quality for Web performance prediction.
FDMC: Framework for Decision Making in Cloud for Efficient Resource Management - IJECEIAES
Effective resource management is one of the critical success factors for a precise virtualization process in cloud computing in the presence of dynamic user demands. A review of existing research on resource management in the cloud shows that there is still considerable scope for enhancement: existing techniques do not fully exploit the capabilities of virtual machines when performing resource allocation. This paper presents FDMC, a Framework for Decision Making in Cloud, which gives VMs better capability to perform resource allocation. The contribution of FDMC is a joint operation of VMs that ensures faster task processing and thereby withstands increasing traffic. The study outcome was compared with existing systems and shows that FDMC performs better in terms of task allocation time, wasted cores, wasted storage, and communication cost.
Information Extraction using Rule based Software Agents in Knowledge Grid - idescitation
Effective information extraction methods are necessary for the successful processing and handling of document collections. A distributed team work environment requires team knowledge management. A knowledge flow exists in team work processes; it reflects knowledge-level cooperation in team work, which in turn defines the effectiveness of the team. Distributed software development teams focus on work cooperation and resource sharing between members during the software development life cycle, and the knowledge flow should dynamically reflect this cognitive cooperation process, so that each team member can use the experience predecessors accumulated during previous projects and avoid redundant work. With the advent of networks, the system specification may be done in one geographic area and the design in another. The entire software development process has distributed resources, such as five generic upper-level ontologies and a knowledge-based (KB) issues-and-solutions ontology; issue-and-solution pairs are evaluated against organizational goals, priorities, cost, and timeliness. In this paper, we present the challenges in distributed team environments and information extraction mechanisms, focusing on a text marker system and its applications.
Grid computing can involve a large number of computational tasks that require trustworthy computational nodes. Load balancing in grid computing is a technique that optimizes the overall process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing, but it differs from conventional distributed computing in that it tends to be heterogeneous, more loosely coupled, and geographically dispersed. Optimizing this process means maximizing resource utilization, balancing the load on each processing unit, and decreasing the overall completion time. Evolutionary algorithms such as genetic algorithms have been studied for implementing load balancing across grid networks, but a problem with genetic algorithms is that they are quite slow when a large number of tasks must be processed. In this paper we give a novel parallel genetic algorithm approach for enhancing the overall performance and optimization of the load balancing process across grid nodes.
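The parallelism argument can be illustrated with a small sketch in which fitness evaluation, the dominant cost for large task sets, is farmed out to worker processes; the chromosome (a task-to-node mapping) and the imbalance fitness are assumed for illustration only.

```python
import random
from multiprocessing import Pool

random.seed(7)                       # so worker processes see the same costs
N_TASKS, N_NODES = 60, 6
TASK_COST = [random.randint(1, 20) for _ in range(N_TASKS)]

def fitness(chromosome):
    """Lower is better: load of the most loaded node (the makespan)."""
    load = [0] * N_NODES
    for task, node in enumerate(chromosome):
        load[node] += TASK_COST[task]
    return max(load)

def breed(ranked, elite=10):
    """Keep the elite, fill the rest with crossover + mutation."""
    children = []
    while len(children) < len(ranked) - elite:
        a, b = random.sample(ranked[:elite], 2)
        cut = random.randrange(N_TASKS)
        child = a[:cut] + b[cut:]                                     # one-point crossover
        child[random.randrange(N_TASKS)] = random.randrange(N_NODES)  # point mutation
        children.append(child)
    return ranked[:elite] + children

if __name__ == "__main__":
    pop = [[random.randrange(N_NODES) for _ in range(N_TASKS)] for _ in range(40)]
    with Pool() as pool:
        for _ in range(50):
            scores = pool.map(fitness, pop)        # parallel fitness evaluation
            ranked = [c for _, c in sorted(zip(scores, pop))]
            pop = breed(ranked)
        print("best makespan:", min(pool.map(fitness, pop)))
```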
Cloud computing has become an important topic in the area of high-performance distributed computing. Task scheduling is one of the most significant issues in cloud computing, since the user pays for resource usage based on time; distributing cloud resources among users' applications should therefore maximize resource utilization and minimize task execution time. The goal of task scheduling is to assign tasks to appropriate resources so as to optimize one or more performance parameters (completion time, cost, resource utilization, etc.). Scheduling belongs to the category of NP-complete problems, so heuristic algorithms can be applied to solve it. In this paper, an enhanced dependent task scheduling algorithm based on a Genetic Algorithm (DTGA) is introduced for mapping and executing an application's tasks, with the aim of minimizing completion time. The performance of the proposed algorithm has been evaluated using the WorkflowSim toolkit and the Standard Task Graph Set (STG) benchmark.
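A minimal sketch of GA-based dependent-task scheduling in the spirit of DTGA: a chromosome assigns each task a VM, the fitness is the makespan of a precedence-respecting schedule, and the DAG, VM speeds, and GA settings below are illustrative assumptions.

```python
import random

# Hypothetical DAG: task -> list of predecessors; runtimes in MI; VM speeds in MIPS.
PRED = {0: [], 1: [0], 2: [0], 3: [1, 2]}
RUNTIME = {0: 100, 1: 300, 2: 200, 3: 150}
VM_MIPS = [100, 250]

def makespan(assign):
    """Earliest-finish schedule honouring precedence; assign[t] is t's VM."""
    vm_free = [0.0] * len(VM_MIPS)
    finish = {}
    for t in sorted(PRED):                 # 0..3 is already a topological order here
        ready = max((finish[p] for p in PRED[t]), default=0.0)
        start = max(ready, vm_free[assign[t]])
        finish[t] = start + RUNTIME[t] / VM_MIPS[assign[t]]
        vm_free[assign[t]] = finish[t]
    return max(finish.values())

def ga(pop_size=30, iters=200):
    pop = [[random.randrange(len(VM_MIPS)) for _ in PRED] for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(PRED))
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:      # mutation: reassign one task's VM
                child[random.randrange(len(PRED))] = random.randrange(len(VM_MIPS))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = ga()
print(best, makespan(best))
```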
The document discusses optimization of resource allocation in cloud environments using a modified particle swarm optimization (PSO) approach. It proposes a Modified Resource Allocation Mutation PSO (MRAMPSO) strategy that uses an Extended Multi Queue Scheduling algorithm to schedule tasks based on resource availability and reschedules failed tasks. The MRAMPSO strategy is compared to standard PSO and other algorithms to show it can reduce execution time, makespan, transmission cost, and round trip time.
ANALYSIS OF THRESHOLD BASED CENTRALIZED LOAD BALANCING POLICY FOR HETEROGENEO... - ijait
Heterogeneous machines can perform significantly better than homogeneous machines, but this requires an effective workload distribution policy: the performance potential is fully realized only when the system designer overcomes load imbalance within the system. Load distribution and load balancing policies together can reduce total execution time and increase system throughput. In this paper, we provide an algorithmic analysis of a threshold-based job allocation and load balancing policy for heterogeneous systems, in which all incoming jobs are judiciously and transparently distributed among sharing nodes on the basis of the jobs' requirements and processor capability, to maximize performance and reduce execution time. A brief discussion of the job allocation, transfer, and location policies is given, with an explanation of how the load imbalance condition is resolved within the system. A flow of the scheme is given with essential code, and an analysis of the present algorithm shows how it is better.
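A toy version of such a threshold policy, under assumed data structures: a job is accepted where the post-assignment load stays under the threshold, and otherwise transferred to the least-loaded node.

```python
# Hypothetical threshold policy: a node accepts a job while its load stays
# under its capacity threshold; otherwise the job is transferred to the
# least-loaded node, as in the location policy described above.

THRESHOLD = 0.8    # fraction of capacity, assumed

def place_job(job_load, nodes):
    """nodes: list of dicts with 'name', 'load', 'capacity'."""
    for node in nodes:
        if (node["load"] + job_load) / node["capacity"] <= THRESHOLD:
            node["load"] += job_load
            return node["name"]                       # first-fit local acceptance
    target = min(nodes, key=lambda n: n["load"] / n["capacity"])
    target["load"] += job_load                        # transfer to least-loaded node
    return target["name"]

nodes = [{"name": "fast", "load": 70, "capacity": 100},
         {"name": "slow", "load": 10, "capacity": 40}]
for job in (12, 20, 9):
    print(job, "->", place_job(job, nodes))
```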
An Improved Design of Contract Net Trust Establishment Protocol - IDES Editor
Contract Net Protocol (CNP) is a FIPA-standardized high-level communication protocol that specifies how software agents should communicate. However, it lacks methods for ensuring the trust and reliability of the agents participating in the communication. In an earlier paper, the authors proposed a variation of CNP that adds a trust establishment feature, termed the Contract Net Trust Establishment Protocol (CNTEP). Efficient communication, however, cannot be ensured unless the communicating counterpart is also reliable. This fact motivated the present work, which extends CNTEP by incorporating a reliability computation component.
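One plausible (hypothetical) form of such a reliability computation is to blend an agent's fulfilled-to-awarded contract ratio with prior trust; the actual CNTEP computation is defined in the cited paper.

```python
# Hypothetical reliability bookkeeping for contract-net participants:
# reliability = fulfilled / awarded, blended with prior trust.

class AgentRecord:
    def __init__(self, trust=0.5):
        self.trust = trust            # prior trust in [0, 1]
        self.awarded = 0
        self.fulfilled = 0

    def record(self, success):
        self.awarded += 1
        self.fulfilled += int(success)

    def reliability(self, w=0.7):
        if self.awarded == 0:
            return self.trust
        return w * (self.fulfilled / self.awarded) + (1 - w) * self.trust

a = AgentRecord(trust=0.6)
for outcome in (True, True, False, True):
    a.record(outcome)
print(round(a.reliability(), 3))      # contract awards can then prefer high scores
```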
A novel scheduling algorithm for cloud computing environment - Souvik Pal
The document describes a proposed genetic algorithm-based scheduling approach for cloud computing environments. It aims to minimize waiting time and queue length. The algorithm first permutes task burst times and finds minimum waiting times using FCFS and genetic algorithms. It then applies a queuing model to the sequences with minimum waiting time from each approach. Experimental results on 4 sample tasks show the genetic algorithm reduces waiting time compared to FCFS. The genetic operators of selection, crossover and mutation are applied to evolve optimal task scheduling sequences.
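The FCFS baseline in this summary reduces to a cumulative sum of burst times, as the sketch below shows with illustrative bursts; a GA that reorders tasks toward shorter-first sequences lowers the same average.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each task = sum of bursts of all tasks before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

bursts = [8, 3, 10, 2]                      # illustrative burst times
waits = fcfs_waiting_times(bursts)
print(waits, "avg:", sum(waits) / len(waits))
# For the FCFS order [8, 3, 10, 2] the average wait is 10.0; a reordering
# toward shortest-job-first, [2, 3, 8, 10], reduces it to 5.0, which is the
# kind of improvement the evolved sequences aim for.
```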
Planning guides project implementation so that development proceeds according to the planned time and cost. Controlling the discrepancy between the initial plan and its realization during implementation requires project management, and therefore an optimization analysis of project duration, so that it is known how long the project will take and opportunities for accelerating it can be identified using the Project Evaluation and Review Technique (PERT) and the Critical Path Method (CPM). This research aims to apply the PERT and CPM methods to find optimal solutions and to control time and cost performance in project scheduling. The research used a case study of a hospital project in Bogor District, Indonesia, collecting data through direct observation and interviews with the contractor. Based on these data, a schedule was created using the PERT and CPM methods, against which time and cost performance were measured, with the aim of overcoming problems of project control and completion. The results show that the PERT and CPM methods optimize the project: PERT reduced the work duration by 12 days (13.18%), while CPM reduced it by 31 days (34.06%) but increased direct cost by Rp 112,208,300 (0.25%).
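For reference, the standard PERT three-point estimate behind duration figures of this kind (the general formula, not this case study's data):

```python
# PERT three-point estimate: expected duration t_e = (a + 4m + b) / 6 and
# variance ((b - a) / 6) ** 2, where a, m, b are the optimistic, most-likely,
# and pessimistic durations of an activity.

def pert_estimate(a, m, b):
    t_e = (a + 4 * m + b) / 6
    var = ((b - a) / 6) ** 2
    return t_e, var

# Illustrative activity: a=4, m=6, b=14 days -> t_e = 7.0 days, var = 25/9.
print(pert_estimate(4, 6, 14))
```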
An Improved Parallel Activity scheduling algorithm for large datasets - IJERA Editor
Parallel processing, one of the emerging concepts, is capable of executing a large number of tasks on a multiprocessor in the same time period. Complex computational problems can be resolved efficiently with the help of parallel processing. Parallel processing systems can be divided into two categories depending on the nature of the tasks: homogeneous and heterogeneous. In the homogeneous environment, the processors executing different tasks are similar in capacity; in heterogeneous environments, tasks are allocated to processors of different capacity and speed. The main objective of parallel processing is to optimize execution speed and shorten the duration of task execution independently of the environment. In this proposed work, an optimized parallel project selection method is implemented to find optimal resource utilization and project scheduling. With the task scheduling algorithm allocating different tasks to various processors, the execution speed of tasks increases and the overall average execution time decreases.
A HEURISTIC APPROACH FOR WEB-SERVICE DISCOVERY AND SELECTION - ijcsit
This document proposes a new heuristic approach for web service discovery and selection using an algorithm inspired by honey bee behavior, the Bees Algorithm. The approach structures service registries by domain to simplify discovery, and uses the Bees Algorithm as an intelligent search method to efficiently find, in the least time, the optimal service in the relevant registry matching a client's request and quality-of-service requirements.
Task scheduling methodologies for high speed computing systems - ijesajournal
High-speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when a computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling provides the parallel processing. Efficient task scheduling is critical to obtaining optimized performance in heterogeneous computing systems (HCS). In this paper, we review various application scheduling models that provide parallelism for homogeneous and heterogeneous computing systems, as well as scheduling methodologies targeted at high-speed computing systems, and prepare a summary chart. The comparative study is based on attributes of both platform and application: execution time, nature of task, task handling capability, and type of host and computing platform. The summary chart demonstrates the need to develop scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high-speed computing platform for real-time applications.
Comparative Analysis of Various Grid Based Scheduling Algorithms - iosrjce
The document discusses scheduling algorithms for desktop grid computing. Desktop grid computing harnesses idle computing resources from networked computers to work together on computationally intensive applications. Effective scheduling is important for utilizing resources on a desktop grid due to factors like heterogeneous capabilities, failures, volatility of resources, and lack of trust. The document proposes several scheduling mechanisms for desktop grids, including resource grouping, reputation-based scheduling, fault tolerant scheduling, and agent-based autonomous scheduling using availability information. These scheduling mechanisms aim to increase reliability, performance and decrease overhead compared to existing scheduling approaches for desktop grids.
SURVEY ON SCHEDULING AND ALLOCATION IN HIGH LEVEL SYNTHESIS - cscpconf
This paper presents a detailed survey of the scheduling and allocation techniques in High Level Synthesis (HLS) presented in the research literature, along with methodologies and techniques for improving speed, (silicon) area, and power in HLS.
Survey of streaming data warehouse update scheduling - eSAT Journals
In this paper, we study the scheduling of update jobs for streaming data warehouses, which combine traditional data warehouses and data stream systems. Here, jobs are the processes responsible for loading new data into the tables, and the purpose of scheduling is to decrease data staleness, which is the scheduling metric considered. The approach also handles the challenges faced by streaming warehouses, such as data consistency, view hierarchies, heterogeneity among update jobs caused by dissimilar arrival times and data sizes, and update preemption.
Keywords: partitioning strategy, scalable scheduling, data stream management system.
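A minimal sketch of staleness-driven update scheduling under assumed inputs: the job whose table is most stale, weighted by priority, is released first.

```python
import heapq

# Hypothetical staleness-driven scheduler: always run the update job for the
# table whose data is most stale, weighted by table priority.

def schedule_updates(tables, now):
    """tables: list of (name, last_loaded, priority). Returns jobs in order."""
    heap = [(-(now - last) * prio, name) for name, last, prio in tables]
    heapq.heapify(heap)                        # max weighted staleness first
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

tables = [("orders", 95, 3), ("clicks", 80, 1), ("inventory", 90, 2)]
print(schedule_updates(tables, now=100))       # -> ['clicks', 'inventory', 'orders']
```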
This document discusses project scheduling techniques, specifically the critical path method (CPM). It provides definitions and explanations of key CPM concepts like critical path, float, earliest and latest event times. It also presents the algorithms for performing CPM calculations on an activity-on-branch network, including the event numbering algorithm, earliest event time algorithm and latest event time algorithm. Sample network diagrams and calculations are provided to illustrate how CPM is implemented.
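A compact sketch of the forward and backward passes the document describes, on an assumed activity-on-node example (the document's networks are activity-on-branch, but the earliest/latest-time arithmetic is the same):

```python
# Minimal CPM sketch: forward pass gives earliest start/finish, backward pass
# gives latest start/finish; zero-float activities form the critical path.

DUR = {"A": 3, "B": 2, "C": 4, "D": 2}                 # illustrative durations
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def cpm(dur, pred):
    order = list(dur)                                  # already topological here
    es, ef = {}, {}
    for a in order:                                    # forward pass
        es[a] = max((ef[p] for p in pred[a]), default=0)
        ef[a] = es[a] + dur[a]
    makespan = max(ef.values())
    lf, ls = {}, {}
    for a in reversed(order):                          # backward pass
        succs = [s for s in order if a in pred[s]]
        lf[a] = min((ls[s] for s in succs), default=makespan)
        ls[a] = lf[a] - dur[a]
    floats = {a: ls[a] - es[a] for a in order}
    critical = [a for a in order if floats[a] == 0]
    return makespan, floats, critical

print(cpm(DUR, PRED))   # makespan 9, critical path A-C-D (B has float 2)
```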
This document discusses the relationship between work breakdown structures (WBS), network diagrams, and risk management in performance technology projects. It states that WBS, network diagrams, and risk management are interrelated processes used in the planning phase. A WBS breaks a project into manageable tasks, while a network diagram visually maps the sequence and dependencies of tasks. Together, a WBS and network diagram can help identify risks by estimating workloads, resources, and timelines. Managing risks through careful planning and continuous monitoring can help projects be completed on time and on budget.
A distributed system can be viewed as an environment in which a number of computers/nodes are connected and resources are shared among them. Unfortunately, distributed systems often face traffic problems that can degrade system performance. Traffic management is used to improve scalability and overall throughput in distributed systems built on Software Defined Networking (SDN); it improves performance by dividing the work traffic effectively among the participating nodes. Many traffic management algorithms have been proposed, and their performance is measured by parameters such as response time, resource utilization, and fault tolerance; they are broadly classified into two categories, scheduling-based and machine-learning-based. This work presents a performance analysis of traffic management algorithms, which can further help in the design of new algorithms. When multiple servers are assigned to compile code, processes are managed based on power efficiency, network bandwidth, and processor speed, and the output is sent back to the developer; if multiple programs must be compiled, an appropriate scheduling technique makes compilation faster and gives other processes a chance to compile. The work also considers an SDN-based clustering algorithm based on Simulated Annealing whose main goal is to increase network lifetime while maintaining adequate sensing coverage in scenarios where sensor nodes produce uniform or non-uniform data traffic.
EARLY PERFORMANCE PREDICTION OF WEB SERVICES - ijwscjournal
The document describes a methodology for early performance prediction of web services. It involves modeling web services using UML diagrams, simulating the model using a tool called SMTQA, and analyzing performance metrics. The methodology was applied to model a general web services system using use case and sequence diagrams. The model was simulated and found that internet connections and the service broker disk were bottlenecks based on high average waiting times and request dropping probabilities.
Referring Expressions with Rational Speech Act Framework: A Probabilistic App... - IJDKP
This paper focuses on the referring expression generation (REG) task, in which the aim is to pick out an object in a complex visual scene. One common theoretical approach models the task as a two-agent cooperative scheme in which a 'speaker' agent generates the expression that best describes a targeted area and a 'listener' agent identifies the target. Several recent REG systems have used deep learning approaches to represent the speaker/listener agents. The Rational Speech Act framework (RSA), a Bayesian approach to pragmatics that can predict human linguistic behavior quite accurately, has been shown to generate high-quality and explainable expressions on toy datasets involving simple visual scenes; its application to large-scale problems, however, remains largely unexplored. This paper applies a combination of the probabilistic RSA framework and deep learning approaches to larger datasets involving complex visual scenes, in a multi-step process aimed at generating better-explained expressions. We carry out experiments on the RefCOCO and RefCOCO+ datasets and compare our approach with other end-to-end deep learning approaches as well as a variation of RSA to highlight our key contribution. Experimental results show that while achieving lower accuracy than SOTA deep learning methods, our approach outperforms a similar RSA approach in human comprehension and has an advantage over end-to-end deep learning in limited-data scenarios. Lastly, we provide a detailed analysis of the expression generation process with concrete examples, giving a systematic view of error types and deficiencies in the generation process and identifying possible areas for future improvement.
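For readers unfamiliar with RSA, a toy sketch of the literal-listener/pragmatic-speaker recursion on an assumed lexicon (far simpler than the paper's deep-learning-backed agents):

```python
# Minimal RSA sketch: literal listener L0, pragmatic speaker S1, pragmatic
# listener L1, over a toy lexicon (objects and truth values are assumed).

OBJECTS = ["blue_circle", "blue_square", "green_square"]
TRUTH = {  # utterance -> objects it literally applies to
    "blue":   {"blue_circle", "blue_square"},
    "circle": {"blue_circle"},
    "square": {"blue_square", "green_square"},
    "green":  {"green_square"},
}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()} if z else d

def L0(utt):
    """Literal listener: uniform over objects the utterance is true of."""
    return normalize({o: 1.0 if o in TRUTH[utt] else 0.0 for o in OBJECTS})

def S1(obj, alpha=1.0):
    """Pragmatic speaker: prefers utterances a literal listener resolves to obj."""
    return normalize({u: L0(u)[obj] ** alpha for u in TRUTH})

def L1(utt):
    """Pragmatic listener: Bayesian inversion of the speaker, uniform prior."""
    return normalize({o: S1(o).get(utt, 0.0) / len(OBJECTS) for o in OBJECTS})

# The pragmatic speaker prefers "circle" for blue_circle (unambiguous) even
# though "blue" is also literally true of it.
dist = S1("blue_circle")
print(max(dist, key=dist.get))
```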
EARLY PERFORMANCE PREDICTION OF WEB SERVICES - ijwscjournal
A Web service is an interface that implements business logic. Performance is an important quality aspect of Web services because of their distributed nature, and predicting it during the early stages of software development is significant. In this paper we model a web service using Unified Modeling Language use case, sequence, and deployment diagrams. We obtain performance metrics by simulating the web service model with the Simulation of Multi-Tier Queuing Architecture (SMTQA) tool and identify the bottleneck resources.
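Bottleneck identification of this kind reduces to queueing arithmetic; below is a hedged sketch using textbook M/M/1 formulas with illustrative rates (SMTQA's actual model may differ).

```python
# M/M/1 estimates per resource: utilization rho = lam/mu and mean queueing
# delay Wq = rho / (mu - lam); the resource with the highest Wq is flagged
# as the likely bottleneck. The rates below are illustrative only.

RESOURCES = {             # name: (arrival rate, service rate) in req/sec
    "internet_link": (45.0, 50.0),
    "broker_disk":   (30.0, 33.0),
    "app_server":    (30.0, 60.0),
}

def mm1_wait(lam, mu):
    rho = lam / mu
    return rho / (mu - lam) if lam < mu else float("inf")

waits = {name: mm1_wait(*rates) for name, rates in RESOURCES.items()}
for name, wq in sorted(waits.items(), key=lambda kv: -kv[1]):
    print(f"{name}: Wq = {wq:.3f}s")
# With these rates, the broker disk and internet link dominate waiting time,
# mirroring the kind of bottleneck finding reported in the abstract.
```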
The Cloud computing becomes an important topic
in the area of high performance distributed computing. On the
other hand, task scheduling is considered one the most significant
issues in the Cloud computing where the user has to pay for the
using resource based on the time. Therefore, distributing the
cloud resource among the users' applications should maximize
resource utilization and minimize task execution Time. The goal
of task scheduling is to assign tasks to appropriate resources that
optimize one or more performance parameters (i.e., completion
time, cost, resource utilization, etc.). In addition, the scheduling
belongs to a category of a problem known as an NP-complete
problem. Therefore, the heuristic algorithm could be applied to
solve this problem. In this paper, an enhanced dependent task
scheduling algorithm based on Genetic Algorithm (DTGA) has
been introduced for mapping and executing an application’s
tasks. The aim of this proposed algorithm is to minimize the
completion time. The performance of this proposed algorithm has
been evaluated using WorkflowSim toolkit and Standard Task
Graph Set (STG) benchmark.
The document discusses optimization of resource allocation in cloud environments using a modified particle swarm optimization (PSO) approach. It proposes a Modified Resource Allocation Mutation PSO (MRAMPSO) strategy that uses an Extended Multi Queue Scheduling algorithm to schedule tasks based on resource availability and reschedules failed tasks. The MRAMPSO strategy is compared to standard PSO and other algorithms to show it can reduce execution time, makespan, transmission cost, and round trip time.
ANALYSIS OF THRESHOLD BASED CENTRALIZED LOAD BALANCING POLICY FOR HETEROGENEO...ijait
Heterogeneous machines can be significantly better than homogeneous machines but for that an effective workload distribution policy is required. Maximum realization of the performance can be achieved when system designer will overcome load imbalance condition within the system. Load
distribution and load balancing policy together can reduce total execution time and increase system throughput.
In this paper; we provide algorithm analysis of a threshold based job allocation and load balancing policy for heterogeneous system where all incoming jobs are judiciously and transparently distributed among sharing nodes on the basis of jobs’ requirement and processor capability for the maximization of performance and decline in execution time. A brief discussion of job allocation, transfer and location policy is given with explanation of how load imbalance condition is solved within the system. A flow of scheme is given with essential code and analysis of present algorithm is given to show how this algorithm is better.
An Improved Design of Contract Net Trust Establishment ProtocolIDES Editor
Contract Net Protocol (CNP) is FIPA standardized
high level communication protocol which specifies the way
software agents should follow while communicating. However
it lacks methods for ensuring trust and reliability of the agents
participating in the communication. In an earlier paper
authors proposed a variation of CNP involving trust
establishment feature into it, termed as Contract Net Trust
Establishment Protocol (CNTEP). However, efficient
communication can not be ensured unless the communicating
counterpart is reliable. This fact provided the motivation for
the present work, which extends CNTEP and incorporates
reliability computation component in it.
A novel scheduling algorithm for cloud computing environmentSouvik Pal
The document describes a proposed genetic algorithm-based scheduling approach for cloud computing environments. It aims to minimize waiting time and queue length. The algorithm first permutes task burst times and finds minimum waiting times using FCFS and genetic algorithms. It then applies a queuing model to the sequences with minimum waiting time from each approach. Experimental results on 4 sample tasks show the genetic algorithm reduces waiting time compared to FCFS. The genetic operators of selection, crossover and mutation are applied to evolve optimal task scheduling sequences.
Planning is a guideline in implementing the project so that development can be
implemented in accordance with the time and cost planned. Control discrepancy
between initial plan and realization that exists in implementation project required a
project management, therefore required optimization analysis of project duration, so
it can be known how long a project is completed and look for the possibility of project
acceleration implementation by Project Evaluation and Review Technique (PERT)
and Critical Path Method (CPM) or critical path method. This research aims to apply
Project Evaluation and Review Technique (PERT) and Critical Project Management
(CPM) methods to find optimize solutions and control the performance of time and
cost in project scheduling. The research method used case study method at hospital
project in Bogor District, Indonesia, by collecting data direct observation and
interview results at contractor. Based on these data, create a schedule by using PERT
and CPM methods, which will be measured performance of time performance and
project cost which is expected to overcome the problem of controlling and completion
of project. The results of this study, using PERT and CPM methods proved to optimize
the project. Based on calculation by PERT method reduce duration of work: 12 days
(13, 18%). Based on calculation by CPM method reduce duration of work: 31 days
(34, 06%) but direct cost increase 112.208,300, - rupiahs (0, 25%).
An Improved Parallel Activity scheduling algorithm for large datasetsIJERA Editor
Parallel processing is capable of executing a large number of tasks on a multiprocessor at the same time period, and it is also one of the emerging concepts. Complex and computational problems can be resolved in an efficient way with the help of parallel processing. The parallel processing system can be divided into two categories depending on the nature of tasks such are homogenous parallel system and the heterogeneous parallel processing system. In the homogeneous environment, the number of processors required for executing different tasks is similar in capacity. In case of heterogeneous environments, tasks are allocated to various processors with different capacity and speed. The main objective of parallel processing is to optimize the execution speed and to shorten the duration of task execution with independent of environment. In this proposed work, an optimized parallel project selection method was implemented to find the optimal resource utilization and project scheduling. The execution speeds of the task increases and the overall average execution time of the task decreases by allocating different tasks to various processors with the task scheduling algorithm.
A HEURISTIC APPROACH FOR WEB-SERVICE DISCOVERY AND SELECTIONijcsit
This document proposes a new heuristic approach for web service discovery and selection using an algorithm inspired by honey bee behavior called the Bees Algorithm. The approach structures service registries by domain to simplify discovery. It uses the Bees Algorithm as an intelligent search method to efficiently find the optimal service matching a client's request and quality of service requirements from the relevant registry in least time.
Task scheduling methodologies for high speed computing systemsijesajournal
High Speed computing meets ever increasing real-time computational demands through the leveraging of
flexibility and parallelism. The flexibility is achieved when computing platform designed with
heterogeneous resources to support multifarious tasks of an application where as task scheduling brings
parallel processing. The efficient task scheduling is critical to obtain optimized performance in
heterogeneous computing Systems (HCS). In this paper, we brought a review of various application
scheduling models which provide parallelism for homogeneous and heterogeneous computing systems. In
this paper, we made a review of various scheduling methodologies targeted to high speed computing
systems and also prepared summary chart. The comparative study of scheduling methodologies for high
speed computing systems has been carried out based on the attributes of platform & application as well.
The attributes are execution time, nature of task, task handling capability, type of host & computing
platform. Finally a summary chart has been prepared and it demonstrates that the need of developing
scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS) which is an
emerging high speed computing platform for real time applications.
Comparative Analysis of Various Grid Based Scheduling Algorithmsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
The document discusses scheduling algorithms for desktop grid computing. Desktop grid computing harnesses idle computing resources from networked computers to work together on computationally intensive applications. Effective scheduling is important for utilizing resources on a desktop grid due to factors like heterogeneous capabilities, failures, volatility of resources, and lack of trust. The document proposes several scheduling mechanisms for desktop grids, including resource grouping, reputation-based scheduling, fault tolerant scheduling, and agent-based autonomous scheduling using availability information. These scheduling mechanisms aim to increase reliability, performance and decrease overhead compared to existing scheduling approaches for desktop grids.
International Refereed Journal of Engineering and Science (IRJES) is a peer reviewed online journal for professionals and researchers in the field of computer science. The main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IJRES provides the platform for the researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
SURVEY ON SCHEDULING AND ALLOCATION IN HIGH LEVEL SYNTHESIScscpconf
This paper presents the detailed survey of scheduling and allocation techniques in the High Level Synthesis (HLS) presented in the research literature. It also presents the methodologies and techniques to improve the Speed, (silicon) Area and Power in High Level Synthesis, which are presented in the research literature.
Survey of streaming data warehouse update schedulingeSAT Journals
In this paper, we study scheduling problem of updates for the streaming data warehouses. The streaming data warehouses are the combination of traditional data warehouses and data stream systems. In this, jobs are nothing but the processes which are responsible for loading new data in the tables. Its purpose is to decrease the data staleness. In addition, it handles well, the challenges faced by the streaming warehouses like, data consistency, view hierarchies, heterogeneity found in update jobs because of dissimilar arrival times as well as size of data, preempt updates etc. The staleness of data is the scheduling metric considered here. In this, jobs are nothing but the processes which are responsible for loading new data in the tables. Its purpose is to decrease the data staleness. In addition, it handles well, the challenges faced by the streaming warehouses like, data consistency, view hierarchies, heterogeneity found in update jobs because of dissimilar arrival times as well as size of data, preempt updates etc. The staleness of data is the scheduling metric considered here.
Keywords: partitioning strategy, scalable scheduling, data stream management system.
This document discusses project scheduling techniques, specifically the critical path method (CPM). It provides definitions and explanations of key CPM concepts like critical path, float, earliest and latest event times. It also presents the algorithms for performing CPM calculations on an activity-on-branch network, including the event numbering algorithm, earliest event time algorithm and latest event time algorithm. Sample network diagrams and calculations are provided to illustrate how CPM is implemented.
This document discusses the relationship between work breakdown structures (WBS), network diagrams, and risk management in performance technology projects. It states that WBS, network diagrams, and risk management are interrelated processes used in the planning phase. A WBS breaks a project into manageable tasks, while a network diagram visually maps the sequence and dependencies of tasks. Together, a WBS and network diagram can help identify risks by estimating workloads, resources, and timelines. Managing risks through careful planning and continuous monitoring can help projects be completed on time and on budget.
A distributed system can be viewed as an environment in which, number of computers/nodes are connected and resources are shared among these computers/nodes. But unfortunately, distributed systems often face the problem of traffic, which can degrade the performance of the system. Traffic management is used to improve scalability and overall system throughput in distributed systems using Software Defined Network (SDN) based systems. Traffic management improves system performance by dividing the work traffic effectively among the participating computers/nodes. Many algorithms were proposed for traffic management and their performance is measured based on certain parameters such as response time, resource utilization, and fault tolerance. Traffic management algorithms are broadly classified into two categories- scheduling and machine learning traffic management. This work presents the study of performance analysis of traffic management algorithms. This analysis can further help in the design of new algorithms. However, when multiple servers are assigned to compile the mysterious code, different kinds of techniques are used. One common example is traffic management. The processes are managed based on power efficiency, networking bandwidth, Processor speed. The desired output will again send back to the developer. If multiple programs have to be compiled then appropriate technique such as scheduling algorithm is used. So the compilation process becomes faster and also the other process can get a chance to compile. SDN based clustering algorithm based on Simulated Annealing whose main goal is to increase network lifetime while maintaining adequate sensing coverage in scenarios where sensor nodes produce uniform or non-uniform data traffic.
EARLY PERFORMANCE PREDICTION OF WEB SERVICESijwscjournal
The document describes a methodology for early performance prediction of web services. It involves modeling web services using UML diagrams, simulating the model using a tool called SMTQA, and analyzing performance metrics. The methodology was applied to model a general web services system using use case and sequence diagrams. The model was simulated and found that internet connections and the service broker disk were bottlenecks based on high average waiting times and request dropping probabilities.
EARLY PERFORMANCE PREDICTION OF WEB SERVICESijwscjournal
The document describes a methodology for early performance prediction of web services. It involves modeling web services using UML diagrams, simulating the model using a tool called SMTQA, and analyzing performance metrics. The methodology was applied to model a general web services system using use case and sequence diagrams. The model was simulated and bottleneck resources like Internet connections and disks were identified based on high average waiting times and probability of request dropping.
Referring Expressions with Rational Speech Act Framework: A Probabilistic App...IJDKP
This paper focuses on a referring expression generation (REG) task in which the aim is to pick
out an object in a complex visual scene. One common theoretical approach to this problem is
to model the task as a two-agent cooperative scheme in which a ‘speaker’ agent would generate
the expression that best describes a targeted area and a ‘listener’ agent would identify the target.
Several recent REG systems have used deep learning approaches to represent the speaker/listener
agents. The Rational Speech Act framework (RSA), a Bayesian approach to pragmatics that can
predict human linguistic behavior quite accurately, has been shown to generate high quality and
explainable expressions on toy datasets involving simple visual scenes. Its application to large scale
problems, however, remains largely unexplored. This paper applies a combination of the probabilistic
RSA framework and deep learning approaches to larger datasets involving complex visual scenes
in a multi-step process with the aim of generating better-explained expressions. We carry out
experiments on the RefCOCO and RefCOCO+ datasets and compare our approach with other endto-end deep learning approaches as well as a variation of RSA to highlight our key contribution.
Experimental results show that while achieving lower accuracy than SOTA deep learning methods,
our approach outperforms similar RSA approach in human comprehension and has an advantage
over end-to-end deep learning under limited data scenario. Lastly, we provide a detailed analysis
on the expression generation process with concrete examples, thus providing a systematic view
on error types and deficiencies in the generation process and identifying possible areas for future
improvements.
EARLY PERFORMANCE PREDICTION OF WEB SERVICESijwscjournal
Web Service is an interface which implements business logic. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the performance of web services during early stages of software development is significant. In this paper we model web service using Unified Modeling Language, Use Case Diagram, Sequence Diagram, Deployment Diagram. We obtain the Performance metrics by simulating the web services model using a simulation tool Simulation of Multi-Tier Queuing Architecture. We have identified the bottle neck resources.
Cloud computing is the fastest emerging technology and a novel buzzword in the IT domain; it offers distinct services and applications and focuses on providing sustainable, reliable, scalable and virtualized resources to its consumers. The main aim of cloud computing is to enhance the use of distributed resources to achieve higher throughput and resource utilization in large-scale computation problems. Scheduling affects the efficiency of the cloud and plays a significant role in creating a high-performance environment. The Quality of Service (QoS) requirements of the user application define the scheduling of resources. A number of researchers have tried to solve these scheduling problems using different QoS-based scheduling techniques. In this paper, a detailed analysis of resource scheduling methodology is presented; different types of scheduling based on soft computing techniques, their comparisons, benefits and results are discussed. The major findings of this paper help researchers to decide on a suitable approach for scheduling users' applications considering their QoS requirements.
AN ADAPTIVE APPROACH FOR DYNAMIC RECOVERY DECISIONS IN WEB SERVICE COMPOSITIO... (ijwscjournal)
Service Oriented Architecture facilitates automatic execution and composition of web services in distributed environment. This service composition in the heterogeneous environment may suffer from various kinds of service failures. These failures interrupt the execution of composite web services and lead towards complete system failure. The dynamic recovery decisions of the failed services are dependent on non-functional attributes of the services. In the recent years, various methodologies have been presented to provide recovery decisions based on time related QoS (Quality of Service) factors. These QoS attributes can be categorized further. Our paper categorized these attributes as space and time. In this paper, we have proposed an affinity model to quantify the location affinity for composition of web services. Furthermore, we have also suggested a replication mechanism and algorithm for taking recovery decisions based on time and space based QoS parameters and usage pattern of the services by the user.
Immune-Inspired Method for Selecting the Optimal Solution in Semantic Web Ser... (IJwest)
The increasing interest in developing efficient and effective optimization techniques has led researchers to turn their attention towards biology. Biology offers many clues for designing novel optimization techniques; these approaches exhibit self-organizing capabilities and permit promising solutions to be reached without a central coordinator. In this paper we handle the problem of dynamic web service composition by using the clonal selection algorithm. In order to assess the optimality rate of a given composition, we use the QoS attributes of the services involved in the workflow as well as the semantic similarity between these components. The experimental evaluation shows that the proposed approach has a better performance in comparison with other approaches such as the genetic algorithm.
An Overview of Workflow Management on Mobile Agent Technology (IJERA Editor)
This document discusses mobile agent technology for workflow management. It provides an overview of current research on using mobile agents to automate business processes across distributed systems. The document summarizes several related works on topics like inter-organizational workflows, mobile agent communication, coordination techniques, and workflow partitioning and scheduling algorithms. It aims to improve methods for designing and implementing prototype models for mobile agent-based workflow management systems.
Dynamic Three Stages Task Scheduling Algorithm on Cloud Computing
Naglaa Sayed Abdelrehem, Fathi Ahmed Amer, Imane Aly Saroit,
Department of Information Technology, Faculty of Computer and Artificial Intelligence, Cairo University, Cairo, Egypt.
A CLOUD COMPUTING USING ROUGH SET THEORY FOR CLOUD SERVICE PARAMETERS THROUGH... (cscpconf)
Cloud computing is a technology that has taken over the computing world, and its development continues to grow in the recent era. It has made it very comfortable for people to perform their tasks; its fundamental definition says "You Pay As You Go". This work discusses a cloud simulator running our algorithm, which is prepared with the help of rough set theory. The algorithm is implemented in the cloud simulator, in which cloudlets, datacenters and cloud brokers are created to execute the algorithm with the help of rough sets. The ontology is a system of cloud services in which a cloud service discovery system is maintained. Finally, the work is implemented using rough sets in the cloud simulator, with NetBeans as the front end and SQL as the back end. NetBeans is loaded with the CloudSim packages, some of which are prepared according to our algorithm, and gives the expected output of the optimization using rough sets as a new concept.
WEB SERVICE COMPOSITION PROCESSES: A COMPARATIVE STUDY (ijwscjournal)
The document provides a comparative study of web service composition processes and methods. It summarizes the key phases of composition as planning, discovery, selection, and execution. For the planning phase, it describes various approaches including workflow-based, AI planning based on state space, logic, graph models, and satisfiability. AI planning methods are further broken down into state-space based planning using forward/backward search, logic-based planning using rules and constraints, graph-based planning using directed graphs, and planning as satisfiability using reasoning algorithms. The document aims to classify existing solutions for each composition phase to better understand their capabilities and limitations.
WEB SERVICE COMPOSITION PROCESSES: A COMPARATIVE STUDY (ijwscjournal)
Service composition is the process of constructing new services by combining several existing ones. It is considered one of the most complex challenges in distributed and dynamic environments. The composition process includes, in general, searching for existing services in a specific domain, selecting the appropriate service, and then coordinating the composition flow and invoking the services. Over the past years, the problem of web service composition has been studied intensively by researchers, and a significant number of solutions and new methods to tackle this problem have been presented. In this paper, our objective is to investigate algorithms and methodologies to provide a classification of existing methods in each composition phase. Moreover, we aim to conduct a comparative study to discover the main features and limitations of each phase in order to assist future research in this area.
A SERVICE ORIENTED DESIGN APPROACH FOR E-GOVERNANCE SYSTEMS (ijitcs)
The document describes a service-oriented design approach for e-governance systems. It discusses key challenges in developing e-governance systems and proposes addressing these challenges through a service-oriented paradigm. The approach defines concepts like service types (readily available, composable, collaborative), service windows, service composition, and service collaboration. Service types depend on the complexity of processing required - readily available services require minimal processing, composable services may invoke other related services, and collaborative services require coordination across service windows. The approach aims to provide reusable, interoperable services and facilitate integration of existing applications in e-governance systems.
A HYPER-HEURISTIC METHOD FOR SCHEDULING THE JOBS IN CLOUD ENVIRONMENT (ieijjournal1)
Cloud computing has turned into a promising technology and has become a key means of providing flexible, service-oriented, online provisioning and storage of computing resources and user information at lower expense, with a dynamic framework on a pay-per-use basis. In this technology the job scheduling problem is a critical issue. Scheduling plays a vital role in the well-organized management and handling of resources. This paper presents an improved hyper-heuristic scheduling approach to schedule resources, taking account of computation time and makespan, with two detection operators. The operators are used to select the low-level heuristics automatically. The Conditional Revealing Algorithm (CRA) idea is applied for finding job failures while allocating the resources. We believe the proposed hyper-heuristic achieves better results than the individual heuristics.
A HYPER-HEURISTIC METHOD FOR SCHEDULING THE JOBS IN CLOUD ENVIRONMENT (ieijjournal)
The document proposes a hyper-heuristic method for scheduling jobs in a cloud environment. It combines two low-level heuristics, Ant Colony Optimization and Particle Swarm Optimization, and uses two operators, intensification and diversity revealing, to select the heuristics. It also uses a conditional revealing operator to identify job failures while allocating resources. The hyper-heuristic aims to achieve better results than individual heuristics in terms of lower makespan.
Adaptive synchronous sliding control for a robot manipulator based on neural ... (IJECEIAES)
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed, the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long Short-Term Memory (LSTM) algorithm. We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Understanding Inductive Bias in Machine Learning (SUTEJAS)
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
ACEP Magazine 4th edition, launched on 05.06.2024 (Rahul)
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Dynamic Web Service Composition based on Network Modeling with Statistical Analysis and Backtracking
International Journal on Web Service Computing (IJWSC), Vol.3, No.2, June 2012
DOI : 10.5121/ijwsc.2012.3202
Dynamic Web Service Composition based on
Network Modeling with Statistical Analysis and
Backtracking
M.SureshKumar1 and P.Varalakshmi2
1 Research Scholar, Anna University, Chennai,
suresh.priya.kumar@gmail.com
2 Department of Information Technology, MIT Campus, Anna University, Chennai,
varanip@gmail.com
ABSTRACT
A Web service is a software system designed to support interoperable machine-to-machine interaction
over a network. Web services provide a standard means of interoperating between different software
applications, running on a variety of platforms and/or frameworks. One of the main advantages of
the usage of web services is its ability to integrate with the other services through web service
composition and realize the required functionality. This paper presents a new paradigm of dynamic
web services composition using network analysis paired with backtracking. An algorithm called
“Zeittafel” for the selection and scheduling of services that are to be composed is also presented.
With the proposed system better percentage of job success rate is obtained compared to the existing
methodology.
KEYWORDS
Dynamic composition, PERT, backtracking, tour planner
1. INTRODUCTION
Web services are considered as self-contained, self-describing, modular applications that
can be published, located, and invoked across the Web. Nowadays, an increasing number of
companies and organizations implement only their core business and outsource other
application services over the Internet. Thus, the ability to efficiently and effectively select and
integrate inter-organizational and heterogeneous services on the Web at runtime is an important
step towards the development of the Web service applications. Web services can be engaged
as in Figure 1. In particular, if no single Web service can satisfy the functionality required
by the user, there should be a possibility to combine existing services together in order to
fulfill the request. This trend has triggered a considerable number of research efforts on the
composition of Web services both in academia and in industry. In the research related to Web
services, several initiatives have been conducted with the intention to provide platforms and
languages that will allow easy integration of heterogeneous systems. There are two
methods for web services composition [10, 11, 12]: one is static web service composition
and the other is automated/dynamic web service composition. In static web service composition,
composition is performed manually, that is each web service is executed one by one in order to
achieve the desired goal/requirement. It is a time consuming task which requires a lot of
effort. In automated web service composition, agents are used to select a web service that may
be composed of multiple web services but from user’s viewpoint, it is considered as a single
service [13].
Figure 1: General Process of Engaging a Web Service
The integration of heterogeneous web services together to realize business functionality is
called web service composition. In other words web service composition can be termed as an
aggregation of individual web services to automate a particular task or a business process
[2]. Through this paper we present tour planner software, “Web Safari”, which uses web
services composition by the proposed “Zeittafel” algorithm. In this software the various
activities involved in the tour such as flight booking, hotel booking, scenic spot searching,
dinner booking, etc are modeled as the individual nodes or activities of the Program
Evaluation and Review Technique (PERT) network. There may be several individual service
providers available for each activity.
Each of these is modeled as a web service. For example, various hotels may be present in the
same locality offering the same service. The best service is selected based on the availability of
the service at that particular instant of time and various other constraints. This service selection
is performed using the PERT method [3]. If no service is available in a particular category, the
processing may enter a busy wait. In order to avoid this condition, backtracking is employed [4].
This enables us to obtain an optimal solution; finally a set of services is obtained, which is the
tour plan for the user. Reservations for these services can then be made.
2. RELATED WORK
The QoS factors of performance, reliability, availability and cost were dealt with in the web
service composition for e-business systems by Sayed Gholam Hassan Tabatabaei [8], which uses
the Web Service Modelling Ontology (WSMO). Web service composition using Web Service
Databases (WSDB) is proposed by Farhan Hassan Khan, M. Younus Javed, and Saba Bashir [9].
It presents a framework in which multiple repositories and WSDBs are introduced in order to
make the system more reliable and ensure data availability. Incheon Paik and Daisuke
Maruyama [7] propose a framework for automated web service composition through AI
(Artificial Intelligence) planning techniques, combining logical composition (HTN) and physical
composition (CSP, Constraint Satisfaction Problem). Their paper discusses real-life problems on
the web related to planning and scheduling, and provides a task ordering to reach the desired
goal. P. Sandhya and M. Lakshmi [3] deal with an algorithm for automatic quality-driven web
service composition, called the Opus deviser algorithm, for business and task planners using
PERT; here infinite looping and exhaustive waiting problems arise. Gexin Li, Aixin Zu,
Chengwen Wu and Zhengzhong Wang [4] use backtracking to arrive at optimal solutions for web
service composition. This paper combines both PERT and backtracking, where PERT is used
for service evaluation and backtracking for service selection into the composition, to obtain an
optimal tour plan.
3. PERT AND BACKTRACKING
Network analysis is the general name given to certain specific techniques which can be used for
the planning, management and control of projects. There are two widely applied network
analysis models namely Critical Path Method (CPM) and Program Evaluation and Review
Technique (PERT).
CPM addressed the trade-off between the cost of the project and its overall completion time (e.g.
for certain activities it may be possible to decrease their completion times by spending more
money; how does this affect the overall completion time of the project?). PERT was used for
the planning and control of the Polaris missile program, where the emphasis was on completing
the program in the shortest possible time. In addition, PERT had the ability to cope with
uncertain activity completion times (e.g. for a particular activity the most likely completion time
may be 4 weeks, but it could be anywhere between 3 weeks and 8 weeks).
Due to the complex nature of most projects, it is very difficult to completely eliminate delays
and cost overruns. However, with appropriate management systems for planning, organizing,
and controlling, it is possible to reduce them to a reasonable level. The problem is that the cost
of implementing and executing such systems can exceed their benefits because of the large
amount of monitoring and reporting that is required. The major purpose of PERT and CPM is
to objectively identify the critical activities. Further, these techniques can tell us how close the
remaining activities are to becoming critical (this available delay is called slack or float).
PERT and CPM are very similar in their approach; however, two distinctions are usually made.
The first relates to the way in which activity durations are estimated. In PERT, three estimates are
used to form a weighted average of the expected completion time, based on a probability
distribution of completion times. Therefore, PERT is considered a probabilistic tool. In CPM,
there is only one estimate of duration; that is, CPM is a deterministic tool. The second
difference is that CPM allows an explicit estimate of costs in addition to time. Thus, while
PERT is basically a tool for planning and control of time, CPM can be used to control
both the time and the cost of the project. Extensions of both PERT and CPM allow the user
to manage other resources in addition to time and money, to trade off resources, to analyze
different types of schedules, and to balance the use of resources.
Program Evaluation and Review Technique (PERT) [5] is a statistical tool used in
project management that is designed to analyze and represent the tasks involved in a given
project. One key element of PERT's application is that, because of the element of uncertainty,
three time estimates are required for each activity, from which a fourth, the expected time, is
derived to provide time frames for the PERT network. It is probabilistic in nature and involves
the calculation of
1. Optimistic time (O): The minimum possible time required to accomplish a task, assuming
everything proceeds better than is normally expected.
2. Pessimistic time (P): The maximum possible time required to accomplish a task, assuming
everything goes wrong (but excluding major catastrophes).
3. Most likely time (M): The best estimate of the time required to accomplish a task, assuming
everything proceeds as normal.
4. Expected time (TE): The best estimate of the time required to accomplish a task, accounting for
the fact that things don't always proceed as normal (the implication being that the expected time
is the average time the task would require if the task were repeated on a number of occasions
over an extended period of time). It can be calculated using the formula specified in equation
(1).
TE = (O + 4M + P) / 6        (1)
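To make equation (1) concrete, here is a minimal Java sketch (the class name and the example numbers are illustrative, not from the paper) that computes the expected time of one activity, together with the activity variance ((P - O)/6)^2 that the proposed algorithm uses later in step 5:

/** Minimal PERT helper for one activity (illustrative sketch). */
public class PertActivity {
    final double optimistic;   // O: best-case duration
    final double mostLikely;   // M: normal-case duration
    final double pessimistic;  // P: worst-case duration

    PertActivity(double o, double m, double p) {
        optimistic = o; mostLikely = m; pessimistic = p;
    }

    /** Expected time TE = (O + 4M + P) / 6, equation (1). */
    double expectedTime() {
        return (optimistic + 4 * mostLikely + pessimistic) / 6.0;
    }

    /** Activity variance ((P - O) / 6)^2, used for the critical path later. */
    double variance() {
        double d = (pessimistic - optimistic) / 6.0;
        return d * d;
    }

    public static void main(String[] args) {
        PertActivity flight = new PertActivity(150, 180, 210);  // minutes, hypothetical
        System.out.println(flight.expectedTime()); // 180.0
        System.out.println(flight.variance());     // 100.0
    }
}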
Two other elements comprise the PERT network: the path, or critical path, and slack time.
The critical path is a combination of events and activities that will necessitate the greatest
expected completion time. Slack time is defined as the difference between the total expected
activity time for the project and the actual time for the entire project. Slack time is the spare time
experienced in the PERT network.
Backtracking [6] is a general algorithm, or a tool, for arriving at a solution to constraint
satisfaction problems. A classic example of this is the N-queens problem, sketched below.
Backtracking is also used to optimize the solution from a given set of partial candidate solutions.
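As an illustration of the technique, the following compact Java sketch solves the N-queens problem mentioned above; it shows the characteristic backtracking pattern of making a tentative choice, recursing, and trying the next alternative when the recursion fails:

/** Minimal backtracking sketch: the classic N-queens problem. */
public class NQueens {
    /** cols[r] holds the column of the queen placed in row r. */
    static boolean solve(int[] cols, int row) {
        int n = cols.length;
        if (row == n) return true;              // all queens placed: solution found
        for (int c = 0; c < n; c++) {
            if (safe(cols, row, c)) {
                cols[row] = c;                  // tentative choice
                if (solve(cols, row + 1)) return true;
                // recursion failed: fall through and try the next column (backtrack)
            }
        }
        return false;                           // no column works: caller backtracks
    }

    /** A position is safe if no earlier queen shares its column or diagonal. */
    static boolean safe(int[] cols, int row, int c) {
        for (int r = 0; r < row; r++)
            if (cols[r] == c || Math.abs(cols[r] - c) == row - r) return false;
        return true;
    }

    public static void main(String[] args) {
        int[] cols = new int[8];
        System.out.println(solve(cols, 0));     // true: an 8-queens placement exists
        System.out.println(java.util.Arrays.toString(cols));
    }
}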
4. PROPOSED WORK
The proposed work can be split into 2 steps.
1. Selection of services
2. Composition of services
Step 1: Selection of services
As stated before several services are available in each category. Only one of them must be
selected from each of the categories. The primary constraint that we seek is the availability of
the service. In the case of tour planning, say once a flight is completely booked the service can
be blocked in the registry. Thus if the reservation is not available then the service is also not
available. Thus first and foremost the availability factor of the Quality of Service constraints is
checked. Once the service is available it is counted as an option for the future processing.
Secondly, with respect to the tour planner, the constraints of the user such as cost, number
of people, etc. are applied and some services are filtered out.
All the available services are constructed as a set. Some of the services may need to
follow a particular sequence while some may not. For the services that follow a predefined order,
the normal Opus deviser algorithm may be followed. But for the activities where the sequence
is not important, for example sightseeing, the modified Opus deviser algorithm is followed.
This makes use of the PERT method. In some cases no service in a category may be available.
In such situations backtracking is employed in order to change the order of activities or to
inform the user about the failure. This rescues the system from entering the busy wait condition.
Finally a single solution or a set of solutions is obtained. If multiple solutions are available the
choice is left to the user. (A sketch of this selection step follows.)
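A minimal Java sketch of this selection step, under an assumed data model (the Service record and its fields are hypothetical, not from the paper): it filters each category down to available services that satisfy a cost constraint, and rotates the non-fixed categories circularly when a category turns out to be empty, mirroring the circular reordering used by the algorithm below:

import java.util.*;

public class ServiceSelection {
    /** Hypothetical service model: availability plus a user-facing cost (Java 16+ record). */
    record Service(String name, boolean available, double cost) {}

    /** Keep only available services meeting the user's constraints (matrix W).
        Returns null if some category has no usable service, so the caller
        must backtrack (reorder NC) or negotiate with the user. */
    static List<List<Service>> buildMatrixW(List<List<Service>> categories,
                                            double maxCost) {
        List<List<Service>> w = new ArrayList<>();
        for (List<Service> category : categories) {
            List<Service> kept = new ArrayList<>();
            for (Service s : category)
                if (s.available() && s.cost() <= maxCost) kept.add(s);
            if (kept.isEmpty()) return null;    // triggers backtracking
            w.add(kept);
        }
        return w;
    }

    /** Circular rotation of the non-fixed category order,
        e.g. [C4, C5, C6] -> [C5, C6, C4]. */
    static List<String> rotate(List<String> nc) {
        List<String> r = new ArrayList<>(nc.subList(1, nc.size()));
        r.add(nc.get(0));
        return r;
    }
}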
Step 2: Composition of services
The selected set of services is composed together and orchestrated using the BPEL engine. The
bookings for the required spots are made and the tour plan is finalized.
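As a purely hypothetical illustration of invoking one selected service via standard JAX-WS (the WSDL URL, the QName, and the HotelBooking interface below are invented for the sketch; in the actual system the whole composition is handed to the BPEL engine rather than invoked by hand):

import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;
import java.net.URL;

public class InvokeSelectedService {
    /** Hypothetical service endpoint interface matching the provider's WSDL. */
    @WebService
    public interface HotelBooking {
        String book(String userId);
    }

    public static void main(String[] args) throws Exception {
        // All identifiers below are illustrative placeholders.
        URL wsdl = new URL("http://example.com/hotelBooking?wsdl");
        QName serviceName = new QName("http://example.com/", "HotelBookingService");
        Service service = Service.create(wsdl, serviceName);
        HotelBooking port = service.getPort(HotelBooking.class);
        System.out.println(port.book("user-123"));  // invoke the selected service
    }
}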
Figure 2: Architecture of proposed system
5. ALGORITHM
Our algorithm named “Zeittafel” is a PERT model based algorithm using backtracking to
handle exceptions.
Input:
Set of web services S = {sij | 1 <= i <= m, 1 <= j <= n},
where m = number of web services in each category and
n = number of web service categories.
Goal deadline g.
Fixed sequence of categories (where applicable) FC = [fc1, fc2, ..., fcp], where p <= n.
Non-fixed sequence of categories (generated randomly) NC = [nc1, nc2, ..., ncq], where
q <= n and p + q = n.
Output: Composed web services
Algorithm:
1. Begin
2. Build a matrix of web services W such that there is at least one service that is available
from each category. Therefore W ⊆ S.
3. Generate all partial candidate combinations from the matrix W such that there is at
least one service from each category following the given sequence of categories FC
wherever applicable along with the randomly generated NC. The solution must be
performed within the goal deadline.
4. If successful combination(s) are generated, go to step 5. Else:
a. Backtrack to change the order of the non-sequential categories NC: replace NC
with a changed NC by shifting the categories in a circular fashion until the circle is
completed. Go to step 2.
b. If no solution can be found, negotiate with the client for the withdrawal of certain
non-available sub-categories of the service. Go to step 2.
5. For each combination the PERT network analysis model is applied.
a. Let the optimistic time be O, the most likely time M and the pessimistic time P for
each activity of the PERT model.
b. Compute the expected time T for each activity as T = (O + 4M + P) / 6.
c. Create a directed graphical representation of each combination, with each event as a
node and each activity as an arrow. Compute the float time for each activity and
the critical path.
d. Calculate the variance for each activity in all combinations as vc = ((P - O) / 6)^2.
Thus the set of variances becomes V = {vc1, vc2, ..., vcn}.
e. Compute the variance of the critical path of each combination as CV = {cv1, cv2, ...,
cvn}, where cvc is the sum of the variances of the activities on the critical path of
the c-th combination:

   cvc = Σ (i = 1 to n) vci        (2)

f. Compute the standard deviation for each combination as

   SDc = √cvc        (3)

g. In order to achieve the task within the goal deadline g, assign X = g, where X is the
time under consideration.
h. Apply the normal distribution of probability with normal variate Z = {z1, z2, ..., zn},
where zi = (X - xi) / SDi and xi is the critical path time of the i-th combination.
i. Find the probability of completion of the project with the given sequence from the
normal table and construct the probability set A = {a1, a2, ..., an}.
6. Select the combination for which the probability of completion is highest. If the
probability is the same for several combinations, the one with the lower critical path
time is selected. User intervention may also be preferred. This optimizes the solution.
7. The services that are within the selected combination are composed.
8. Stop
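A minimal Java sketch of steps 5f-5i and step 6, under assumed inputs (the combination data are illustrative; since Java's standard library has no normal CDF, a standard Abramowitz-Stegun approximation stands in for the "normal table"):

public class ZeittafelScoring {
    /** Standard normal CDF via the Abramowitz-Stegun approximation. */
    static double phi(double z) {
        double t = 1.0 / (1.0 + 0.2316419 * Math.abs(z));
        double d = Math.exp(-z * z / 2) / Math.sqrt(2 * Math.PI);
        double p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
                + t * (-1.821255978 + t * 1.330274429))));
        return z >= 0 ? 1 - p : p;
    }

    /** Probability that a combination with critical path time x and critical
        path variance cv completes within deadline g (steps 5f-5i, X = g). */
    static double completionProbability(double x, double cv, double g) {
        double sd = Math.sqrt(cv);          // equation (3)
        double z = (g - x) / sd;            // normal variate
        return phi(z);
    }

    public static void main(String[] args) {
        // Two hypothetical combinations: {critical path time, critical path variance}.
        double[][] combos = { {410, 144}, {430, 64} };
        double deadline = 450;              // minutes, as in the experiment below
        int best = -1; double bestP = -1;
        for (int i = 0; i < combos.length; i++) {
            double p = completionProbability(combos[i][0], combos[i][1], deadline);
            System.out.printf("combination %d: P(complete) = %.3f%n", i, p);
            if (p > bestP) { bestP = p; best = i; }   // step 6: highest probability wins
        }
        System.out.println("selected combination: " + best);
    }
}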
6. IMPLEMENTATION AND RESULTS
This project has been implemented in the J2EE framework. J2EE web services were developed with
the Axis2 engine. Their respective WSDL files were generated and the services were added to the
registry. Another web service that implements the Zeittafel algorithm was developed.
Figure 3: Examples of calculating task completion time using PERT. The data in red shows the total expected
time taken to complete the activity
This involves the PERT and backtracking and it selects the web services for composition. The
example PERT representation is depicted in Figure 3. These are composed using BPEL engine and
are orchestrated together.
Figure 4: BPEL implementation
Business Process Execution Language (BPEL), short for Web Services Business Process Execution
Language (WS-BPEL) is a standard executable language for specifying actions within business
processes with web services. Processes in BPEL export and import information by using web
service interfaces exclusively. WS-BPEL 2.0 is an XML-based language for defining business
processes that orchestrate web services, and this is used in the project. The implementation uses
the Riftsaw open-source JBoss BPEL engine.
The UI gets the necessary input from the user, such as the deadline, and invokes the algorithm.
We deliberately introduce situations in the system that lead to the invocation of backtracking.
The deadline g is given as 450 minutes.
With respect to the “Web Safari” project the implementation is described below. There
are 6 categories of web service providers: flight, taxi, hotel, tourist spot 1, tourist spot 2
and tourist spot 3, named C1 through C6 respectively. Each category has 3 web services.
The first 3 categories are fixed and the rest are non-fixed,
i.e. FC = {C1, C2, C3} and NC = {C4, C5, C6}.
The constraint for backtracking is introduced such that C4 is not available at the required point
of time, but if the sequence is NC = {C5, C4, C6} the output is obtained; the remaining
combinations do not meet the constraints. The following table (Table 1) gives the time
taken, in minutes, to complete each task in each category by the web services.
Table 1: Time taken to complete each task (in minutes)

Category   WS1   WS2   WS3
C1         180   210   150
C2          20    30    25
C3          10    12    15
C4          90   100    85
C5          30    30    25
C6         120   135   125
For example, to reach the given place through the flight represented by WS1 in C1 (say Air India),
180 minutes is required. With these given values various combinations are generated and the
program is executed; for instance, choosing the fastest provider in every category gives
150 + 20 + 10 + 85 + 25 + 120 = 410 minutes, within the 450-minute deadline. This also involves
the random non-sequential category generation once it is found that C4 is not available. To
demonstrate the working, a few examples are shown in Figure 3. The probability of task
completion is also calculated and a normal distribution curve for the combination is drawn. A
bell-shaped normal distribution curve, as shown in Figure 5, is obtained. The region in dark line
denotes the probability of completion.
Figure 5: Normal distribution curve
Thus the combination with the least completion time is selected; its services are composed
together and the tour plan is presented to the user.

Figure 6: Efficiency of our algorithm (execution time in ms against the number of services in
each category, from 25 to 150, with and without backtracking)
The same data is then input to the program without backtracking; in that case the system with
backtracking proves more efficient. Figure 6 represents the difference between the systems with
and without backtracking. The constructed system has constraints under which the backtracking
logic has to be invoked. Though the execution time increases due to the invocation of the
backtracking logic, the overall process completion time is less than that of the system without
backtracking, because the latter enters an exhaustive waiting state.
7. CONCLUSION
The uncertain nature of the completion of activities is taken into account. A highly optimized
dynamic web service composition with provisions for reliability in the form of backtracking is
introduced. The QoS attribute of availability is dealt with here through the blocking of services
that are not available at the given point of time, along with their filtration in the matrix.
REFERENCES
[1] Web Services Architecture, http://www.w3.org/TR/ws-arch/
[2] Thomas Erl, SOA Principles, http://www.soaprinciples.com/
[3] P. Sandhya and M. Lakshmi, "A Novel Approach for Realizing Business Agility through
Temporally Planned Automatic Web Service Composition using Network Analysis",
International Conference on Semantic Web & Web Services (SWWS), 2011.
[4] Gexin Li, Aixin Zu, Chengwen Wu and Zhengzhong Wang, "Web service composition based on
backtracking", Second International Conference on Artificial Intelligence, Management Science
and Electronic Commerce (AIMSEC), 2011.
[5] Program Evaluation and Review Technique (PERT),
http://en.wikipedia.org/wiki/Program_Evaluation_and_Review_Technique
[6] Backtracking, http://en.wikipedia.org/wiki/Backtracking
[7] Incheon Paik and Daisuke Maruyama, "Automatic Web Services Composition Using Combining
HTN and CSP", Seventh International Conference on Computer and Information Technology,
IEEE, 2007.
[8] Sayed Gholam Hassan Tabatabaei, "Web Service Composition Approaches to Support Dynamic
E-Business Systems", Communications of the IBIMA, 2008.
[9] Farhan Hassan Khan, M. Younus Javed and Saba Bashir, "QoS Based Dynamic Web Services
Composition & Execution", International Journal of Computer Science and Information Security,
Vol. 7, No. 2, February 2010.
[10] Yilan Gu and Mikhail Soutchanski, "A Logic For Decidable Reasoning About Services",
Proceedings of the AAAI-06 Workshop on AI-Driven Technologies for Services-Oriented
Computing, 2006.
[11] A. Marconi, M. Pistore and P. Traverso, "Implicit vs. Explicit Data-Flow Requirements in Web
Service Composition Goals", Proceedings of the 4th International Conference on Service-Oriented
Computing (ICSOC06), Lecture Notes in Computer Science (LNCS) Vol. 4294, Springer,
Chicago, Illinois, USA, 2006.
[12] Michael Hu and Howard Foster, "Using a Rigorous Approach for Engineering Web Services
Compositions: A Case Study", IEEE International Conference on Services Computing, Vol. 2,
July 2005.
[13] Daniela Barreiro Claro, Patrick Albers and Jin-Kao Hao, "Selecting Web Services for Optimal
Composition", 2nd International Workshop on Semantic and Dynamic Web Processes (SDWP),
2005.
M. SureshKumar received the B.E. degree in Computer Science & Engineering from
Madras University, Tamil Nadu, India, and the M.E. degree in Computer Science &
Engineering from Sathyabama University, India. He is working as an Assistant
Professor in the Information Technology Department at Sri Sai Ram Engineering
College, Chennai.
Dr. P. Varalakshmi obtained her Ph.D. from the Faculty of Information and Communication
Engineering at Anna University. She is working as an Assistant Professor in the
Department of Information Technology at Madras Institute of Technology, Anna
University. She has published many research papers in international and national
journals and conference proceedings, about 25 papers so far, with a very high
citation index.