The main purpose of grid networks is resource sharing in a dynamic and heterogeneous environment, and these resources are accessible through a variety of methods. Sharing has mainly computational and scientific applications. To reach the goals of the grid and use the resources available in the grid environment, subtasks are distributed among resources and scheduled with quality of service in mind: the aim is to distribute subtasks among resources in a way that maximizes QoS. This study presents a method that takes three parameters into account: the transfer time between the resource management system (RMS) and the resource, the time the resource needs to process the subtask, and the load of tasks already queued at the resource. A multi-criteria decision is then made using the TOPSIS method, and the resulting priority of the resources determines their assignment to subtasks. Optimal assignment of resources to subtasks improves and optimizes response time, the efficiency parameter of interest.
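The ranking step described above can be sketched with a plain TOPSIS implementation. This is a minimal illustration, not the paper's code: the three criteria (transfer time, processing time, queue load) are all treated as cost criteria, and the matrix values and weights are invented for the example.

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: rows = resources, columns = criteria
    # benefit[j] is True if larger is better for criterion j, False for costs
    ncols = len(weights)
    # vector-normalize each column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # ideal (best) and nadir (worst) value per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nadir = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - n) ** 2 for x, n in zip(row, nadir)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness to the ideal
    return scores

# three resources scored on (transfer time, processing time, queue load),
# all cost criteria: the second resource dominates and ranks first
scores = topsis([[2, 5, 3], [1, 2, 1], [4, 8, 6]],
                [0.4, 0.4, 0.2], [False, False, False])
```

The resource with the highest closeness score would be assigned the subtask first.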
A Survey of Job Scheduling Algorithms With Hierarchical Structure to Load Ba... (Editor IJCATR)
Due to the advances in human civilization, problems in science and engineering are becoming more complicated than ever before. Grid computing has become a popular tool for solving these complicated problems. A grid environment collects, integrates, and uses heterogeneous or homogeneous resources scattered around the globe through a high-speed network. Scheduling problems are at the heart of any grid-like computational system: a good scheduling algorithm can assign jobs to resources efficiently and can balance the system load. In this paper, we survey three algorithms for grid scheduling and compare their benefits and disadvantages based on makespan.
This document discusses load balancing strategies for grid computing. It proposes a dynamic tree-based model to represent grid architecture in a hierarchical way that supports heterogeneity and scalability. It then develops a hierarchical load balancing strategy and algorithms based on neighborhood properties to decrease communication overhead. Conventional scheduling algorithms like Min-Min, Max-Min, and Sufferage are discussed but are found to ignore dynamic network status, which is important for load balancing. Genetic algorithms are also mentioned as a potential solution.
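As a reference point for the conventional algorithms mentioned, here is a minimal Min-Min sketch. It assumes a simple model (task lengths divided by per-resource speeds, with no network status), which is exactly the limitation the survey notes; the task lengths and speeds are illustrative.

```python
def min_min(tasks, speeds):
    # tasks: list of task lengths; speeds: per-resource processing speeds
    ready = [0.0] * len(speeds)  # time at which each resource becomes free
    assignment = {}
    remaining = list(range(len(tasks)))
    while remaining:
        # Min-Min: among all (task, resource) pairs, pick the pair with the
        # globally minimum completion time, i.e. the task whose best
        # completion time is smallest, assigned to that resource
        ct, t, r = min((ready[r] + tasks[t] / speeds[r], t, r)
                       for t in remaining for r in range(len(speeds)))
        assignment[t] = r
        ready[r] = ct
        remaining.remove(t)
    return assignment, max(ready)  # mapping and resulting makespan

assignment, makespan = min_min([4, 2, 8], [1, 2])
```

Max-Min differs only in picking the task with the *largest* minimum completion time first.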
Scheduling Algorithm Based Simulator for Resource Allocation Task in Cloud Co... (IRJET Journal)
This document proposes a scheduling algorithm for allocating resources in cloud computing based on the Project Evaluation and Review Technique (PERT). It aims to address issues like starvation of lower priority tasks. The algorithm models task allocation as a directed acyclic graph and uses PERT to schedule critical and non-critical tasks, prioritizing higher priority tasks. The algorithm is evaluated against other scheduling methods and shows improvements in reducing completion time and optimizing resource allocation for all tasks.
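The critical-path core of a PERT-based scheduler can be sketched as follows. This is a simplified illustration with single-point duration estimates (full PERT uses optimistic/most-likely/pessimistic estimates) and hypothetical task names: a forward pass over the DAG computes earliest finish times, and a backward pass marks the zero-slack (critical) tasks that must be prioritized.

```python
def pert(durations, deps):
    # durations: {task: time}; deps: {task: [prerequisite tasks]}
    order, seen = [], set()
    def visit(t):  # topological order via DFS
        if t not in seen:
            seen.add(t)
            for p in deps.get(t, []):
                visit(p)
            order.append(t)
    for t in durations:
        visit(t)
    # forward pass: earliest finish time of each task
    ef = {}
    for t in order:
        es = max((ef[p] for p in deps.get(t, [])), default=0)
        ef[t] = es + durations[t]
    length = max(ef.values())
    # backward pass: latest start; zero slack means the task is critical
    succs = {t: [] for t in durations}
    for t, ps in deps.items():
        for p in ps:
            succs[p].append(t)
    ls = {}
    for t in reversed(order):
        lf = min((ls[s] for s in succs[t]), default=length)
        ls[t] = lf - durations[t]
    critical = [t for t in order if ls[t] == ef[t] - durations[t]]
    return length, critical

# C depends on A and B; the A -> C chain is critical
length, critical = pert({"A": 3, "B": 2, "C": 4}, {"C": ["A", "B"]})
```

Non-critical tasks (positive slack) can be deferred without delaying completion, which is how such a scheduler avoids starving lower-priority work.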
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind, peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNCJournal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment calls for sophisticated algorithms to solve the task-allotment problem, and the overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms. The dynamic nature of cloud systems makes it challenging to find an optimal solution that satisfies all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
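One way the two policies might be combined is to order the ready queue by burst time (Shortest Job First) and then serve it with a fixed quantum (Round Robin). The sketch below illustrates that idea only; the abstract does not give the exact hybridization, and the burst times and quantum are invented.

```python
from collections import deque

def sjf_round_robin(bursts, quantum):
    # bursts: {task: burst time}. Sort by burst (SJF), then serve each task
    # for at most one quantum per turn (RR), re-queueing unfinished tasks.
    queue = deque(sorted(bursts.items(), key=lambda kv: kv[1]))
    clock, completion = 0, {}
    while queue:
        task, left = queue.popleft()
        run = min(quantum, left)
        clock += run
        left -= run
        if left:
            queue.append((task, left))   # preempted, back of the queue
        else:
            completion[task] = clock     # finished at the current clock
    return completion

done = sjf_round_robin({"a": 2, "b": 4, "c": 6}, quantum=3)
```

Short jobs finish early (low average waiting time) while the quantum keeps long jobs from monopolizing the CPU.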
Time Efficient VM Allocation using KD-Tree Approach in Cloud Server Environment (rahulmonikasharma)
This document summarizes a research paper that proposes a new KD-Tree-based algorithm for efficient virtual machine (VM) allocation in cloud computing environments. The algorithm aims to minimize the response time for allocating VMs to user requests. It adopts a KD-Tree data structure to index physical host machines, allowing the scheduler to find, in O(log n) time, the host that can accommodate a new VM request with minimum latency. The proposed approach is evaluated through simulations using the CloudSim toolkit and is shown to outperform an existing linear scheduling strategy (LSTR) algorithm in reducing VM allocation times.
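The indexing idea can be illustrated with a hand-rolled 2-d KD-tree over host capacities, e.g. (CPU, memory). This is a generic nearest-neighbour sketch, not the paper's algorithm: it finds the host whose capacity vector is closest to the request and omits the feasibility checks a real allocator would need.

```python
import math

def build(points, depth=0):
    # recursively build a 2-d KD-tree over (cpu, mem) host capacities
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    # standard KD-tree nearest-neighbour search with branch pruning
    if node is None:
        return best
    d = math.dist(node["point"], target)
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, target, best)
    if abs(diff) < best[0]:  # the far side may still hold a closer point
        best = nearest(far, target, best)
    return best

# index four hosts by (cpu cores, memory GB), then match a (5, 9) request
tree = build([(8, 16), (4, 8), (16, 32), (2, 4)])
dist, host = nearest(tree, (5, 9))
```

Each query touches O(log n) nodes on average instead of scanning every host linearly.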
Challenges in Dynamic Resource Allocation and Task Scheduling in Heterogeneou... (rahulmonikasharma)
This document discusses the challenges of dynamic resource allocation and task scheduling in heterogeneous cloud environments. It outlines that resource allocation involves deciding how to allocate resources to tasks to maximize utilization, while task scheduling assigns tasks to processors to minimize execution time. The major challenges are optimizing allocated resources to minimize costs while meeting customer demands and application requirements. Allocating resources dynamically in heterogeneous cloud environments is difficult due to issues like resource contention, scarcity, and fragmentation. The document also discusses approaches to resource modeling, allocation, offering, discovery and monitoring that algorithms must address to effectively allocate resources on demand.
The Impact of Data Replication on Job Scheduling Performance in Hierarchical ... (graphhoc)
In data-intensive applications, data transfer is a primary cause of job-execution delay, and data access time depends on bandwidth. The major bottleneck to fast data access in grids is the high latency of wide-area networks and the Internet. Effective scheduling can reduce the amount of data transferred across the Internet by dispatching a job to where the needed data are present. Another solution is a data replication mechanism: the objective of dynamic replication strategies is to reduce file access time, which in turn reduces job runtime. In this paper we develop a job scheduling policy and a dynamic data replication strategy, called HRS (Hierarchical Replication Strategy), to improve data access efficiency. We study our approach and evaluate it through simulation; the results show that our algorithm improves on current strategies by 12%.
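The data-aware dispatching idea (send the job to where the data already are, unless transferring is cheaper) can be sketched as follows. The cost model, site attributes, and numbers are assumptions for illustration, not HRS itself.

```python
def dispatch(job_size_mb, compute_s, sites):
    # sites: {name: {"has_data": bool, "bandwidth_mbps": float, "load": float}}
    # cost = data-transfer time (zero if the site already holds the data)
    #        + current queue load + compute time
    def cost(s):
        transfer = 0.0 if s["has_data"] else job_size_mb * 8 / s["bandwidth_mbps"]
        return transfer + s["load"] + compute_s
    return min(sites, key=lambda name: cost(sites[name]))

sites = {"A": {"has_data": True, "bandwidth_mbps": 50, "load": 5.0},
         "B": {"has_data": False, "bandwidth_mbps": 100, "load": 0.0}}
chosen = dispatch(500, 10.0, sites)  # 500 MB input, 10 s of compute
```

Here site A wins despite its queue, because pulling 500 MB to site B would dominate the runtime; replication changes the outcome by flipping `has_data` at more sites.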
THRESHOLD BASED VM PLACEMENT TECHNIQUE FOR LOAD BALANCED RESOURCE PROVISIONIN... (IJCNCJournal)
Load imbalance is a multi-variable, multi-constraint problem that degrades the performance and efficiency of computing resources. Load-balancing techniques address the two undesirable extremes, overloading and underloading. Cloud computing relies on scheduling and load balancing in a virtualized environment, and on resource sharing in the cloud infrastructure; both must be handled well to achieve ideal resource sharing. Hence, efficient resource reservation is required to guarantee load optimization in the cloud. This work presents an integrated resource-reservation and load-balancing algorithm for effective cloud provisioning. The strategy develops a Priority-based Resource Scheduling Model that combines resource reservation with threshold-based load balancing to improve the efficiency of the cloud framework. Virtual machine utilization is then increased through appropriate workload adjustment, by dynamically picking a job from the submitted jobs using the Priority-based Resource Scheduling Model. Experimental evaluations show that the proposed scheme reduces execution time, with minimum resource cost and improved resource utilization, under dynamic resource-provisioning conditions.
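The threshold mechanism can be illustrated with a small sketch: hosts above a high-water mark shed work units to hosts below a low-water mark. The thresholds and unit-sized migrations are simplifying assumptions, not the paper's model.

```python
def rebalance(loads, low, high):
    # loads: {host: number of work units}. Migrate one unit at a time from
    # over-loaded hosts (> high) to under-loaded hosts (< low).
    loads = dict(loads)
    moves = []
    over = [h for h, l in loads.items() if l > high]
    under = [h for h, l in loads.items() if l < low]
    for src in over:
        for dst in under:
            while loads[src] > high and loads[dst] < low:
                loads[src] -= 1
                loads[dst] += 1
                moves.append((src, dst))  # record each migration
    return loads, moves

loads, moves = rebalance({"h1": 9, "h2": 1, "h3": 5}, low=3, high=7)
```

Hosts already inside the [low, high] band (h3 here) are left alone, which keeps migration traffic low.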
Effective and Efficient Job Scheduling in Grid Computing (Aditya Kokadwar)
The integration of remote and diverse resources, the increasing computational needs of Grand Challenge problems, and the rapid growth of the Internet and communication technologies have led to the development of global computational grids. Grid computing is a prevailing technology that unites underutilized resources in order to support sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential if the promised capacity of grids is to be achieved. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate resources at minimum cost, merging this with a conventional job-grouping strategy to provide better and more efficient job scheduling that benefits both the user and the resource broker. The proposed approach employs a dynamic cost-based job scheduling algorithm to map jobs efficiently to the available resources in the grid. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
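The job-grouping idea (coarsening many fine-grained jobs into groups sized to a resource's capacity, so communication overhead is amortized) can be sketched like this. The granularity rule, expressed in MI (million instructions) against a MIPS rating, is one common formulation and is an assumption here, not necessarily the paper's exact rule.

```python
def group_jobs(job_lengths_mi, resource_mips, granularity_s):
    # pack consecutive jobs into groups worth about granularity_s seconds
    # of work on a resource rated at resource_mips
    cap = resource_mips * granularity_s  # MI budget per group
    groups, current, total = [], [], 0
    for job in job_lengths_mi:
        if current and total + job > cap:
            groups.append(current)       # budget exceeded, close the group
            current, total = [], 0
        current.append(job)
        total += job
    if current:
        groups.append(current)
    return groups

# five small jobs, a 100-MIPS resource, 4-second granularity (400 MI budget)
groups = group_jobs([100, 200, 150, 300, 50], resource_mips=100, granularity_s=4)
```

Each group is then submitted as one unit, so the per-job submission and transfer overhead is paid once per group.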
Grid computing can involve a large number of computational tasks, which require trustworthy computational nodes. Load balancing in grid computing is a technique that optimizes the overall process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing, but it differs from conventional distributed computing in that it tends to be heterogeneous, more loosely coupled, and geographically dispersed. Optimizing this process means maximizing overall resource utilization with a balanced load on each processing unit, while also decreasing the overall completion time. Evolutionary algorithms such as genetic algorithms have been studied for implementing load balancing across grid networks, but these genetic algorithms are quite slow when a large number of tasks needs to be processed. In this paper we give a novel approach based on parallel genetic algorithms for enhancing the overall performance and optimization of the load-balancing process across grid nodes.
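A minimal (sequential) genetic algorithm for the underlying assignment problem might look like the sketch below: a chromosome maps each task to a node and fitness is the makespan. The parallel variant the paper proposes would evolve several such populations concurrently; that part, and all parameter values, are assumptions and not taken from the paper.

```python
import random

def ga_balance(tasks, n_nodes, pop=30, gens=60, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducibility
    def makespan(chrom):
        load = [0] * n_nodes
        for t, n in zip(tasks, chrom):
            load[n] += t
        return max(load)  # lower is better
    # random initial population: chrom[i] = node assigned to task i
    population = [[rng.randrange(n_nodes) for _ in tasks] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=makespan)
        survivors = population[: pop // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(tasks))    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # occasional mutation
                child[rng.randrange(len(tasks))] = rng.randrange(n_nodes)
            children.append(child)
        population = survivors + children
    best = min(population, key=makespan)
    return best, makespan(best)

best, ms = ga_balance([5, 3, 8, 2, 7, 4, 6, 1], n_nodes=2)
```

Parallelizing this means running independent islands of this loop and occasionally exchanging their best chromosomes, which addresses the slowness the abstract criticizes.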
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details, or to submit your article, please visit www.ijera.com
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID (ijgca)
Grid computing is an accumulation of heterogeneous, dynamic, geographically distributed resources from multiple administrative domains that can be utilized to reach a common goal. Developing resource provisioning-based scheduling in large-scale distributed environments such as grid computing brings new requirement challenges that do not arise in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it, in an environment that cannot otherwise fully satisfy user requirements. Satisfying users when providing resources can also increase the benefit to resource suppliers. Resource scheduling has to satisfy the multiple constraints specified by the user, and selecting a resource that satisfies multiple constraints is the most tedious part of the process. This problem is addressed by a particle swarm optimization (PSO) based heuristic scheduling algorithm that attempts to select the most suitable resource from the set of available resources. The primary parameters taken in this work for selecting the most suitable resource are makespan and cost. The experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
The document discusses optimization of resource allocation in computational grids. It proposes using a Teaching-Learning Based Optimization (TLBO) approach for resource allocation. The TLBO algorithm is found to outperform existing algorithms like Ant Colony Optimization, Genetic Algorithm, and Particle Swarm Optimization in terms of execution time and cost. The algorithm is simulated using GRIDSIM and results are presented. Existing resource allocation strategies in computational grids are also reviewed, including static and dynamic approaches as well as auction/market-based models.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
Load balancing functionality is crucial for the best grid performance and utilization. Accordingly, this paper presents a new meta-scheduling method called TunSys, inspired by the natural phenomenon of heat propagation and thermal equilibrium. TunSys is based on a grid polyhedron model with a sphere-like structure, used to ensure load balancing through a local neighborhood propagation strategy. Experimental comparisons with FCFS, DGA, and HGA show encouraging results in terms of system performance, scalability, and load-balancing efficiency.
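The heat-propagation analogy suggests a diffusion-style balancer: each node repeatedly moves a fraction of the load difference toward each neighbour until a thermal-equilibrium-like state is reached. The sketch below is that generic diffusion scheme on a ring of nodes, not TunSys's polyhedron model; the topology, loads, and diffusion rate are invented.

```python
def diffuse(load, neighbors, alpha=0.25, rounds=50):
    # heat-equation style relaxation: in every round each node absorbs
    # alpha/deg of the load difference with each of its neighbours
    load = dict(load)
    for _ in range(rounds):
        delta = {n: 0.0 for n in load}
        for n, nbrs in neighbors.items():
            for m in nbrs:
                delta[n] += alpha * (load[m] - load[n]) / len(nbrs)
        for n in load:
            load[n] += delta[n]
    return load

# a 4-node ring with all the load on one node converges toward 25 each
ring = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
out = diffuse({"a": 100.0, "b": 0.0, "c": 0.0, "d": 0.0}, ring)
```

On a regular topology like this ring the total load is conserved while the imbalance decays geometrically each round.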
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ... (ijccsa)
The fast development of knowledge and communication has established a new computational style known as cloud computing. One of the main issues for cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is central to achieving that goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. For the latter, one challenge is to select the placement approach that maps migrated virtual machines onto appropriate nodes. In this paper, an approach to reduce energy consumption in cloud data centers is proposed. The approach adapts the harmony search algorithm to migrate virtual machines: it performs placement by sorting nodes and virtual machines in descending order of priority, where priority is calculated from the workload. The proposed approach is simulated, and the evaluation results show a reduction in virtual machine migrations, an increase in efficiency, and a reduction in energy consumption.
KEYWORDS
Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
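The priority-ordered placement step described in the abstract (sorting nodes and virtual machines by workload-based priority in descending order) can be sketched as a first-fit-decreasing pass. The harmony-search improvisation loop around it is omitted, and the demand and capacity figures are invented.

```python
def place(vms, nodes):
    # vms: {vm: workload demand}; nodes: {node: free capacity}
    # sort both in descending order, then first-fit each VM
    order = sorted(vms.items(), key=lambda kv: kv[1], reverse=True)
    free = dict(sorted(nodes.items(), key=lambda kv: kv[1], reverse=True))
    placement = {}
    for vm, demand in order:
        for node, cap in free.items():
            if cap >= demand:
                placement[vm] = node
                free[node] = cap - demand  # consume capacity on that node
                break
    return placement

placement = place({"v1": 4, "v2": 3, "v3": 2}, {"n1": 5, "n2": 6})
```

Packing the largest VMs first tends to fill fewer nodes, which is what lets idle nodes be released to save energy.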
Dynamic selection of cluster head in networks for energy management (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Dynamic selection of cluster head in networks for energy management (eSAT Journals)
Abstract: In this project, we present a Multipath Region Routing (MRR) protocol for energy conservation in Wireless Sensor Networks (WSNs). Large-scale dense WSNs are used in many kinds of applications that require accurate monitoring, and energy conservation is an important issue in them. To save energy, the Multipath Region Routing protocol balances energy consumption and sustains the network lifespan. With this method, energy dissipation is reduced because the cluster head collects data directly from the other nodes; hence energy is preserved and the network lifetime is extended to a reasonable span. Keywords: Clustering; Wireless Sensor Networks; Security; Multipath Region Routing
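The abstract does not give MRR's cluster-head selection rule; a common heuristic, shown here purely for illustration, is to pick the node with the highest residual energy, breaking ties by connectivity.

```python
def select_cluster_head(nodes):
    # nodes: {id: {"energy": residual energy (J), "neighbors": degree}}
    # prefer the node with the most residual energy; on ties, the one
    # that can reach the most neighbours directly
    return max(nodes, key=lambda n: (nodes[n]["energy"], nodes[n]["neighbors"]))

nodes = {"n1": {"energy": 2.0, "neighbors": 4},
         "n2": {"energy": 3.5, "neighbors": 2},
         "n3": {"energy": 3.5, "neighbors": 5}}
head = select_cluster_head(nodes)
```

Re-running the selection periodically rotates the energy-expensive head role, which is what extends network lifetime.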
Task scheduling is a key process in large-scale distributed systems such as cloud computing infrastructures and has a strong impact on system performance. The problem is NP-hard for several reasons, including heterogeneous and dynamic resources and dependencies among the requests. Here we propose a bi-objective method called DWSGA to obtain a proper solution for allocating requests to resources. The purpose of this algorithm is to reach a good answer quickly through goal-oriented operations. First, it builds a good initial population using a bi-directional task-prioritization scheme. The algorithm then moves toward the most appropriate possible solution in a deliberate manner, focusing on optimizing the makespan while also aiming for a good distribution of workload across resources, using parameters that are effective in such systems. The experiments indicate that DWSGA improves the results with respect to these objectives as the number of tasks in the application graph increases. The results are compared with other studied algorithms.
AN EFFECTIVE CONTROL OF HELLO PROCESS FOR ROUTING PROTOCOL IN MANETS (IJCNCJournal)
In a mobile ad hoc network (MANET), link-connectivity updates are necessary to refresh the neighbor tables used in data transfer. The existing hello process exchanges link-connectivity information periodically, which is not adequate for a dynamic topology: slow updates of neighbor-table entries cause link failures, which degrade performance through packet drops, higher delay, energy consumption, and reduced throughput. In the dynamic hello technique, newly discovered and lost neighbor nodes are used to compute the link change rate (LCR) and the hello interval/refresh rate (r). Exchanging link-connectivity information too frequently, on the other hand, consumes unnecessary bandwidth and energy. In a MANET, resource wastage can be controlled by avoiding re-route discovery, frequent error notification, and local repair across the entire network. We enhance the existing hello process, and the result shows a significant improvement in performance.
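The abstract does not give the exact mapping from link change rate to hello interval, so the sketch below uses one plausible rule: shrink the interval as the fraction of changed neighbours grows, clamped to sane bounds. The base interval and bounds are invented.

```python
def hello_interval(new_neighbors, lost_neighbors, known, base=1.0,
                   min_i=0.5, max_i=4.0):
    # link change rate: fraction of the known neighbour set that changed
    lcr = (new_neighbors + lost_neighbors) / max(known, 1)
    # refresh faster when the topology churns, slower when it is stable
    interval = base / (1.0 + lcr)
    return min(max_i, max(min_i, interval))  # clamp to [min_i, max_i]

stable = hello_interval(0, 0, known=10)   # no churn: full base interval
churny = hello_interval(3, 2, known=10)   # half the table changed: faster
```

A stable neighbourhood keeps the long interval (saving bandwidth and energy), while churn triggers faster refreshes before stale entries cause link failures.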
Cross Layer- Performance Enhancement Architecture (CL-PEA) for MANET (ijcncs)
This document summarizes a proposed Cross Layer- Performance Enhancement Architecture (CL-PEA) for mobile ad hoc networks (MANETs). The key points are:
1) The existing TCP/IP architecture is not well suited to the dynamic topology and limited resources of MANETs. A cross-layer design in which all layers can exchange information is proposed to better optimize protocol performance.
2) The proposed CL-PEA adds a new hardware layer where parameters from the hardware, operating system, and other layers can be stored. This allows all layers to access information to make more informed decisions.
3) By exchanging parameters across layers, CL-PEA aims to enhance protocol performance in
This document provides an overview of the book "Fuzzy Multi-Criteria Decision Making: Theory and Applications with Recent Developments". It is edited by Cengiz Kahraman and contains 22 chapters on fuzzy multi-criteria decision making (MCDM) methods and applications. The book is divided into two parts, with the first focusing on fuzzy multiple-attribute decision making (MADM) techniques and applications, and the second on fuzzy multiple-objective decision making (MODM) methods. Some of the key MADM and MODM techniques covered include fuzzy analytic hierarchy process, fuzzy TOPSIS, fuzzy outranking methods, fuzzy multi-objective linear programming, and fuzzy multi-objective integer goal programming.
This document defines key concepts in multi-criteria decision making (MCDM) including criteria, alternatives, and decisions. It provides examples of single-criterion and multiple-criteria decision problems. For multiple-criteria problems, alternatives differ in more than one criterion and criteria are often competing. Formal MCDM analysis is useful when criteria are competing and trade-offs are difficult to evaluate. The document discusses types of MCDM problems and contexts for MCDM including mutually exclusive alternatives, portfolio selection, design, and measurement.
Decision-making in education based on multi-criteria ranking of alternatives (Vladimir Bakhrushin)
This document discusses methods of multi-criteria ranking used in decision making, including in education. It provides examples of linear convolution rankings, such as university rankings and competitive scores for Ukrainian higher education institution applicants. It also examines some uncertainty factors that can affect competitive scores, such as variations in test complexity and applicant preparedness levels across years. Analysis of Mathematics and English test data from 2011-2014 showed variations in average scores and passing thresholds from year to year can impact outcome scores by 2-10 points.
Integrative Approach to Work Psychology and The Integration of Multi Criteria... (H.Tezcan Uysal)
Abstract
The purpose of this study is to analyze work psychology from a holistic view, and thus to determine the right strategic management move using a multi-criteria decision-making method, by performing positive and negative work-psychology analysis. In the study, the perceptions of positive and negative work psychology of 221 employees were determined through a survey. The data were processed with correlation and regression methods, and a new data set was obtained for ELECTRE analysis, a multi-criteria decision-making method. The ELECTRE analysis was then run using the positive work-psychology outputs as alternatives and the negative work-psychology outputs as criteria. The analyses of employees' work psychology revealed a reasonably significant relation between the outputs of positive and negative work psychology, but this alone could not establish which action plan managers should implement. That problem was solved through ELECTRE analysis, which determined that, among the outputs of positive work psychology, "job satisfaction" was the most dominant output for enhancing work psychology.
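The pairwise-comparison core of ELECTRE is the concordance matrix: for alternatives a and b, C[a][b] sums the weights of the criteria on which a is at least as good as b. A minimal sketch of that step, with invented alternatives and weights (the full method adds discordance indices and outranking thresholds):

```python
def concordance(matrix, weights):
    # matrix: {alternative: [criterion scores]}; higher is better here
    # C[a][b] = total weight of criteria where a scores at least as well as b
    alts = list(matrix)
    return {a: {b: sum(w for w, x, y in zip(weights, matrix[a], matrix[b])
                       if x >= y)
                for b in alts if b != a}
            for a in alts}

# A beats B on the first criterion (weight 0.6), loses on the second (0.4)
c = concordance({"A": [7, 5], "B": [6, 8]}, [0.6, 0.4])
```

An alternative whose concordance over every rival exceeds a chosen threshold is said to outrank them, which is how ELECTRE singled out "job satisfaction" as dominant.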
This document discusses using intuitionistic fuzzy sets for multi-criteria decision making. It defines intuitionistic fuzzy sets and describes approaches to multi-criteria decision making including the score function method. The document provides an example of using the score function method to select the best air conditioning system by establishing criteria, alternatives, an intuitionistic decision matrix, and calculating score functions to rank the alternatives.
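The score-function method mentioned above reduces each intuitionistic fuzzy value (a membership degree mu and a non-membership degree nu) to S = mu - nu and ranks alternatives by aggregate score. A minimal sketch with invented criterion values:

```python
def score(mu, nu):
    # score function of an intuitionistic fuzzy value: membership minus
    # non-membership; higher means stronger support for the alternative
    return mu - nu

def rank(alternatives):
    # alternatives: {name: [(mu, nu) per criterion]}
    # aggregate by the mean score across criteria (an unweighted choice
    # made here for simplicity), then sort descending
    agg = {a: sum(score(m, n) for m, n in vals) / len(vals)
           for a, vals in alternatives.items()}
    return sorted(agg, key=agg.get, reverse=True)

alts = {"A": [(0.8, 0.1), (0.6, 0.2)],   # e.g. two air-conditioning systems
        "B": [(0.5, 0.3), (0.7, 0.2)]}   # rated on two criteria
ordering = rank(alts)
```

The hesitation margin 1 - mu - nu is ignored by this basic score function; refined variants fold it back in.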
The document discusses multiple criteria decision making (MCDM) approaches. It introduces several common MCDM methods: the weighted score method, TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method, and Analytic Hierarchy Process (AHP). It then provides a detailed example of how to apply the weighted score method and TOPSIS method to a problem of selecting the best car based on criteria like style, reliability, fuel economy, and cost.
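The weighted score method is the simplest of the three approaches: multiply each criterion score by its weight and sum. A sketch with invented car scores (all criteria are assumed already converted so that higher is better; a real cost criterion would first be inverted):

```python
def weighted_score(matrix, weights):
    # matrix: {alternative: [criterion scores]}, weights sum to 1
    # the alternative with the largest weighted sum wins
    return {a: sum(w * s for w, s in zip(weights, scores))
            for a, scores in matrix.items()}

# two cars scored 0-10 on style, reliability, and fuel economy,
# weighted 0.3 / 0.4 / 0.3
totals = weighted_score({"civic": [7, 9, 9], "saturn": [8, 7, 8]},
                        [0.3, 0.4, 0.3])
```

TOPSIS refines this by measuring each alternative's distance to ideal and worst points instead of summing weighted scores directly.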
THRESHOLD BASED VM PLACEMENT TECHNIQUE FOR LOAD BALANCED RESOURCE PROVISIONIN...IJCNCJournal
The unbalancing load issue is a multi-variation, multi-imperative issue that corrupts the execution and productivity of processing assets. Workload adjusting methods give solutions of load unbalancing circumstances for two bothersome aspects over-burdening and under-stacking. Cloud computing utilizes planning and workload balancing for a virtualized environment, resource partaking in cloud foundation. These two factors must be handled in an improved way in cloud computing to accomplish ideal resource sharing. Henceforth, there requires productive resource, asset reservation for guaranteeing load advancement in the cloud. This work aims to present an incorporated resource, asset reservation, and workload adjusting calculation for effective cloud provisioning. The strategy develops a Priority-based Resource Scheduling Model to acquire the resource, asset reservation with threshold-based load balancing for improving the proficiency in cloud framework. Extending utilization of Virtual Machines through the suitable and sensible outstanding task at hand modifying is then practiced by intensely picking a job from submitting jobs using Priority-based Resource Scheduling Model to acquire resource asset reservation. Experimental evaluations represent, the proposed scheme gives better results by reducing execution time, with minimum resource cost and improved resource utilization in dynamic resource provisioning conditions.
Effective and Efficient Job Scheduling in Grid Computing (Aditya Kokadwar)
The integration of remote and diverse resources and the increasing computational needs of Grand Challenge problems, combined with the rapid growth of the internet and communication technologies, have led to the development of global computational grids. Grid computing is a prevailing technology that unites underutilized resources to support sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential to achieve the promised capabilities of grids. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate the resources with minimum cost, merging this with a conventional job-grouping strategy to provide better and more efficient job scheduling that benefits both the user and the resource broker. The proposed scheduling approach employs a dynamic cost-based job scheduling algorithm to map jobs efficiently to available resources in the grid. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
Grid computing can involve a large number of computational tasks, which require trustworthy computational nodes. Load balancing in grid computing is a technique that optimizes the overall process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing, but it differs from conventional distributed computing in that it tends to be heterogeneous, more loosely coupled, and geographically dispersed. Optimizing this process means maximizing resource utilization while balancing the load on each processing unit and decreasing the overall completion time. Evolutionary algorithms such as genetic algorithms have been studied for implementing load balancing across grid networks, but these genetic algorithms are quite slow when a large number of tasks must be processed. In this paper we present a novel approach based on parallel genetic algorithms for enhancing the overall performance and optimization of load balancing across grid nodes.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID (ijgca)
Grid computing is a collection of heterogeneous, dynamic, geographically distributed resources from multiple administrative domains that can be utilized to reach a common goal. Developing resource-provisioning-based scheduling in large-scale distributed environments such as grids raises new requirements that do not arise in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it; satisfying users while provisioning resources can also increase the benefit to resource suppliers. Resource scheduling has to satisfy multiple constraints specified by the user, and choosing a resource under multiple constraints is a tedious process. This problem is addressed by a particle swarm optimization based heuristic scheduling algorithm that attempts to select the most suitable resource from the set of available resources. The primary parameters considered in this work for selecting the most suitable resource are makespan and cost. Experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
The document discusses optimization of resource allocation in computational grids. It proposes using a Teaching-Learning Based Optimization (TLBO) approach for resource allocation. The TLBO algorithm is found to outperform existing algorithms like Ant Colony Optimization, Genetic Algorithm, and Particle Swarm Optimization in terms of execution time and cost. The algorithm is simulated using GRIDSIM and results are presented. Existing resource allocation strategies in computational grids are also reviewed, including static and dynamic approaches as well as auction/market-based models.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
Load-balancing functionality is crucial for the best grid performance and utilization. Accordingly, this paper presents a new meta-scheduling method called TunSys, inspired by the natural phenomenon of heat propagation and thermal equilibrium. TunSys is based on a grid polyhedron model with a sphere-like structure, used to ensure load balancing through a local neighborhood propagation strategy. Experimental results compared to FCFS, DGA, and HGA are encouraging in terms of system performance, scalability, and load-balancing efficiency.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ... (ijccsa)
The rapid development of knowledge and communication technology has established a new computational style known as cloud computing. One of the main issues for cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is essential to achieving that goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. For the latter, one of the challenges is selecting the placement of the migrated virtual machines on appropriate nodes. In this paper, an approach to reduce the energy consumption of cloud data centers is proposed. It adapts the harmony search algorithm to migrate virtual machines, performing placement by sorting the nodes and virtual machines in descending order of priority, where priority is calculated from the workload. The proposed approach is simulated; evaluation results show a reduction in virtual machine migrations, increased efficiency, and reduced energy consumption.
KEYWORDS
Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
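The placement rule described in the abstract, sorting nodes and virtual machines by workload-based priority in descending order, can be sketched as a simple first-fit pass. The node capacities and VM workloads below are hypothetical, and this illustrates only the sorting-and-placement idea, not the paper's harmony search algorithm itself:

```python
# Sketch of descending-priority placement: sort nodes and VMs by workload
# (priority) in descending order, then place each VM on the first node with
# enough remaining capacity. All names and numbers are hypothetical.

def place_vms(nodes, vms):
    """nodes: name -> capacity; vms: name -> workload. Returns vm -> node."""
    free = dict(sorted(nodes.items(), key=lambda kv: kv[1], reverse=True))
    placement = {}
    for vm, load in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        for node, cap in free.items():
            if cap >= load:
                placement[vm] = node
                free[node] = cap - load   # consume capacity on that node
                break
    return placement

nodes = {"n1": 10, "n2": 6}
vms = {"vm1": 7, "vm2": 5, "vm3": 3}
placement = place_vms(nodes, vms)
```

Placing the heaviest VMs first tends to leave fewer nodes partially filled, which is the consolidation effect the abstract aims for.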
Dynamic selection of cluster head in networks for energy management (eSAT Publishing House)
Abstract: In this project, we present the Multipath Region Routing (MRR) protocol for energy conservation in Wireless Sensor Networks (WSNs). Large-scale dense WSNs are used in many types of applications for accurate monitoring, and energy conservation is an important issue in such networks. To save energy, the Multipath Region Routing protocol balances energy consumption and sustains the network lifespan. With this method, energy dissipation is reduced because the cluster head collects data directly from other nodes; hence energy is preserved and network lifetime is extended to a reasonable span. Keywords: Clustering; Wireless Sensor Networks; Security; Multipath Region Routing
Task scheduling is a key process in large-scale distributed systems such as cloud computing infrastructures and has a major impact on system performance. The problem is NP-hard for several reasons, including heterogeneous and dynamic features and dependencies among the requests. Here we propose a bi-objective method called DWSGA to obtain a proper solution for allocating requests to resources. The purpose of the algorithm is to obtain a solution quickly through goal-oriented operations. First, it builds a good initial population using bi-directional task prioritization. The algorithm then moves toward the most appropriate feasible solution by focusing on optimizing the makespan while also considering a good distribution of workload over resources, using parameters that are effective in such systems. Experiments indicate that DWSGA improves the results on both objectives as the number of tasks in the application graph increases. The results are compared with other studied algorithms.
AN EFFECTIVE CONTROL OF HELLO PROCESS FOR ROUTING PROTOCOL IN MANETS (IJCNCJournal)
In a mobile ad hoc network (MANET), updating link connectivity is necessary to refresh the neighbor tables used in data transfer. The existing hello process exchanges link-connectivity information periodically, which is not adequate for a dynamic topology: slow updates of neighbor-table entries cause link failures, which degrade performance through packet drops, increased delay, energy consumption, and reduced throughput. In the dynamic hello technique, new neighbor nodes and lost neighbor nodes are used to compute the link change rate (LCR) and the hello interval/refresh rate (r). Exchanging link-connectivity information at too fast a rate consumes unnecessary bandwidth and energy. In MANETs, resource wastage can be controlled by avoiding re-route discovery, frequent error notification, and local repair across the entire network. We enhance the existing hello process, which shows significant improvement in performance.
Cross Layer- Performance Enhancement Architecture (CL-PEA) for MANET (ijcncs)
This document summarizes a proposed Cross Layer- Performance Enhancement Architecture (CL-PEA) for mobile ad hoc networks (MANETs). The key points are:
1) Existing TCP/IP architecture is not well-suited for the dynamic topology and limited resources of MANETs. A cross-layer design where all layers can exchange information is proposed to better optimize protocol performance.
2) The proposed CL-PEA adds a new hardware layer where parameters from the hardware, operating system, and other layers can be stored. This allows all layers to access information to make more informed decisions.
3) By exchanging parameters across layers, CL-PEA aims to enhance protocol performance in MANETs.
This document provides an overview of the book "Fuzzy Multi-Criteria Decision Making: Theory and Applications with Recent Developments". It is edited by Cengiz Kahraman and contains 22 chapters on fuzzy multi-criteria decision making (MCDM) methods and applications. The book is divided into two parts, with the first focusing on fuzzy multiple-attribute decision making (MADM) techniques and applications, and the second on fuzzy multiple-objective decision making (MODM) methods. Some of the key MADM and MODM techniques covered include fuzzy analytic hierarchy process, fuzzy TOPSIS, fuzzy outranking methods, fuzzy multi-objective linear programming, and fuzzy multi-objective integer goal programming.
This document defines key concepts in multi-criteria decision making (MCDM) including criteria, alternatives, and decisions. It provides examples of single-criterion and multiple-criteria decision problems. For multiple-criteria problems, alternatives differ in more than one criterion and criteria are often competing. Formal MCDM analysis is useful when criteria are competing and trade-offs are difficult to evaluate. The document discusses types of MCDM problems and contexts for MCDM including mutually exclusive alternatives, portfolio selection, design, and measurement.
Decision-making in education based on multi-criteria ranking of alternatives (Vladimir Bakhrushin)
This document discusses methods of multi-criteria ranking used in decision making, including in education. It provides examples of linear convolution rankings, such as university rankings and competitive scores for Ukrainian higher education institution applicants. It also examines some uncertainty factors that can affect competitive scores, such as variations in test complexity and applicant preparedness levels across years. Analysis of Mathematics and English test data from 2011-2014 showed variations in average scores and passing thresholds from year to year can impact outcome scores by 2-10 points.
Integrative Approach to Work Psychology and The Integration of Multi Criteria... (H.Tezcan Uysal)
Abstract
The purpose of this study is to analyze work psychology from a holistic view and to determine the right strategic management move using a multi-criteria decision-making method, by performing positive and negative work-psychology analyses. In the study, the positive and negative work-psychology perceptions of 221 employees were determined through a survey. The data were processed with correlation and regression methods, and a new data set was obtained for ELECTRE analysis, a multi-criteria decision-making method. The ELECTRE cycle used the positive work-psychology outputs as alternatives and the negative ones as criteria. The analyses revealed a reasonably significant relation between the outputs of positive and negative work psychology, but this alone could not identify the action plan managers should implement; that problem was solved through ELECTRE analysis. The ELECTRE analysis determined that, among the outputs of positive work psychology, "job satisfaction" was the most dominant output for enhancing work psychology.
The document discusses the formation of an ISPOR task force to develop guidance on the use of multi-criteria decision analysis (MCDA) in healthcare decision making. The task force will define MCDA, identify different MCDA techniques, and provide guidance on which techniques are best suited for different types of healthcare decisions. The document also discusses proposed definitions of MCDA and debates the appropriate scope and focus of the task force's work.
TOPSIS - A multi-criteria decision making approach (Presi)
This document discusses the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method for multi-criteria decision making (MCDM). It defines key terms like alternatives, criteria, weights, and decision matrices. It then outlines the steps in the TOPSIS method, which include standardizing the decision matrix, determining the ideal and negative ideal solutions, calculating the separation from each alternative to the ideal and negative ideal solutions, and selecting the alternative with the shortest distance from the ideal and farthest from the negative ideal solution.
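The steps listed above can be turned into a compact TOPSIS sketch: normalize the decision matrix, weight it, find the ideal and negative-ideal solutions, compute each alternative's distance to both, and rank by relative closeness. The matrix, weights, and benefit/cost flags below are hypothetical, and vector normalization is one common choice among several:

```python
# TOPSIS sketch. matrix[i][j]: rating of alternative i on criterion j;
# benefit[j]: True if higher is better for criterion j (False for cost-type).
import math

def topsis(matrix, weights, benefit):
    m, n = len(matrix), len(matrix[0])
    # Step 1-2: vector-normalize columns, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Step 3: ideal and negative-ideal solutions per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nadir = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Step 4-5: distances and relative closeness (higher = better).
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, nadir)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

matrix = [[7, 9, 9, 8],    # alternative A
          [8, 7, 8, 7]]    # alternative B
weights = [0.3, 0.4, 0.2, 0.1]
benefit = [True, True, True, False]   # last criterion (e.g. cost) is minimized
scores = topsis(matrix, weights, benefit)
```

The alternative with the largest closeness score is simultaneously nearest the ideal and farthest from the negative ideal, which is exactly the selection rule the summary describes.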
An enhanced adaptive scoring job scheduling algorithm with replication strate... (eSAT Publishing House)
This document describes an enhanced adaptive scoring job scheduling algorithm with replication strategy for grid environments. The algorithm aims to improve upon an existing adaptive scoring job scheduling algorithm by identifying whether jobs are data-intensive or computation-intensive. It then divides large jobs into subtasks, replicates the subtasks, and allocates the replicas to clusters based on a computed cluster score in order to improve resource utilization and job completion times. The algorithm is evaluated through simulation using the GridSim toolkit.
Job Scheduling on the Grid Environment using Max-Min Firefly Algorithm (Editor IJCATR)
Grid computing is the next generation of distributed systems; its goal is to create a powerful, large, autonomous virtual computer from countless heterogeneous resources, with the purpose of sharing them. Scheduling is one of the main steps in exploiting the capabilities of emerging computing systems such as the grid. Scheduling jobs in computational grids is known to be an NP-complete problem because of the heterogeneity of resources. Grid resources belong to different management domains, each applying different management policies. Since the grid is heterogeneous and dynamic, techniques used in traditional systems cannot be applied directly to grid scheduling, so new methods must be found. This paper proposes a new algorithm that combines the firefly algorithm with the Max-Min algorithm for scheduling jobs on the grid. The firefly algorithm is a swarm-based technique inspired by the social behavior of fireflies in nature; fireflies move through the problem's search space to find optimal or near-optimal solutions. The goals of this paper are to minimize the makespan and the flowtime of the jobs simultaneously. Experiments and simulation results show that the proposed method is more efficient than the compared algorithms.
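The Max-Min heuristic that the paper hybridizes with the firefly algorithm can be sketched on its own: among the unscheduled jobs, compute each job's minimum completion time over all resources, then schedule the job whose minimum is the largest on its best resource. The expected-time-to-compute (ETC) matrix below is hypothetical:

```python
# Max-Min batch scheduling heuristic (standalone sketch, hypothetical ETC).

def max_min(etc):
    """etc[j][r]: expected time of job j on resource r. Returns (schedule, makespan)."""
    n_res = len(etc[0])
    ready = [0.0] * n_res                  # time each resource becomes free
    unscheduled = set(range(len(etc)))
    schedule = {}
    while unscheduled:
        # Best (earliest-completing) resource for each remaining job.
        best = {j: min(range(n_res), key=lambda r: ready[r] + etc[j][r])
                for j in unscheduled}
        # Max-Min rule: pick the job whose MINIMUM completion time is LARGEST.
        job = max(unscheduled, key=lambda j: ready[best[j]] + etc[j][best[j]])
        res = best[job]
        ready[res] += etc[job][res]
        schedule[job] = res
        unscheduled.remove(job)
    return schedule, max(ready)

etc = [[3, 5],    # job 0 on resources 0, 1
       [4, 2],    # job 1
       [8, 6]]    # job 2
schedule, makespan = max_min(etc)
```

Scheduling the longest jobs first lets the many short jobs fill in around them, which usually yields a lower makespan than Min-Min on mixes dominated by short jobs.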
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR... (ijgca)
Grid computing extends the computing platform to a collection of heterogeneous computing resources connected by a network across dynamic and geographically dispersed organizations, forming a distributed high-performance computing infrastructure. Grid computing solves complex computing problems across multiple machines and meets large-scale computational demands in a high-performance computing environment. The main emphasis in grid computing is on resource management and the job scheduler, whose goal is to maximize resource utilization and minimize job processing time. Existing grid scheduling approaches do not give much emphasis to the scheduler's processing-time performance; schedulers typically allocate resources to jobs using the First Come First Serve algorithm. In this paper, we provide an optimized algorithm for the scheduler's queue using scheduling methods such as Shortest Job First, First In First Out, and Round Robin. The job scheduling system is responsible for selecting the most suitable machines in a grid for user jobs; the management and scheduling system generates job schedules for each machine by taking static restrictions and dynamic parameters of jobs and machines into consideration. The main purpose of this paper is to develop an efficient job scheduling algorithm that maximizes resource utilization and minimizes job processing time. Queues can be optimized with various scheduling algorithms depending on the performance criterion to be improved, e.g. response time or throughput. The work has been done in MATLAB using the Parallel Computing Toolbox.
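The gain from reordering the scheduler's queue can be seen with a small sketch comparing First In First Out with Shortest Job First on the same jobs (the job lengths are hypothetical; this illustrates the general effect, not the paper's MATLAB implementation):

```python
# Average waiting time under FIFO vs Shortest-Job-First for one queue.
# A job's waiting time = total length of the jobs that run before it.

def avg_waiting(jobs):
    wait = elapsed = 0
    for length in jobs:
        wait += elapsed
        elapsed += length
    return wait / len(jobs)

queue = [8, 1, 3, 2]                 # hypothetical job lengths, arrival order
fifo = avg_waiting(queue)            # run in arrival order
sjf = avg_waiting(sorted(queue))     # run shortest job first
```

Running short jobs first minimizes average waiting time (SJF is provably optimal for that metric on a fixed batch), which is why the paper mixes it into the queue-optimization strategy.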
This document describes a proposed grouping based job scheduling algorithm for grid computing that aims to maximize resource utilization and minimize job processing times. It discusses related work on job scheduling algorithms and then presents the steps of the proposed algorithm. The algorithm uses shortest job first, first-in first-out, and round robin scheduling to process jobs in groups. The algorithm is evaluated experimentally in MATLAB and shown to reduce total job processing time compared to using only first-in first-out scheduling. Graphs demonstrate the processing time improvements achieved by the combined scheduling approach.
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION... (ijcsit)
This document summarizes a research paper that proposes a new method for improving both fault tolerance and load balancing in grid computing networks. The method converts the tree structure of grid computing nodes into a distributed R-tree index structure and then applies an entropy estimation technique. This entropy estimation helps discard nodes with high entropy from the tree, reducing complexity. The method then uses thresholding and control algorithms to select optimal route paths based on load balance and fault tolerance. Various optimization techniques like genetic algorithms, ant colony optimization, and particle swarm optimization are also applied to reach better solutions. Experimental results showed the proposed method improved performance over other existing methods.
Optimization of resource allocation in computational grids (ijgca)
The resource allocation in Grid computing system needs to be scalable, reliable and smart. It should also be adaptable to change its allocation mechanism depending upon the environment and user’s requirements. Therefore, a scalable and optimized approach for resource allocation where the system can adapt itself to the changing environment and the fluctuating resources is essentially needed. In this paper, a Teaching Learning based optimization approach for resource allocation in Computational Grids is proposed. The proposed algorithm is found to outperform the existing ones in terms of execution time and cost. The algorithm is simulated using GRIDSIM and the simulation results are presented.
RSDC (Reliable Scheduling Distributed in Cloud Computing) (IJCSEA Journal)
This document summarizes the PPDD algorithm for scheduling divisible loads originating from multiple sites in distributed computing environments. The PPDD algorithm is a two-phase approach that first derives a near-optimal load distribution and then considers actual communication delays when transferring load fractions. It guarantees a near-optimal solution and improved performance over previous algorithms like RSA by avoiding unnecessary load transfers between processors.
DGBSA: A BATCH JOB SCHEDULING ALGORITHM WITH GA WITH REGARD TO THE THRESHOLD ... (IJCSEA Journal)
In this paper, we provide a scheduler for batch jobs using a GA with a threshold detector. The proposed algorithm schedules independent batch jobs with a new technique so that their schedule can be optimized: a threshold detector selects jobs, and processing resources then process the batch jobs by priority. The hierarchy of tasks in each batch is determined using the DGBSA algorithm. Building on previous work, we add specific parameters to the fitness function of earlier algorithms to develop an improved fitness function, which is used in the proposed algorithm. According to our assessment, DGBSA outperforms similar algorithms: the effective parameters used in the proposed algorithm reduce the total wasted time compared with previous algorithms, and the algorithm improves on earlier problems in batch processing with a new technique.
Efficient Resource Management Mechanism with Fault Tolerant Model for Computa... (Editor IJCATR)
Grid computing provides a framework and deployment environment that enables resource sharing, accessing, aggregation, and management. It allows coordinated use of various resources in dynamic, distributed virtual organizations. Grid scheduling is responsible for resource discovery, resource selection, and job assignment over a decentralized heterogeneous system. In the existing system, a primary-backup approach is used for fault tolerance in a single environment: each task has a primary copy and a backup copy on two different processors. For dependent tasks, precedence constraints among tasks must be considered when scheduling backup copies and overloading backups; two algorithms have been developed to schedule backups of dependent and independent tasks. The proposed work manages resource failures in grid job scheduling. In this method, data sources and resources are integrated from different geographical environments, and fault-tolerant scheduling with the primary-backup approach is used to handle job failures in the grid environment. The impact of communication protocols is also considered: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are used to distribute each task's messages to grid resources.
A Survey of File Replication Techniques In Grid Systems (Editor IJCATR)
A grid is a type of parallel and distributed system designed to provide reliable access to data and computational resources in wide-area networks. These resources are distributed in different geographical locations. Efficient data sharing in global networks is complicated by erratic node failure, unreliable network connectivity, and limited bandwidth. Replication is a technique used in grid systems to improve applications' response time and to reduce bandwidth consumption. In this paper, we present a survey of basic and new replication techniques proposed by other researchers, followed by a full comparative study of these replication strategies.
This document provides a survey of file replication techniques used in grid systems. It begins with an introduction to grid systems and discusses their use of replication to improve response times and reduce bandwidth consumption. It then categorizes replication techniques as static or dynamic and describes challenges of replication including maintaining consistency and overhead. The document surveys various replication strategies for different grid topologies like peer-to-peer, tree and hybrid. It evaluates strategies based on factors like access latency, bandwidth consumption and fault tolerance. Specific replication techniques are discussed for peer-to-peer architectures aimed at availability, placement strategies and balancing workloads.
Max Min Fair Scheduling Algorithm in Grid Scheduling with Load Balancing (IJORCS)
This paper shows the importance of fair scheduling in a grid environment, whereby all tasks get an equal amount of execution time so that none starves. Load balancing of the available resources in the computational grid is another important factor; this paper considers a uniform load given to the resources and, to achieve this, applies load balancing after scheduling the jobs. It also considers execution cost and bandwidth cost for the algorithms used here, because in a grid environment the resources are geographically distributed. In the implementation of this approach, the proposed algorithm reaches an optimal solution and minimizes the makespan as well as the execution cost and bandwidth cost.
TAXONOMY OF OPTIMIZATION APPROACHES OF RESOURCE BROKERS IN DATA GRIDS (ijcsit)
A novel taxonomy of replica selection techniques is proposed. We studied several data grid approaches whose data-management selection strategies differ. The aim of the study is to determine the common concepts, observe their performance, and compare their performance with our strategy.
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ...Editor IJCATR
This document summarizes a research paper that proposes an optimized ant colony optimization (ACO) algorithm for task scheduling in cloud computing. The goal is to minimize makespan and cost while improving fairness and load balancing. The ACO algorithm is adapted to prioritize and fairly allocate tasks to machines based on their performance. Simulations show the proposed ACO algorithm reduces makespan by 80% compared to Berger and greedy algorithms. It also increases processor utilization and balances loads across machines better than the other algorithms. The researchers conclude the optimized ACO approach improves resource usage and user satisfaction for task scheduling in cloud computing.
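The paper's exact ACO variant is not reproduced in this summary, so the following is a generic ant-colony task-mapping sketch under assumed parameters: ants build task-to-machine assignments probabilistically from pheromone and a greedy completion-time heuristic, and the best-so-far mapping reinforces the pheromone trail.

```python
import random

def aco_schedule(etc, ants=20, iters=50, alpha=1.0, beta=2.0, rho=0.1, seed=1):
    """Toy ant colony optimization for task -> machine mapping.

    `etc[t][m]` is the expected time to compute task t on machine m.
    Returns (best_assignment, best_makespan). Parameters are illustrative.
    """
    rng = random.Random(seed)
    n_tasks, n_mach = len(etc), len(etc[0])
    tau = [[1.0] * n_mach for _ in range(n_tasks)]   # pheromone trail
    best, best_ms = None, float("inf")
    for _ in range(iters):
        for _ant in range(ants):
            load = [0.0] * n_mach
            assign = []
            for t in range(n_tasks):
                # desirability: pheromone^alpha * (1/completion_time)^beta
                w = [tau[t][m] ** alpha * (1.0 / (load[m] + etc[t][m])) ** beta
                     for m in range(n_mach)]
                m = rng.choices(range(n_mach), weights=w)[0]
                assign.append(m)
                load[m] += etc[t][m]
            ms = max(load)                           # makespan of this ant
            if ms < best_ms:
                best, best_ms = assign, ms
        for t in range(n_tasks):                     # evaporation
            for m in range(n_mach):
                tau[t][m] *= (1 - rho)
        for t, m in enumerate(best):                 # reinforce best-so-far
            tau[t][m] += 1.0 / best_ms
    return best, best_ms
```

The fairness and cost terms the paper optimizes would enter through the ant's desirability weights; here only makespan is modeled.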
A novel scheduling algorithm for cloud computing environmentSouvik Pal
The document describes a proposed genetic algorithm-based scheduling approach for cloud computing environments. It aims to minimize waiting time and queue length. The algorithm first permutes task burst times and finds minimum waiting times using FCFS and genetic algorithms. It then applies a queuing model to the sequences with minimum waiting time from each approach. Experimental results on 4 sample tasks show the genetic algorithm reduces waiting time compared to FCFS. The genetic operators of selection, crossover and mutation are applied to evolve optimal task scheduling sequences.
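The described pipeline (permute task orders, compare GA against FCFS on waiting time) can be sketched as below. The genetic operators follow the summary; population size, generation count, and mutation rate are assumed values, and the FCFS order is seeded into the population so the GA can only match or beat it.

```python
import random

def avg_waiting(order, burst):
    """Average waiting time when tasks run in the given order."""
    wait, elapsed = 0.0, 0.0
    for t in order:
        wait += elapsed
        elapsed += burst[t]
    return wait / len(order)

def ga_order(burst, pop_size=20, gens=40, seed=0):
    """Tiny GA evolving a task order that lowers average waiting time."""
    rng = random.Random(seed)
    n = len(burst)
    # seed the population with the FCFS order, plus random permutations
    pop = [list(range(n))] + [rng.sample(range(n), n) for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=lambda o: avg_waiting(o, burst))
        survivors = pop[: pop_size // 2]              # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                 # order crossover
            child = a[:cut] + [t for t in b if t not in a[:cut]]
            if rng.random() < 0.2:                    # swap mutation
                i, j = rng.randrange(n), rng.randrange(n)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: avg_waiting(o, burst))
```

On bursts (8, 4, 2, 6), FCFS averages 8.5 time units of waiting, while the shortest-job-first order (which the GA converges toward) averages 5.0.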
Bragged Regression Tree Algorithm for Dynamic Distribution and Scheduling of ...Editor IJCATR
In the past few years, Grid computing emerged as a next-generation computing platform that combines
heterogeneous computing resources connected by a network across dynamic and geographically separated
organizations, providing an ideal environment for solving large-scale computational demands. Grid computing
demands keep increasing day by day due to the rise in the number of complex jobs worldwide, so jobs may take
much longer to complete when batches or groups of jobs are poorly distributed to inappropriate CPUs. There is
therefore a need for an efficient dynamic job scheduling algorithm that assigns jobs to appropriate CPUs
dynamically. The main problem dealt with in this paper is how to distribute jobs when the payload, importance,
urgency, flow time, etc. keep changing dynamically as the grid expands or is flooded with job requests from
different machines within the grid.
In this paper, we present a scheduling strategy that takes advantage of a decision tree algorithm to make
dynamic decisions based on the current scenario and that automatically incorporates factor analysis when
considering the distribution of jobs.
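A bagged tree ensemble of the kind the title refers to can be sketched with one-split regression trees (stumps) predicting, say, a job's runtime from its features. This is a minimal generic illustration of bagging, not the paper's algorithm; the features, targets, and tree depth are assumptions.

```python
import random

def fit_stump(X, y):
    """Fit a one-split regression tree (stump) minimizing squared error."""
    best = None  # (sse, feature, threshold, left_mean, right_mean)
    for f in range(len(X[0])):
        for thr in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= thr]
            right = [yi for row, yi in zip(X, y) if row[f] > thr]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - lm) ** 2 for v in left)
                   + sum((v - rm) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, f, thr, lm, rm)
    if best is None:  # degenerate sample: predict the global mean
        m = sum(y) / len(y)
        return (0, float("inf"), m, m)
    return best[1:]

def fit_bagged(X, y, n_trees=10, seed=0):
    """Bagging: fit each stump on a bootstrap resample of the jobs."""
    rng = random.Random(seed)
    n = len(X)
    models = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        models.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return models

def bagged_predict(models, x):
    """Average the stump predictions (e.g. estimated job runtime)."""
    return sum(lm if x[f] <= thr else rm
               for f, thr, lm, rm in models) / len(models)
```

A dynamic scheduler could then route each incoming job to the CPU with the lowest predicted runtime, refitting the ensemble as the grid's load profile changes.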
A survey of various scheduling algorithm in cloud computing environmenteSAT Journals
Abstract Cloud computing is known as a provider of dynamic services using very large, scalable and virtualized resources over the Internet. Because the cloud computing field is new, there are few standard task scheduling algorithms for cloud environments; in particular, the high communication cost in the cloud prevents well-known task schedulers from being applied in large-scale distributed environments. Today, researchers attempt to build job scheduling algorithms that are compatible with and applicable in cloud computing environments. Job scheduling is the most important task in a cloud computing environment because users pay for resources based on usage time. Hence, efficient utilization of resources is essential, and scheduling plays a vital role in getting maximum benefit from the resources. In this paper we study various scheduling algorithms and the issues related to them in cloud computing. Index Terms: cloud computing, scheduling, algorithm
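One of the conventional schedulers such surveys typically cover is Min-Min: repeatedly pick the task whose earliest possible completion time is smallest and bind it to that machine. A compact sketch (the matrix interface is an assumption):

```python
def min_min(etc):
    """Min-Min heuristic: repeatedly schedule the task whose earliest
    completion time is smallest (etc[t][m] = time of task t on machine m)."""
    n_tasks, n_mach = len(etc), len(etc[0])
    ready = [0.0] * n_mach               # machine-available times
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        best_t, best_m, best_ct = None, None, float("inf")
        for t in unscheduled:            # earliest completion over all pairs
            for m in range(n_mach):
                ct = ready[m] + etc[t][m]
                if ct < best_ct:
                    best_t, best_m, best_ct = t, m, ct
        schedule[best_t] = best_m
        ready[best_m] = best_ct
        unscheduled.remove(best_t)
    return schedule, max(ready)          # mapping and resulting makespan
```

Max-Min is the mirror image (take the task whose best completion time is largest first); both ignore dynamic network status, which is one criticism raised in this survey literature.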
A survey of various scheduling algorithm in cloud computing environmenteSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Similar to Propose a Method to Improve Performance in Grid Environment, Using Multi-Criteria Decision Making Techniques (20)
Text Mining in Digital Libraries using OKAPI BM25 ModelEditor IJCATR
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not apply relevance ranking to retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving relevance ranking in digital libraries. Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted, and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval; relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. Therefore, this paper proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
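The standard Okapi BM25 scoring function referenced above can be written in a few lines. This sketch assumes documents are already tokenized into word lists and uses common default parameters (k1 = 1.5, b = 0.75); the paper's exact preprocessing is not reproduced.

```python
import math

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of `doc` for `query` (all are token lists)."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)   # +1 keeps idf positive
        f = doc.count(term)                               # term frequency in doc
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

Ranking a corpus for a query is then just sorting documents by this score, which is the relevance ranking Boolean retrieval lacks.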
Green Computing, eco trends, climate change, e-waste and eco-friendlyEditor IJCATR
This document discusses green computing practices and sustainable IT services. It provides an overview of factors driving adoption of green computing to reduce costs and environmental impact of data centers, such as rising energy costs and density. Green strategies discussed include improving infrastructure efficiency, power management, thermal management, efficient product design, and virtualization to optimize resource utilization. The document examines how green computing aims to lower costs and environmental footprint, and how sustainable IT services take a broader approach considering economic, environmental and social impacts.
Policies for Green Computing and E-Waste in NigeriaEditor IJCATR
Computers today are an integral part of individuals' lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance they assign to various attributes differs, and a more environment-friendly attitude can be fostered through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria, highlight a series of measures and the advantages they herald for our country, and propose a series of action steps to develop these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal, as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...Editor IJCATR
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among moving vehicles and between vehicles and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links, and the cluster arrangement, in terms of size and geographical extent, seriously influences the quality of communication. VANETs are a subclass of mobile ad hoc networks with more complex mobility patterns; because of this mobility, the topology changes very frequently, which raises a number of technical challenges including network stability. There is therefore a need for cluster configurations that lead to a more stable, realistic network. The paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their numbers are varied to find the more stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and the results of these tests are then converted into training data. The training data used in this study were generated from the UCI Pima database, with 6 attributes used to classify diabetes as positive or negative. Among the various classification methods commonly used, three were compared in this study on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC). The objective was to create software to classify DM using the tested methods and to compare the three methods based on accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy reaching 96% and 98%, respectively. In second place, the NBC method had respective average and maximum accuracies of 87.5% and 90%. Lastly, the C4.5 algorithm had average and maximum accuracies of 79.5% and 86%, respectively.
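The three evaluation measures used in this comparison come straight from the confusion matrix; a small helper makes the definitions concrete (the interface is generic, not from the paper):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall from true vs. predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real positives, how many found
    return accuracy, precision, recall
```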
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Research in the field of competitive intelligence and research in the field of web scraping have a mutually symbiotic relationship. In today's information age, websites serve as a main data source. This research focuses on how to get data from websites and how to slow down the download intensity. One problem is that source websites are autonomous, so their content structure is vulnerable to change at any time; another is that the Snort intrusion detection system installed on servers can detect crawler bots. The researchers therefore propose the Mining Data Records (MDR) method and the exponential smoothing method, so that the system adapts to changes in content structure and browses or fetches automatically, following the pattern of news occurrences. In tests with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, recall and precision values produce an average f-measure of 92.6%. The exponential smoothing estimation using α = 0.5 produces an MAE of 18.2 duplicate data records, slowing downloads to 3.6 data records from the 21.8 data records of a fixed download/fetch schedule, in the average time between news occurrences.
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using single ontology. The ontology-based semantic similarity techniques such as structure-based semantic similarity techniques (Path Length Measure, Wu and Palmer’s Measure, and Leacock and Chodorow’s measure), information content-based similarity techniques (Resnik’s measure, Lin’s measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen’s measure (SimDist)) were evaluated relative to human experts’ ratings, and compared on sets of concepts using the ICD-10 “V1.0” terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in single ontology, and demonstrate that SemDist semantic similarity techniques, compared with the existing techniques, gives the best overall results of correlation with experts’ ratings.
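The structure-based measures evaluated here are short formulas over an is-a taxonomy. As an illustration, the sketch below implements the Wu and Palmer measure, 2·depth(LCS)/(depth(c1)+depth(c2)), on a tiny hypothetical hierarchy; the concept names are made up for the example and are not actual ICD-10 or UMLS entries.

```python
# Hypothetical is-a fragment (child -> parent); names are illustrative only.
PARENT = {
    "diabetes": "endocrine_disorder",
    "hypothyroidism": "endocrine_disorder",
    "endocrine_disorder": "disease",
    "disease": "root",
}

def ancestors(c):
    """Path from a concept up to the taxonomy root (inclusive)."""
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def depth(c):
    return len(ancestors(c))  # the root has depth 1

def wu_palmer(c1, c2):
    """Wu & Palmer similarity: 2*depth(LCS) / (depth(c1) + depth(c2))."""
    anc2 = set(ancestors(c2))
    lcs = next(a for a in ancestors(c1) if a in anc2)  # least common subsumer
    return 2 * depth(lcs) / (depth(c1) + depth(c2))
```

The path-length measure is the even simpler variant that counts edges between the two concepts through the LCS; information-content measures like Resnik's replace depth with corpus-derived probabilities.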
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
Techniques and tests are the tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery, and most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we measure the semantic similarity between two terms within a single ontology or across multiple ontologies, using ICD-10 “V1.0” as the primary source, and compare our results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
Adding an aggregate storage module is an effective way to improve the storage access performance of small files in OpenStack Swift. Because Swift incurs excessive disk operations when querying metadata, its transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and then stores objects in volumes; these volumes are large files that are actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
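The merge-storage plus index-storage idea can be sketched in a few lines: small objects are appended to one large volume, and a key-value index records where each object lives. This is a simplified stand-in for the described design (in-memory, single volume, no Swift API), not the paper's implementation.

```python
import io

class AggregatedStore:
    """Sketch of ASS-style aggregation: small objects are appended to a
    large volume; a key-value index maps object name -> (offset, length)."""

    def __init__(self):
        self.volume = io.BytesIO()   # stands in for one large Swift object
        self.index = {}              # object name -> (offset, length)

    def put(self, name, data):
        offset = self.volume.seek(0, io.SEEK_END)  # append at end of volume
        self.volume.write(data)
        self.index[name] = (offset, len(data))

    def get(self, name):
        offset, length = self.index[name]          # one index lookup,
        self.volume.seek(offset)                   # one ranged read
        return self.volume.read(length)
```

The win is that reading a small object costs one index lookup plus one ranged read of the volume, instead of a per-object metadata query on disk.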
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of a government's cash resources rely on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems in handling government receipts and payments. In 2016, Nigeria implemented a unified structure, as recommended by the IMF, in which all government funds are collected in one account, to reduce borrowing costs, extend credit and improve the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO and NPF) charged with vehicular administration and activities in Nigeria. Since the system is web based, the Object Oriented Hypermedia Design Methodology (OOHDM) is used, with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web development technologies. The result is a web-based system that gives proper information about a vehicle, from the exact date of importation to registration and license renewal. Vehicle owner information, customs duty information, plate number registration details, etc. can also be retrieved efficiently from the system by any of the agencies without contacting another agency. The number plate will also no longer be the only means of vehicle identification, as is presently the case in Nigeria, because the unified system automatically generates and assigns a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle on payment of duty, and the UVIPN is linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of goods and services, including both sales and purchases of items. The Supermarket Management System project is to be developed with the objective of making the system reliable, easier, faster, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in a Wireless Sensor Network (WSN)[1]: the system cannot run without adequate power units, and limited energy is one of the defining characteristics of wireless sensor networks[2]. Much research has been done on strategies to overcome this problem, one of which is clustering. A popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3], in which clustering is used to determine Cluster Heads (CHs) that are then assigned to forward packets to the Base Station (BS). In this research, we propose another clustering technique, which uses the Betweenness Centrality (BC) measure from social network analysis and is implemented in the setup phase; in the steady-state phase, a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is implemented. The experiment deployed 100 static nodes in a 100x100 area with one Base Station at coordinates (50,50), and ran for 5000 rounds to assess system reliability. The performance of the designed routing protocol is evaluated on network lifetime, throughput, and residual energy. The results show that BC-MBDA* outperforms LEACH. This is due to the way LEACH determines CHs dynamically: the CH changes in every data transmission round, which costs energy because a computation to determine the CH is performed for every transmission. In BC-MBDA*, by contrast, the CH is determined statically, which decreases energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers have thrown up new challenges for legacy networks and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Networking (SDN), which decouples the control plane from the data plane through network virtualization, aims to address these challenges. This paper explores the SDN architecture and its implementation with the OpenFlow protocol. It also assesses some of SDN's benefits over traditional network architectures, its security concerns and how they can be addressed in future research, and related work in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling in the "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depends on the system administrator, who manually reads every incoming report [3]. Manual reading can lead to errors in handling complaints [4]: when the data flow is huge and grows rapidly, at least three days are needed to prepare a confirmation, and the process is sensitive to inconsistencies [3]. In this study, the authors propose a model that measures the similarity of an incoming query against archived documents. The authors employ a class-based indexing term weighting scheme and cosine similarity to analyse document similarities. The CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values are used as features for a K-Nearest Neighbour (K-NN) classifier. The optimum evaluation result uses a 75% training / 25% test data split with the CoSimTFIDF feature, delivering a high accuracy of 84%; with k = 5, accuracy reaches 84.12%.
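The TF-IDF-weighted cosine similarity underlying the CoSimTFIDF feature can be sketched directly; class-based variants swap the document-frequency term for class frequencies. Tokenization and weighting details here are generic assumptions, not the paper's exact scheme.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weight per term per document (docs are token lists)."""
    N = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log(N / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity of two sparse term-weight vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

A K-NN classifier over these similarities simply routes an incoming complaint to the majority class among its k most similar archived reports.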
Hangul Recognition Using Support Vector MachineEditor IJCATR
Recognizing Hangul images is more difficult than recognizing Latin script, as can be seen from the structural arrangement: Hangul is arranged in two dimensions, while Latin runs only from left to right. The current research creates a system to convert Hangul images into Latin text, to be used as learning material for reading Hangul. In general, the image recognition system is divided into three steps. The first is preprocessing, which includes binarization, segmentation through a connected-component labeling method, and thinning with Zhang-Suen to reduce pattern information. The second is extracting the features from each image, identified through the chain code method. The third is recognition using a Support Vector Machine (SVM) with several kernels, applied to both letter images and Hangul word recognition. There are 34 letters, each with 15 different patterns, for 510 patterns in total, divided into 3 data scenarios. The highest result achieved is 94.7%, using SVM with polynomial and radial basis function kernels; the recognition rate is influenced by the amount of training data. The Hangul word recognition process applies to type 2 Hangul words with 6 different patterns, which differ by font type. The fonts chosen for training data are Batang, Dotum, Gaeul, Gulim and Malgun Gothic, with Arial Unicode MS used for testing. The lowest accuracy, 69%, is achieved with the SVM radial basis function kernel, while the linear and polynomial kernels both give 72%.
Application of 3D Printing in EducationEditor IJCATR
This paper reviews the literature on the application of 3D printing in education. The review identifies that 3D printing is being applied across educational levels [1] as well as in libraries, laboratories, and distance education systems, and that it is being used to teach both students and trainers about 3D printing and to develop 3D printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In the underwater environment, routing mechanisms are used to retrieve information. A routing mechanism uses three to four types of nodes: sink nodes, deployed on the water surface to collect information; courier/super/AUV (or dolphin) powerful nodes, deployed in the middle of the water to forward packets; ordinary nodes, which are also forwarder nodes and can be deployed from the bottom to the surface of the water; and source nodes, deployed at the seabed to extract valuable information from the bottom of the sea. In the underwater environment the battery power of nodes is limited, and it can be conserved through better selection of the routing algorithm. This paper examines energy-efficient routing algorithms and their routing mechanisms for prolonging node battery power, and analyses their performance to identify the route selection mechanism that best prolongs it.
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
Designing routing algorithms for the underwater environment faces many challenges: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, 3D deployment and localization, and underwater obstacles (voids). This paper focuses on underwater voids, which affect the overall performance of the entire network. Most researchers have approached void removal through alternate-path selection mechanisms, but the research still needs improvement. This paper also examines the architecture and operation of existing algorithms through their merits and demerits, and analyses their performance analytically to identify the better approach for removing voids.
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
R^n (n ≥ 1), which is of regularity-loss type. By using spectral resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to these pointwise estimates, we obtain the
global existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Nordic Marketo Engage User Group_June 13_ 2024.pptx
Propose a Method to Improve Performance in Grid Environment, Using Multi-Criteria Decision Making Techniques
International Journal of Computer Applications Technology and Research
Volume 3, Issue 6, 353-357, 2014
www.ijcat.com
Propose a Method to Improve Performance in Grid Environment, Using Multi-Criteria Decision Making Techniques
Robabeh Parvaneh
Department of Computer Science and Research Branch, Khorasan Razavi, Islamic Azad University, Neyshabur, Iran
Ali Harounabadi
Islamic Azad University, Izeh Branch, Izeh, Iran
Abstract: The most important purpose of grid networks is resource sharing in a dynamic and heterogeneous environment, accessible through various methods; this sharing has mainly computational, scientific and other applications. To reach the grid's goals and use the resources available in the grid environment, subtasks are distributed among resources and scheduled with quality of service in mind, the aim being to distribute subtasks so that maximum QoS is obtained. This study presents a method that takes three parameters into account: the send/transfer time between the RMS and a resource, the processing time of a subtask on that resource, and the load of tasks already waiting in the resource's queue. A multi-criteria decision is made with the TOPSIS method, and the resulting priorities over the resources determine their assignment to subtasks. In this way response time, as an efficiency parameter, is improved through the optimal assignment of resources to subtasks.
Keywords: Grid network, multi-criteria decision making, response time, Petri net, TOPSIS
1. Introduction
Grid networks are composed of a set of heterogeneous computers, non-exclusively connected to each other through a connection protocol and a grid management system. The main purpose of the grid is to share common resources such as processor power and bandwidth, and to make them accessible as if from a single central computer. Computing grids are currently used widely in developed countries to prevent the waste of resources and to use them optimally, and thereby to avoid the heavy expense of dedicated computing power. The most important purpose of grid networks is resource sharing in a dynamic and heterogeneous environment, accessible through various methods; this sharing has mainly computational, scientific and other applications [9]. A computing grid environment is suitable for solving problems that require long and complex computations [6]. The main purpose of grid networks is to provide services with high efficiency, high reliability and low cost for many users, and to support cooperative tasks. Efficiency matters in the grid, and increasing it requires proper, efficient scheduling; the dynamic nature of grid resources and the varied demands of users make grid scheduling complex. The purpose of grid scheduling is to assign tasks to resources optimally [2]. In this paper, a method is presented to decrease response time as an efficiency parameter. Using multi-criteria decision making and the TOPSIS method, resource priorities are determined for the subtasks, and the makespan of the system is thereby decreased. Decision making involves stating goals, evaluating their feasibility and the outcomes of executing each solution, then selecting and executing one. In a multi-criteria decision making method, several criteria can be used to select the better alternative, instead of a single optimality criterion [3]. The proposed method is evaluated and simulated using CPN Tools.
2. Background
2.1 Grid Environment
A grid is "a wide network having high computing power and the ability to connect to the Internet". A grid is not composed only of special, homogeneous computers; it is a set of computers distributed across various levels of the Internet or an intranet, non-exclusively connected to each other through a connection protocol and a grid management system. In other words, the grid can reduce the execution time of tasks that would otherwise last several hours to just a few seconds. A grid is a set of interconnected resources, together with applications for carrying out work. The term "grid" was first used in the 1990s to refer to a distributed computing infrastructure for engineering and the advanced sciences. Grid concepts and technologies have been used to provide resource sharing between scientific units, the aim being to use the grid's resources to solve complex and difficult problems [5]. In a grid environment, tasks are not executed individually on a single system; rather, each task is divided into subtasks, and each subtask is sent to a resource that is a member of the grid. The available resources are connected in a network by communication links: a link provides information exchange between two computers, and the link topology determines the connection structure between them. Various link topologies have been considered in grid systems, such as star, tree, ring and combined topologies [4]. In this paper, the star topology is used: the RMS is placed at the center of the system, and all resources are connected to it by links [6]. After receiving a task from the user, the RMS divides it into subtasks. A redundancy technique is used when allocating resources to subtasks: the RMS assigns each subtask to more than one resource, so that each subtask is allocated to two or more resources, but each resource processes only one subtask [4]. Petri nets are appropriate tools for graphical modeling on the basis of mathematical logic; although a Petri net is graphical, its mathematical foundation is strong. Petri nets are a formal modeling method for analyzing and describing systems with distributed, concurrent, parallel or stochastic characteristics. An important property of Petri nets is that they are executable: unlike UML, analysis and implementation are carried out simultaneously, so the behavior and performance of a system can be evaluated at the same time [1].
3. Literature Review
Azghomi and colleagues [4] modeled task distribution and computed reliability in grid networks with a star topology, using timed and colored Petri nets for their analysis and implementation. Task scheduling is important in grid environments in order to reach the desired quality level. In their study, the grid is based on a resource management system that receives tasks from users and divides them into subtasks; each subtask is then transferred to one or more available resources (the redundancy technique). After execution, each set of resources processing the same subtask sends its results to one location, and among the randomly selected resources, the one that executes the related subtask fastest is identified. Finally, the maximum over subtasks is computed to obtain the total time of the task. These operations are simulated with timed and colored Petri nets, and reliability is computed.
Parsa and colleagues [10] proposed a scheduling algorithm called RASA, based on the two well-known Max-min and Min-min scheduling algorithms: it keeps their advantages while removing their disadvantages. RASA applies the two algorithms alternately on the basis of estimated completion times. First, the algorithm builds a matrix of the completion time of task Ti on resource Rj. If the number of accessible resources is odd, the Min-min strategy is used first; otherwise, Max-min is applied first. The remaining tasks are then assigned to resources by the two strategies in alternation. One advantage of this method is that it provides better load balance than either of the two algorithms alone, and RASA performs better than comparable algorithms in distributed systems.
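The RASA alternation described above can be sketched as follows. This is our reading of the survey text, not the authors' code; the ETC (expected time to compute) values in the example are illustrative.

```python
def rasa_schedule(etc):
    """Schedule tasks given etc[i][j] = expected time of task i on resource j."""
    num_resources = len(etc[0])
    ready = dict(enumerate(etc))          # unscheduled tasks
    load = [0.0] * num_resources          # accumulated finish time per resource
    use_min_min = num_resources % 2 == 1  # odd resource count -> start with Min-min
    schedule = []
    while ready:
        # best resource for each ready task, given the current loads
        best = {t: min(range(num_resources), key=lambda j: load[j] + row[j])
                for t, row in ready.items()}
        # Min-min takes the task with the smallest minimum completion time,
        # Max-min the task with the largest; RASA alternates between them.
        choose = min if use_min_min else max
        task = choose(ready, key=lambda t: load[best[t]] + ready[t][best[t]])
        j = best[task]
        load[j] += ready[task][j]
        schedule.append((task, j))
        del ready[task]
        use_min_min = not use_min_min
    return schedule, max(load)            # assignments and makespan
```

For the 3-task, 2-resource matrix `[[3, 5], [2, 4], [6, 1]]` (even resource count, so Max-min goes first), this sketch yields a makespan of 5.0, with the Max-min step claiming task 0 first.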
Kokilavani and colleagues [7] improved the Min-min algorithm. When the number of small tasks in a meta-task is smaller than the number of large tasks, Min-min does not perform well: it increases the makespan of the system and fails to balance the load across it. They presented a two-phase algorithm called LBMM: in the first phase, Min-min is applied, while in the second phase tasks are rescheduled so as to reuse the resources effectively. Their algorithm decreases makespan and increases the utilization of the resources.
Meibody and colleagues [8] presented a resource-scheduling scheme to optimize scheduling in a computing grid. Based on a classification of demands, three levels (home, local, logical) are considered, each with its own function for receiving subtasks and delivering them to the lower or higher layers; the resources are connected to each other by a hierarchical network of these three levels.
Saadi and colleagues [13] proposed an algorithm for scheduling independent tasks in a computing grid. They presented a weighted objective function for this scheduler that takes into account the importance of the time and cost of the work submitted by users.
Parsa and colleagues [11] proposed a new classification for estimating the reliability of a service, and what to expect when computing service-provision time in the presence of faults in the grid system.
4. Proposed Method
The purpose of scheduling in a grid environment is to reach the maximum quality of service. Quality has various aspects; the most important service-quality parameters are efficiency, load balance, reliability, cost and time, or a combination of them. To optimize and improve the response time in the presented model, three parameters are considered: the transfer time between the RMS and resource j for executing subtask i, the processing time of subtask i on resource j, and the load already waiting in each resource's queue. These are placed in a decision-making matrix. After unscaling (normalization), weighting and deciding among them, a priority over the resources is obtained separately for each subtask. Assuming that the subtask with less data has higher priority in selecting a resource (because it takes less time to execute), we allocate the resources to the subtasks, and as a result the minimum response time is obtained.
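As a hypothetical illustration of such a decision matrix, the sketch below scores each candidate resource on the three cost criteria for a single subtask. The `bandwidth`, `speed` and `queue_load` fields and all numbers are our assumptions for illustration, not values from the paper.

```python
def decision_matrix(subtask_size, resources):
    """One row per resource: [transfer_time, processing_time, queue_load]."""
    return [
        [subtask_size / r["bandwidth"],   # send/transfer time between RMS and resource
         subtask_size / r["speed"],       # processing time of the subtask on the resource
         r["queue_load"]]                 # load already waiting in the resource's queue
        for r in resources
    ]

resources = [
    {"bandwidth": 10.0, "speed": 5.0, "queue_load": 2.0},
    {"bandwidth": 4.0,  "speed": 8.0, "queue_load": 0.5},
]
print(decision_matrix(20.0, resources))
# -> [[2.0, 4.0, 2.0], [5.0, 2.5, 0.5]]
```

All three columns are costs (smaller is better), which is what the weighting and ranking steps in Section 4.2 then operate on.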
4.1 Modeling and Scheduling Subtasks
Figure 1. Modeling on the basis of Petri nets
According to [4] and Figure 1, a token is first placed in the PRMS place. This token, which represents a task, is divided into subtasks after passing through the fragmentation transition (Tfrag). The tokens (subtasks) are then allocated to several resources on the basis of the redundancy technique. Each subtask selects the optimal resource using three parameters (the data transfer time between the RMS and the resource, the processing time of the subtask on the resource, and the load of work waiting in the resource's queue) together with multi-criteria decision making. Whereas in [4] the resources are made accessible to the subtasks randomly, here, after passing the selection transition (Tselect) and choosing the appropriate resources for the subtasks, the subtasks are transferred to those resources through the distribution transition, and the tokens are placed in the Psi place.
4.2 Allocating Subtasks to Resources Using a Multi-Criteria Decision-Making Model
Decision-making models are divided into two groups: multi-objective models and multi-criteria models. Multi-objective models are used for design, while multi-criteria models are used to select the better alternative. In this method, a multi-criteria model is used to select the appropriate resource, and among the multi-criteria methods we adopt TOPSIS. According to [3], unscaling (normalization) is needed in order to compare different measurement scales: the elements of the index matrix (nij) are thereby made dimensionless. In MCDM, and especially MADM, we must also know the relative importance of the available indexes (objectives); the weights are normalized so that their sum equals one, and the relative priority of each index against the others is measured for decision making. In this study, the entropy technique is used to weight the indexes, and TOPSIS is used to select the suitable resource. TOPSIS considers the distance of alternative Ai from the ideal point as well as its distance from the negative-ideal point: the selected alternative should have the minimum distance from the ideal solution and the maximum distance from the negative-ideal solution. We use this method to prioritize the resources for each subtask. After converting the decision-making matrix to an unscaled (normalized) matrix, we determine the ideal solution and the negative-ideal solution.
Afterwards, we compute the separation (distance) measures: the distance of the i-th alternative from each ideal is obtained by the Euclidean method.

The ideal option ($A^+$) and the negative-ideal option ($A^-$) are defined over the weighted normalized values $v_{ij}$, where $J$ denotes the benefit criteria and $J'$ the cost criteria:

$A^+ = \{ (\max_i v_{ij} \mid j \in J), (\min_i v_{ij} \mid j \in J') \}$

$A^- = \{ (\min_i v_{ij} \mid j \in J), (\max_i v_{ij} \mid j \in J') \}$

Then the separation of the i-th alternative from the ideal and negative-ideal options is calculated by the Euclidean method:

$S_i^+ = \{ \sum_j (v_{ij} - v_j^+)^2 \}^{1/2}$

$S_i^- = \{ \sum_j (v_{ij} - v_j^-)^2 \}^{1/2}$

Finally, we compute the relative closeness of $A_i$ to the ideal solution,

$C_i = S_i^- / (S_i^+ + S_i^-), \quad 0 \le C_i \le 1,$

and rank the alternatives of the problem accordingly: the alternative with the largest closeness is chosen.
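The whole procedure (vector normalization, entropy weights, ideal and negative-ideal points, Euclidean separations, relative closeness) can be sketched as follows. This is a generic TOPSIS sketch, not the authors' implementation; since all three criteria in this paper are costs (smaller is better), the ideal here takes column minima and the negative ideal takes column maxima.

```python
import math

def topsis_rank(matrix):
    """Rank alternatives (rows) over cost criteria (columns), best first."""
    m, n = len(matrix), len(matrix[0])
    # 1. vector normalization: n_ij = x_ij / sqrt(sum_i x_ij^2)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    nd = [[row[j] / norms[j] for j in range(n)] for row in matrix]
    # 2. entropy weights from column proportions p_ij = x_ij / sum_i x_ij
    k = 1.0 / math.log(m)
    cols = [sum(row[j] for row in matrix) for j in range(n)]
    dived = []
    for j in range(n):
        p = [row[j] / cols[j] for row in matrix]
        entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        dived.append(1.0 - entropy)       # degree of diversification
    w = [d / sum(dived) for d in dived]
    # 3. weighted normalized matrix; ideal = column min, negative ideal = max
    v = [[w[j] * nd[i][j] for j in range(n)] for i in range(m)]
    a_pos = [min(v[i][j] for i in range(m)) for j in range(n)]
    a_neg = [max(v[i][j] for i in range(m)) for j in range(n)]
    # 4. Euclidean separations and relative closeness C_i
    closeness = []
    for i in range(m):
        s_pos = math.sqrt(sum((v[i][j] - a_pos[j]) ** 2 for j in range(n)))
        s_neg = math.sqrt(sum((v[i][j] - a_neg[j]) ** 2 for j in range(n)))
        closeness.append(s_neg / (s_pos + s_neg))
    # larger closeness means nearer the ideal; rank resources accordingly
    return sorted(range(m), key=lambda i: closeness[i], reverse=True)
```

For the two illustrative resource rows `[[2.0, 4.0, 2.0], [5.0, 2.5, 0.5]]`, the second resource ranks first: its much lower queue load dominates once entropy weighting gives that column the largest weight.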
5. Case Study
In this section, the results of the two proposed methods are simulated and analyzed. The methods are first modeled with colored Petri nets and then simulated with CPN Tools; the results are compared with [1] and [4]. Suppose that, in the grid environment, the task entering the RMS is divided into two subtasks characterized by computational complexity and required data volume. Suppose also that there are four resources with given characteristics such as processing speed, bandwidth and probability of failure-free processing (p), while the communication links may fail during transfer (with probability q). The redundancy technique is followed, which requires the number of subtasks to be smaller than the number of accessible resources; after dividing the task into subtasks, each subtask is allocated to more than one resource, but each resource processes only one subtask. To improve the response time, three parameters are considered: the data transfer time between the RMS and resource Rj for executing subtask Si, the processing time of subtask Si on resource Rj, and the waiting load in each resource's queue (q). They are computed by the equations given in [4].
A balance between the three parameters is achieved, and they are weighted, by using multi-criteria decision making and TOPSIS. After placing them in the decision-making matrix, we obtain the resource priorities for S1 and S2. Since subtask S2 has less data, it is given selection priority. In this way, R2 and R3 are selected for S1, and R1 and R4 are selected for S2; the selected scenario therefore has the minimum response time (6.4), which is the best one. As already mentioned, in [4] the resources are selected randomly, the load waiting at the resources is not considered, and the result is not a scenario with an optimized response time; [1] tries to improve reliability but presents no method for improving response time. It should be mentioned that, in the proposed method, reliability remains high. With the method of [4], there are different scenarios for allocating subtasks to resources; since they are selected randomly, different configurations can be observed. Here we analyze six such scenarios.
The first scenario: subtask S1 selects resources … and …, and subtask S2 selects resources … and ….
The second scenario: subtask S1 selects resources … and …, and subtask S2 selects resources … and ….
The third scenario: subtask S1 selects resources … and …, and subtask S2 selects resources … and ….
The fourth scenario: subtask S1 selects resources … and …, and subtask S2 selects resources … and ….
The fifth scenario: subtask S1 selects resources … and …, and subtask S2 selects resources … and ….
The sixth scenario: subtask S1 selects resources … and …, and subtask S2 selects resources … and ….
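Under our reading of the redundancy model above, the response time of a scenario is determined by its slowest subtask, where each subtask finishes as soon as its fastest replica does. A minimal sketch; the times below are made-up illustrations, not the paper's measured values.

```python
def scenario_response_time(times, scenario):
    """times[(s, r)] = total (queue + transfer + processing) time of subtask s
    on resource r; scenario maps each subtask to its replica resources."""
    return max(
        min(times[(s, r)] for r in replicas)   # fastest replica finishes the subtask
        for s, replicas in scenario.items()    # the slowest subtask dominates
    )

times = {("S1", "R2"): 7.0, ("S1", "R3"): 8.5,
         ("S2", "R1"): 5.0, ("S2", "R4"): 6.4}
print(scenario_response_time(times, {"S1": ["R2", "R3"], "S2": ["R1", "R4"]}))
# -> 7.0
```

Evaluating this quantity for each of the six scenarios and taking the smallest value is what singles out the scenario with the minimum response time.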
A comparison of the response times of the above scenarios under the proposed and the previous method is shown in Figure 2.
Figure 2. Comparison diagram
After determining the parameters of the subtasks, we simulated the model with the CPN Tools software. The results of this simulation are demonstrated in Figures 3 and 4.
Figure 3. System simulation
Figure 4. Subsystem simulation of allocating resources to subtasks
6. CONCLUSION
In a grid, various types of computers with different capabilities and operating systems can be found. Open …
[Figure 2 data: makespan (0-40) versus number of resources (4, 6, 8, 10), comparing the previous method and the proposed method]