This document summarizes an article that proposes applying an Application Level Scheduling system (AppLeS) with quality of service (QoS) considerations for grid computing. It discusses how AppLeS measures application performance on resources to make scheduling decisions, but notes that AppLeS lacks resource management; the authors propose an AppLeS architecture with a resource manager to address this. They also propose a Page Fault Frequency replacement algorithm to prevent AppLeS "thrashing" by reallocating pages based on fault-frequency measurements.
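The page-fault-frequency idea can be sketched as follows; the thresholds, window length, and function names here are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of a Page Fault Frequency (PFF) style policy: the thresholds
# and the measurement window are illustrative assumptions.

def adjust_allocation(frames, faults_in_window, window_len,
                      upper=0.5, lower=0.1):
    """Grow the frame allocation when the fault rate is high,
    shrink it when the rate is low; otherwise leave it unchanged."""
    fault_rate = faults_in_window / window_len
    if fault_rate > upper:        # thrashing: give the process more frames
        return frames + 1
    if fault_rate < lower:        # over-allocated: reclaim one frame
        return max(1, frames - 1)
    return frames
```

Measuring the fault rate over a sliding window, rather than reacting to single faults, is what lets the policy distinguish genuine thrashing from transient bursts.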
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ... (Editor IJCATR)
This document summarizes a research paper that proposes an optimized ant colony optimization (ACO) algorithm for task scheduling in cloud computing. The goal is to minimize makespan and cost while improving fairness and load balancing. The ACO algorithm is adapted to prioritize and fairly allocate tasks to machines based on their performance. Simulations show the proposed ACO algorithm reduces makespan by 80% compared to Berger and greedy algorithms. It also increases processor utilization and balances loads across machines better than the other algorithms. The researchers conclude the optimized ACO approach improves resource usage and user satisfaction for task scheduling in cloud computing.
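The ant-colony construction loop described above can be sketched as follows; the task lengths, machine speeds, and all ACO parameters are illustrative assumptions, not the paper's values:

```python
import random

# Minimal ant-colony sketch for mapping tasks to machines: ants build
# assignments biased by pheromone and a 1/exec-time heuristic, and the
# best makespan found reinforces its trail.

def aco_schedule(task_len, speed, n_ants=10, n_iter=30,
                 alpha=1.0, beta=2.0, rho=0.5, seed=0):
    rng = random.Random(seed)
    n_t, n_m = len(task_len), len(speed)
    tau = [[1.0] * n_m for _ in range(n_t)]          # pheromone trails
    best, best_span = None, float("inf")
    for _ in range(n_iter):
        for _ant in range(n_ants):
            load = [0.0] * n_m
            assign = []
            for t in range(n_t):
                # desirability: pheromone^alpha * (1/exec_time)^beta
                w = [tau[t][m] ** alpha * (speed[m] / task_len[t]) ** beta
                     for m in range(n_m)]
                m = rng.choices(range(n_m), weights=w)[0]
                load[m] += task_len[t] / speed[m]
                assign.append(m)
            span = max(load)                          # makespan of this ant
            if span < best_span:
                best, best_span = assign, span
        # evaporate, then reinforce the best assignment found so far
        for t in range(n_t):
            for m in range(n_m):
                tau[t][m] *= (1 - rho)
            tau[t][best[t]] += 1.0 / best_span
    return best, best_span
```

Fairness and cost terms, as in the paper, would enter through extra factors in the desirability weights; this sketch optimizes makespan only.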
Fault Tolerant Scheduling System for Computational Grid (Ghulam Asfia)
This document presents a fault tolerant scheduling system for computational grids. It introduces a new factor called the scheduling indicator (SI) that considers both a resource's response time and fault rate. The system aims to improve grid reliability by avoiding resources that frequently fail. It consists of five main components: a grid portal, scheduler, resource information server, fault handler, and grid resources. The scheduler calculates the SI for each job-resource pair to select the most reliable resource for task execution.
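A scheduling-indicator style score can be sketched as below; the weighted sum and the weights are assumptions, since the paper defines its own SI formula:

```python
# Hedged sketch: combine a resource's response time and observed fault rate
# into one score, then pick the resource with the lowest score.

def scheduling_indicator(response_time, fault_rate, w_time=0.5, w_fault=0.5):
    """Lower SI = more attractive resource: fast AND rarely failing."""
    return w_time * response_time + w_fault * fault_rate

def pick_resource(resources):
    """resources: dict name -> (response_time, observed fault rate)."""
    return min(resources,
               key=lambda r: scheduling_indicator(*resources[r]))
```

The point of folding the fault rate into the score is that a slightly slower but reliable resource can beat a fast one that frequently fails.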
Dynamically Adapting Software Components for the Grid (Editor IJCATR)
The emergence of dynamic execution environments such as grids forces scientific applications to embrace dynamicity. Dynamic adaptation of grid components in grid computing is a critical issue for the design of a framework for dynamic adaptation towards self-adaptable software components for the grid. This paper carries out the systematic design of a dynamic adaptation framework with an effective implementation of the adaptable component structure, i.e. incorporating a layered architecture environment with the concept of dynamicity.
IMPACT OF RESOURCE MANAGEMENT AND SCALABILITY ON PERFORMANCE OF CLOUD APPLICA... (IJCSEA Journal)
Cloud computing enables service providers to rent out their computing capabilities for deploying applications according to user requirements. Cloud applications have diverse composition, configuration, and deployment requirements. Quantifying the performance of applications in cloud computing environments is a challenging task. In this paper, we try to identify the various parameters associated with the performance of cloud applications and analyse the impact of resource management and scalability on them.
This document proposes a fair scheduling algorithm with dynamic load balancing for grid computing. It begins by introducing grid computing and the need for efficient load balancing algorithms to distribute tasks. It then describes dynamic load balancing approaches, including information, triggering, resource type, location, and selection policies. The proposed algorithm uses a fair scheduling approach that assigns tasks to processors based on their estimated fair completion times to ensure tasks receive equal shares of computing resources. It also includes a dynamic load balancing component that migrates tasks between processors to maintain balanced loads across all resources. Simulation results demonstrated the algorithm achieved balanced loads across processors and reduced overall task completion times.
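The fair-completion-time idea can be sketched as follows; the equal-share capacity model and greedy least-loaded placement are illustrative assumptions, not the paper's exact algorithm:

```python
# Hedged sketch: rank tasks by the finish time they would get if the grid's
# total capacity were shared equally, then greedily place each on the
# least-loaded processor.

def fair_schedule(task_sizes, n_procs, proc_speed=1.0):
    total_capacity = n_procs * proc_speed
    # fair completion time: size / (equal share of total capacity)
    fair_ct = sorted(range(len(task_sizes)),
                     key=lambda i: task_sizes[i] * len(task_sizes) / total_capacity)
    load = [0.0] * n_procs
    placement = {}
    for i in fair_ct:                      # smallest fair CT first
        p = load.index(min(load))          # least-loaded processor
        placement[i] = p
        load[p] += task_sizes[i] / proc_speed
    return placement, load
```

The dynamic load-balancing component from the summary would then migrate tasks between processors whenever the `load` vector drifts out of balance at runtime.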
Resource Optimization of Construction Project Using Primavera P6 (IOSRJMCE)
Construction projects are unique in nature, having their own difficulties, uncertainties and risks, posing never-ending questions concerning resources and costs. There is always a conflict between ‘how much will it cost?’ and ‘where to raise the finances from?’. The success of a project depends upon the efficiency with which the project management gets the work done by utilizing the planned resources of men, materials, machinery, money and time. In large-scale projects, preparing an accurate and workable plan is very difficult. Resources are required to carry out specific tasks in a project, but the availability of resources within a given firm is always limited. While preparing the schedule structure, the project manager might schedule certain tasks in parallel; in such cases it is possible that the same resource is required by both parallel tasks while its availability is limited. This paper emphasises how the project manager can resolve such conflicts by using resource balancing in modern software such as Primavera P6 R8.3 to reduce laborious computations. The resource balancing techniques of smoothing and leveling are investigated in detail, and a case study is used to portray how resource balancing can be done in Primavera P6 and what its effects are on the duration and cost of the entire project.
Management of context aware software resources deployed in a cloud environmen... (ijdpsjournal)
This document discusses a new scheduling algorithm proposed for managing requests for context-aware software deployed in a cloud computing environment. The algorithm aims to improve the performance of servers hosting high-demand context-aware applications while reducing cloud providers' costs. It does this by classifying similar context requests and dynamically scoring requests, with the goal of processing requests for similar context data in parallel to reduce response times. The algorithm is evaluated through simulation and found to improve efficiency compared to the gi-FIFO scheduling algorithm.
Effective and Efficient Job Scheduling in Grid Computing (Aditya Kokadwar)
The integration of remote and diverse resources and the increasing computational needs of Grand Challenge problems, combined with the rapid growth of the internet and communication technologies, have led to the development of global computational grids. Grid computing is a prevailing technology that unites underutilized resources in order to support sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential in order to achieve the promised capacity of grids. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate resources with minimum cost, merging this with a conventional job grouping strategy to provide better and more efficient job scheduling that benefits both the user and the resource broker. The proposed scheduling approach employs a dynamic cost-based job scheduling algorithm to map jobs efficiently to available grid resources. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
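The grouping step mentioned above can be sketched as follows; the capacity rule and the names are illustrative assumptions, not the paper's algorithm:

```python
# Hedged sketch of job grouping before dispatch: fine-grained jobs are packed
# into coarser groups sized to a resource's capacity, so each dispatch sends
# one group instead of many tiny jobs (improving the CCR).

def group_jobs(job_lengths, group_capacity):
    groups, current, current_len = [], [], 0
    for j, length in enumerate(job_lengths):
        if current and current_len + length > group_capacity:
            groups.append(current)          # close the full group
            current, current_len = [], 0
        current.append(j)
        current_len += length
    if current:
        groups.append(current)
    return groups
```

A cost-based scheduler would then sort the available resources by price and hand each group to the cheapest resource whose capacity matches the group size.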
Cloud computing is the next generation of computation; in all probability, people will have everything they need on the cloud. Cloud computing provides resources, whether software or hardware, to clients on demand. Cloud architectures are distributed and parallel, and serve the needs of multiple clients in various situations; this distributed design deploys resources widely to deliver services efficiently to users across different geographical regions. Clients in a distributed environment generate requests randomly at any processor, and the major drawback of this randomness concerns task assignment: unequal task assignment creates imbalance, i.e., some processors are overloaded while many others are underloaded. The objective of load balancing is to transfer load from overloaded processes to underloaded ones transparently. Load balancing is one of the central issues in cloud computing: to achieve high performance, minimum response time, and a high resource-utilization ratio, tasks must be transferred from overloaded nodes to underloaded or idle nodes in the cloud network. The following sections discuss cloud computing, load balancing techniques, and the proposed load balancing system. The proposed load balancing algorithm is simulated with the Cloud Analyst toolkit; performance is analyzed in terms of overall response time, data transfer, average data center servicing time, and total cost of usage, and results are compared with three existing load balancing algorithms, namely Round Robin, Equally Spread Current Execution Load, and Throttled.
Results from the case studies performed show more data transfer with minimum response time.
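Two of the baseline policies named above can be contrasted in a few lines; the VM count and the boolean "busy" model are illustrative assumptions, simpler than Cloud Analyst's implementations:

```python
# Hedged sketch: Round Robin cycles through VMs regardless of load, while
# Throttled only dispatches to a VM that is currently below its threshold
# (here modeled as simply "not busy") and otherwise queues the request.

class RoundRobin:
    def __init__(self, n_vms):
        self.n, self.next = n_vms, 0
    def pick(self, busy):
        vm = self.next % self.n          # ignores load entirely
        self.next += 1
        return vm

class Throttled:
    def __init__(self, n_vms):
        self.n = n_vms
    def pick(self, busy):
        # first VM below its throttle threshold; None = queue the request
        for vm in range(self.n):
            if not busy[vm]:
                return vm
        return None
```

The gap between the two is exactly what a load-aware proposal tries to exploit: Round Robin keeps dispatching to overloaded VMs, while Throttled trades some queueing delay for balanced load.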
A Comparative Study of Load Balancing Algorithms for Cloud Computing (IJERA Editor)
Cloud computing is a fast-growing technology in both industry and academia. Users can access cloud services and pay based on resource usage. Balancing the load is a major task for cloud service providers, requiring minimum response time, maximum throughput, and good resource utilization. Many load balancing algorithms have been proposed to assign user requests to cloud resources in an efficient manner. In this paper, three load balancing algorithms are simulated in Cloud Analyst and the results are compared.
Adapting Software Components Dynamically for the Grid (IOSR Journals)
This document summarizes a research paper on dynamically adapting software components for grid computing. It presents a framework for dynamic adaptation that incorporates a layered architecture with concepts of dynamicity. The framework separates adaptation mechanisms from component content. It defines four steps for adaptation: observe the execution environment, decide if adaptation is needed, plan adaptation actions, and execute planned actions. The framework uses three levels - a functional level for component services, a component-independent level for adaptation mechanisms, and a component-specific level for developer customization. It evaluates using this framework to design dynamically adaptable scientific application components.
Weather and Climate Visualization software (Rahul Gupta)
The document describes a software project to develop a visualization tool for weather and climate data analysis. The tool will read netCDF files and allow users to analyze the data, perform statistical operations, generate interpolated spatial maps and images, and visualize shapefiles. The software will be developed using Java and JavaFX for the graphical user interface. It will implement design patterns like MVC and work with data formats like netCDF, shapefile, and others. The goal is to provide an easy to use tool for scientists to perform complex climate and weather data analysis and visualization without needing to write scripts.
The document discusses strategies for migrating Lotus Notes applications to Google Apps. It recommends assessing applications based on usage and complexity in order to determine suitability for migration. Key aspects that can be migrated include application functionality, templates and logic, data, and allowing co-existence of Notes and Google Apps platforms. Google Sites, Spreadsheets, Scripts, Gadgets and App Engine are identified as targets for migrating different application components and functionality.
Challenges in Dynamic Resource Allocation and Task Scheduling in Heterogeneou... (rahulmonikasharma)
This document discusses the challenges of dynamic resource allocation and task scheduling in heterogeneous cloud environments. It outlines that resource allocation involves deciding how to allocate resources to tasks to maximize utilization, while task scheduling assigns tasks to processors to minimize execution time. The major challenges are optimizing allocated resources to minimize costs while meeting customer demands and application requirements. Allocating resources dynamically in heterogeneous cloud environments is difficult due to issues like resource contention, scarcity, and fragmentation. The document also discusses approaches to resource modeling, allocation, offering, discovery and monitoring that algorithms must address to effectively allocate resources on demand.
This document discusses and compares various load balancing techniques in cloud computing. It begins by introducing load balancing as an important issue in cloud computing for efficiently scheduling user requests and resources. Several load balancing algorithms are then described, including honeybee foraging algorithm, biased random sampling, active clustering, OLB+LBMM, and Min-Min. Metrics for evaluating and comparing load balancing techniques are defined, such as throughput, overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. The algorithms are then analyzed based on these metrics.
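Of the algorithms listed above, Min-Min is compact enough to sketch; the execution-time matrix is an illustrative assumption:

```python
# Hedged sketch of the Min-Min heuristic: repeatedly pick the unscheduled
# task with the smallest minimum completion time and bind it to the machine
# that gives that time.

def min_min(etc, n_machines):
    """etc[t][m] = execution time of task t on machine m."""
    ready = [0.0] * n_machines
    unscheduled = set(range(len(etc)))
    schedule = {}
    while unscheduled:
        # for each task, its best machine and completion time right now
        best = {t: min(range(n_machines),
                       key=lambda m: ready[m] + etc[t][m])
                for t in unscheduled}
        # Min-Min: the task with the smallest best completion time goes first
        t = min(unscheduled, key=lambda t: ready[best[t]] + etc[t][best[t]])
        m = best[t]
        ready[m] += etc[t][m]
        schedule[t] = m
        unscheduled.remove(t)
    return schedule, max(ready)
```

Scheduling short tasks first keeps machines available for later work, which is why Min-Min often yields low makespan; its known weakness, relevant to the load-balancing metrics above, is that it can starve machines that are slow for most tasks.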
This document presents a policy-driven architecture for effective service allocation in cloud environments. It begins with an introduction to cloud computing and discusses challenges of scheduling client requests and allocating services as the number of clients increases. It then reviews previous research on resource allocation and process scheduling in distributed cloud systems. The paper proposes a service allocation model and policy-based architecture to address these challenges through effective identification of cloud and client characteristics. This would allow for reliable and efficient allocation of services to clients. The conclusion discusses evaluating the proposed architecture.
This document discusses resource leveling for a construction project. It begins by defining resource leveling as adjusting start and finish dates based on resource constraints to balance demand and supply. The document then describes different types of resource leveling like delaying tasks, splitting tasks, and overtime. It presents a case study of leveling resources like transit mixers and prestressing jacks on a bridge construction project. Initially some resources were overallocated, but leveling resolved the overallocations and avoided project delays. Leveling non-critical paths could cause some tasks to become critical and increase the project duration. The case study demonstrates how resource leveling in Microsoft Project can optimize a schedule.
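The delay-based leveling described above can be sketched in miniature; real tools such as Primavera P6 respect precedence links and floats, which this toy deliberately ignores:

```python
# Hedged sketch of delay-based resource leveling: each task is shifted
# later, one day at a time, until the daily demand for the shared resource
# stays within its limit.

def level(tasks, limit, horizon):
    """tasks: list of (start, duration, demand). Returns new start days."""
    usage = [0] * horizon
    starts = []
    for start, dur, demand in tasks:
        s = start
        # push the task later until every day it occupies has capacity
        while any(usage[d] + demand > limit for d in range(s, s + dur)):
            s += 1
        for d in range(s, s + dur):
            usage[d] += demand
        starts.append(s)
    return starts
```

This illustrates the trade-off noted in the case study: leveling resolves overallocations, but delaying a task on a non-critical path can consume its float and extend the project duration.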
Support for Goal Oriented Requirements Engineering in Elastic Cloud Applications (zillesubhan)
Businesses have already started to exploit cloud computing as a new paradigm for promoting their services. Although in practice they focus on general concepts such as viability, survivability, and adaptability, on the ground there is still a lack of mechanisms to sustain viability while adapting to new requirements in cloud-based applications. This has inspired a pressing need to adopt new methodologies and abstract models that support system acquisition for self-adaptation, thus guaranteeing autonomic cloud application behavior. This paper relies on the state-of-the-art Neptune framework as a runtime-adaptive software development environment, supported by an intention-oriented modeling language, for the representation and adaptation of goal-based model artifacts and their intrinsic property requirements. Such an approach will in turn support distributed service-based applications virtually over the cloud, sustaining self-adaptive behavior with respect to their functional and non-functional characteristics.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID (ijgca)
Grid computing is an accumulation of heterogeneous, dynamic resources from multiple administrative areas, geographically distributed, that can be utilized to reach a common end. The development of resource provisioning-based scheduling in large-scale distributed environments like grid computing brings in new requirement challenges that were not considered in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to resources, and in such an environment it often cannot fulfill user requirements adequately. Satisfying users while provisioning resources can increase the benefit to resource suppliers. Resource scheduling has to satisfy multiple constraints specified by the user, and selecting a resource that satisfies multiple constraints is a tedious process. This problem is addressed by introducing a particle swarm optimization based heuristic scheduling algorithm, which attempts to select the most suitable resource from the set of available resources. The primary parameters considered in this work for selecting the most suitable resource are makespan and cost. The experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
IRJET-Framework for Dynamic Resource Allocation and Efficient Scheduling Stra... (IRJET Journal)
This document discusses a framework for dynamic resource allocation and efficient scheduling strategies in cloud computing platforms for high-performance computing (HPC). It proposes using a parallel genetic algorithm to find optimal allocation of virtual machines to physical resources in order to maximize resource utilization. The algorithm represents the resource allocation problem as an unbalanced job scheduling problem. It uses genetic operators like mutation and crossover to efficiently allocate requests for resources to idle nodes. Compared to a traditional genetic algorithm, the parallel genetic algorithm improves the speed of finding the best allocation and increases resource utilization. Future work could explore implementing dynamic load balancing and using big data concepts on the cloud.
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR... (ijgca)
Grid computing extends the computing platform to a collection of heterogeneous computing resources connected by a network across dynamic and geographically dispersed organizations, forming a distributed high-performance computing infrastructure. Grid computing solves complex computing problems across multiple machines and meets large-scale computational demands in a high-performance computing environment. The main emphasis in grid computing is on resource management and the job scheduler, whose goal is to maximize resource utilization and minimize the processing time of jobs. Existing approaches to grid scheduling do not give much emphasis to the performance of a grid scheduler on the processing-time parameter; schedulers typically allocate resources to jobs using the First Come First Serve algorithm. In this paper, we provide an optimized algorithm for the scheduler's queue using various scheduling methods such as Shortest Job First, First In First Out, and Round Robin. The job scheduling system is responsible for selecting the most suitable machines in a grid for user jobs. The management and scheduling system generates job schedules for each machine in the grid, taking static restrictions and dynamic parameters of jobs and machines into consideration. The main purpose of this paper is to develop an efficient job scheduling algorithm that maximizes resource utilization and minimizes the processing time of jobs. Queues can be optimized using various scheduling algorithms depending on the performance criteria to be improved, e.g., response time or throughput. The work has been done in MATLAB using the Parallel Computing Toolbox.
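The effect of reordering the scheduler's queue can be shown with the two simplest policies named above; the burst times are illustrative, and the paper's hybrid (SJF + FIFO + round robin over job groups) is richer than this:

```python
# Hedged sketch: the same job list under FIFO and SJF ordering, comparing
# mean waiting time on a single resource.

def mean_wait(bursts):
    wait, elapsed = 0.0, 0.0
    for b in bursts:
        wait += elapsed      # this job waited for everything before it
        elapsed += b
    return wait / len(bursts)

def fifo(bursts):
    return mean_wait(list(bursts))

def sjf(bursts):
    return mean_wait(sorted(bursts))    # shortest job first
```

SJF is provably optimal for mean waiting time on one machine, which is why it anchors the hybrid; FIFO and round robin are blended in to keep long jobs from starving.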
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR... (ijgca)
This document describes a proposed grouping based job scheduling algorithm for grid computing that aims to maximize resource utilization and minimize job processing times. It discusses related work on job scheduling algorithms and then presents the steps of the proposed algorithm. The algorithm uses shortest job first, first-in first-out, and round robin scheduling to process jobs in groups. The algorithm is evaluated experimentally in MATLAB and shown to reduce total job processing time compared to using only first-in first-out scheduling. Graphs demonstrate the processing time improvements achieved by the combined scheduling approach.
A survey on various resource allocation policies in cloud computing environment (eSAT Journals)
Abstract: Cloud computing is bringing a revolution to the computing environment, replacing traditional software installations and licensing issues with complete on-demand services through the internet. In cloud computing, multiple cloud users can request a number of cloud services simultaneously, so there must be a provision that all resources are made available to requesting users in an efficient manner to satisfy their needs. Resource allocation is based on quality of service and service level agreements. There are several methods to allocate resources to users in a cloud computing environment, but the provider should consider an efficient way to guarantee that applications' requirements are attended to correctly and the users' needs are satisfied. This paper surveys different resource allocation policies used in cloud computing environments. Keywords: Cloud computing, Resource allocation
A survey on various resource allocation policies in cloud computing environment (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
A Survey of Job Scheduling Algorithms With Hierarchical Structure to Load Ba... (Editor IJCATR)
Due to the advances in human civilization, problems in science and engineering are becoming more complicated than ever before. To solve these complicated problems, grid computing has become a popular tool. A grid environment collects, integrates, and uses heterogeneous or homogeneous resources scattered around the globe via a high-speed network. Scheduling problems are at the heart of any grid-like computational system: a good scheduling algorithm can assign jobs to resources efficiently and can balance the system load. In this paper we survey three algorithms for grid scheduling and compare their benefits and disadvantages based on makespan.
1) Load balancing is an important issue in cloud computing to improve performance and resource utilization. It aims to distribute tasks evenly among nodes to prevent overloading some nodes while leaving others idle.
2) There are two main categories of load balancing algorithms: static and dynamic. Static algorithms do not consider current system state while dynamic algorithms react to changing system states.
3) Prior research on load balancing in cloud computing has proposed approaches such as using a genetic algorithm to optimize load balancing and addressing delays in dynamic load balancing.
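The static/dynamic distinction in point (2) can be sketched as follows. This is an illustrative toy model (task costs and node count are made up, not from any of the surveyed papers): a static round-robin policy fixes assignments in advance, while a dynamic policy reacts to the current load.

```python
def static_round_robin(tasks, n_nodes):
    """Static: assignment is fixed in advance and ignores current node state."""
    loads = [0.0] * n_nodes
    for i, cost in enumerate(tasks):
        loads[i % n_nodes] += cost
    return loads

def dynamic_least_loaded(tasks, n_nodes):
    """Dynamic: each arriving task goes to the currently least-loaded node."""
    loads = [0.0] * n_nodes
    for cost in tasks:
        loads[loads.index(min(loads))] += cost
    return loads

tasks = [5, 1, 5, 1]                      # heterogeneous task costs
print(static_round_robin(tasks, 2))       # [10.0, 2.0] -- badly imbalanced
print(dynamic_least_loaded(tasks, 2))     # [6.0, 6.0]  -- balanced
```

With uneven task costs the static policy overloads one node while the other sits nearly idle, which is exactly the problem point (1) describes.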
Optimization of resource allocation in computational gridsijgca
The resource allocation in Grid computing system needs to be scalable, reliable and smart. It should also be adaptable to change its allocation mechanism depending upon the environment and user’s requirements. Therefore, a scalable and optimized approach for resource allocation where the system can adapt itself to the changing environment and the fluctuating resources is essentially needed. In this paper, a Teaching Learning based optimization approach for resource allocation in Computational Grids is proposed. The proposed algorithm is found to outperform the existing ones in terms of execution time and cost. The algorithm is simulated using GRIDSIM and the simulation results are presented.
The document discusses optimization of resource allocation in computational grids. It proposes using a Teaching-Learning Based Optimization (TLBO) approach for resource allocation. The TLBO algorithm is found to outperform existing algorithms like Ant Colony Optimization, Genetic Algorithm, and Particle Swarm Optimization in terms of execution time and cost. The algorithm is simulated using GRIDSIM and results are presented. Existing resource allocation strategies in computational grids are also reviewed, including static and dynamic approaches as well as auction/market-based models.
Effective and Efficient Job Scheduling in Grid ComputingAditya Kokadwar
The integration of remote and diverse resources and the increasing computational needs of Grand Challenge problems, combined with the rapid growth of the internet and communication technologies, have led to the development of global computational grids. Grid computing is a prevailing technology that unites underutilized resources to support sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential to achieve the promising capacity of grids. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate resources at minimum cost, merging this with a conventional job grouping strategy to provide better and more efficient job scheduling that benefits both the user and the resource broker. The proposed scheduling approach employs a dynamic cost-based job scheduling algorithm for efficiently mapping jobs to available resources in the grid. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
Cloud computing is the next generation of computation; in all probability, people will have everything they need on the cloud. Cloud computing provides resources, both software and hardware, to clients on demand. Cloud architectures are distributed and parallel and serve the needs of multiple clients in various situations, deploying resources distributively to deliver services efficiently to users in different geographical regions. Clients in a distributed environment generate requests randomly at any processor, and the major drawback of this randomness relates to task assignment: unequal assignment creates imbalance, with some processors overloaded while others are underloaded. The objective of load balancing is to transfer load from overloaded to underloaded processes transparently. Load balancing is one of the central issues in cloud computing: to achieve high performance, minimum response time, and a high resource utilization ratio, tasks must be transferred between nodes in the cloud network, from overloaded nodes to underloaded or idle ones. The following sections discuss cloud computing, load balancing techniques, and the proposed load balancing system. The proposed load balancing algorithm is simulated on the Cloud Analyst toolkit, and performance is analyzed on the parameters of overall response time, data transfer, average data center servicing time, and total cost of usage. Results are compared with three existing load balancing algorithms, namely Round Robin, Equally Spread Current Execution Load, and Throttled. Results of the case studies performed show more data transfer with minimum response time.
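Two of the baseline policies named above can be sketched as simple VM allocators. This is a minimal illustration of the allocation decision only (VM counts and the `active` request-count array are hypothetical, not taken from the paper): Round Robin cycles blindly, while Throttled only hands a request to a VM that is currently free.

```python
class RoundRobin:
    """Static policy: cycles through VMs regardless of their current load."""
    def __init__(self, n_vms):
        self.n_vms, self.next = n_vms, 0

    def pick(self, active):
        vm = self.next % self.n_vms
        self.next += 1
        return vm                      # may return an already-busy VM

class Throttled:
    """Dynamic policy: first VM with no active request; None means the
    request must wait in the data center queue."""
    def __init__(self, n_vms):
        self.n_vms = n_vms

    def pick(self, active):
        for vm in range(self.n_vms):
            if active[vm] == 0:
                return vm
        return None

active = [1, 0, 1]                     # per-VM count of in-flight requests
print(RoundRobin(3).pick(active))      # 0 -- assigned even though VM 0 is busy
print(Throttled(3).pick(active))       # 1 -- only idle VM is chosen
```

The contrast shows why dynamic policies tend to reduce response time under uneven load: Round Robin can stack requests on a busy VM while an idle one exists.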
A Comparative Study of Load Balancing Algorithms for Cloud ComputingIJERA Editor
Cloud computing is a fast-growing technology in both industry and academia. Users can access cloud services and pay based on resource usage. Balancing the load with minimum response time, maximum throughput, and good resource utilization is a major task for the cloud service provider. Many load balancing algorithms have been proposed to assign user requests to cloud resources efficiently. In this paper, three load balancing algorithms are simulated in Cloud Analyst and the results are compared.
Adapting Software Components Dynamically for the GridIOSR Journals
This document summarizes a research paper on dynamically adapting software components for grid computing. It presents a framework for dynamic adaptation that incorporates a layered architecture with concepts of dynamicity. The framework separates adaptation mechanisms from component content. It defines four steps for adaptation: observe the execution environment, decide if adaptation is needed, plan adaptation actions, and execute planned actions. The framework uses three levels - a functional level for component services, a component-independent level for adaptation mechanisms, and a component-specific level for developer customization. It evaluates using this framework to design dynamically adaptable scientific application components.
Weather and Climate Visualization softwareRahul Gupta
The document describes a software project to develop a visualization tool for weather and climate data analysis. The tool will read netCDF files and allow users to analyze the data, perform statistical operations, generate interpolated spatial maps and images, and visualize shapefiles. The software will be developed using Java and JavaFX for the graphical user interface. It will implement design patterns like MVC and work with data formats like netCDF, shapefile, and others. The goal is to provide an easy to use tool for scientists to perform complex climate and weather data analysis and visualization without needing to write scripts.
The document discusses strategies for migrating Lotus Notes applications to Google Apps. It recommends assessing applications based on usage and complexity in order to determine suitability for migration. Key aspects that can be migrated include application functionality, templates and logic, data, and allowing co-existence of Notes and Google Apps platforms. Google Sites, Spreadsheets, Scripts, Gadgets and App Engine are identified as targets for migrating different application components and functionality.
Challenges in Dynamic Resource Allocation and Task Scheduling in Heterogeneou...rahulmonikasharma
This document discusses the challenges of dynamic resource allocation and task scheduling in heterogeneous cloud environments. It outlines that resource allocation involves deciding how to allocate resources to tasks to maximize utilization, while task scheduling assigns tasks to processors to minimize execution time. The major challenges are optimizing allocated resources to minimize costs while meeting customer demands and application requirements. Allocating resources dynamically in heterogeneous cloud environments is difficult due to issues like resource contention, scarcity, and fragmentation. The document also discusses approaches to resource modeling, allocation, offering, discovery and monitoring that algorithms must address to effectively allocate resources on demand.
This document discusses and compares various load balancing techniques in cloud computing. It begins by introducing load balancing as an important issue in cloud computing for efficiently scheduling user requests and resources. Several load balancing algorithms are then described, including honeybee foraging algorithm, biased random sampling, active clustering, OLB+LBMM, and Min-Min. Metrics for evaluating and comparing load balancing techniques are defined, such as throughput, overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. The algorithms are then analyzed based on these metrics.
This document presents a policy-driven architecture for effective service allocation in cloud environments. It begins with an introduction to cloud computing and discusses challenges of scheduling client requests and allocating services as the number of clients increases. It then reviews previous research on resource allocation and process scheduling in distributed cloud systems. The paper proposes a service allocation model and policy-based architecture to address these challenges through effective identification of cloud and client characteristics. This would allow for reliable and efficient allocation of services to clients. The conclusion discusses evaluating the proposed architecture.
This document discusses resource leveling for a construction project. It begins by defining resource leveling as adjusting start and finish dates based on resource constraints to balance demand and supply. The document then describes different types of resource leveling like delaying tasks, splitting tasks, and overtime. It presents a case study of leveling resources like transit mixers and prestressing jacks on a bridge construction project. Initially some resources were overallocated, but leveling resolved the overallocations and avoided project delays. Leveling non-critical paths could cause some tasks to become critical and increase the project duration. The case study demonstrates how resource leveling in Microsoft Project can optimize a schedule.
Support for Goal Oriented Requirements Engineering in Elastic Cloud Applicationszillesubhan
Businesses have already started to exploit potential uses of cloud computing as a new paradigm for promoting their services. Although the general concepts they focus on in practice are viability, survivability, adaptability, and so on, on the ground there is still a lack of mechanisms to sustain viability while adapting to new requirements in cloud-based applications. This has inspired a pressing need to adopt new methodologies and abstract models that support system acquisition for self-adaptation, thus guaranteeing autonomic cloud application behavior. This paper relies on the state-of-the-art Neptune framework as a runtime adaptive software development environment, supported with an intention-oriented modeling language, for representing and adapting goal-based model artifacts and their intrinsic property requirements. Such an approach will in turn support distributed service-based applications over the cloud in sustaining self-adaptive behavior with respect to their functional and non-functional characteristics.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID ijgca
Grid computing is an accumulation of heterogeneous, dynamic resources from multiple administrative domains, geographically distributed, that can be utilized to reach a common goal. Resource-provisioning-based scheduling in large-scale distributed environments such as grids brings new requirements and challenges not considered in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it, in an environment that cannot otherwise fulfill user requirements adequately. Satisfying users while provisioning resources can also increase the benefit to resource suppliers. Resource scheduling has to satisfy multiple constraints specified by the user, and selecting a resource under multiple constraints is a tedious process. This problem is addressed by a particle swarm optimization based heuristic scheduling algorithm, which attempts to select the most suitable resource from the set of available resources. The primary selection parameters in this work are makespan and cost. Experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
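The makespan-plus-cost objective that such a scheduler optimizes can be sketched as a fitness function over a task-to-node mapping. All numbers and the 50/50 weighting below are illustrative assumptions, and exhaustive enumeration stands in for the PSO swarm on this toy problem size:

```python
from itertools import product

def fitness(mapping, task_len, node_speed, node_cost, w=0.5):
    """Weighted makespan + cost objective for a task->node mapping.
    The weight w is a hypothetical choice, not the paper's."""
    finish = [0.0] * len(node_speed)
    cost = 0.0
    for t, n in enumerate(mapping):
        exec_time = task_len[t] / node_speed[n]
        finish[n] += exec_time
        cost += exec_time * node_cost[n]
    return w * max(finish) + (1 - w) * cost

def best_mapping(task_len, node_speed, node_cost):
    """Exhaustive search as a stand-in for the PSO swarm update."""
    nodes = range(len(node_speed))
    return min(product(nodes, repeat=len(task_len)),
               key=lambda m: fitness(m, task_len, node_speed, node_cost))

# Two tasks, two nodes: node 0 is fast but expensive, node 1 slow but cheap.
print(best_mapping([4, 2], [2, 1], [3, 1]))  # (0, 1)
```

The optimum splits work between the fast/expensive and slow/cheap nodes, which is the kind of makespan-cost trade-off a PSO particle encodes as a position.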
IRJET-Framework for Dynamic Resource Allocation and Efficient Scheduling Stra...IRJET Journal
This document discusses a framework for dynamic resource allocation and efficient scheduling strategies in cloud computing platforms for high-performance computing (HPC). It proposes using a parallel genetic algorithm to find optimal allocation of virtual machines to physical resources in order to maximize resource utilization. The algorithm represents the resource allocation problem as an unbalanced job scheduling problem. It uses genetic operators like mutation and crossover to efficiently allocate requests for resources to idle nodes. Compared to a traditional genetic algorithm, the parallel genetic algorithm improves the speed of finding the best allocation and increases resource utilization. Future work could explore implementing dynamic load balancing and using big data concepts on the cloud.
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR...ijgca
Grid computing extends the computing platform to a collection of heterogeneous computing resources connected by a network across dynamic and geographically dispersed organizations, forming a distributed high-performance computing infrastructure. Grid computing solves complex computing problems across multiple machines and addresses large-scale computational demands in a high-performance computing environment. The main emphases in grid computing are resource management and the job scheduler, whose goal is to maximize resource utilization and minimize job processing time. Existing grid scheduling approaches do not give much emphasis to the scheduler's performance on the processing-time parameter; schedulers typically allocate resources to jobs using the First Come First Served algorithm. In this paper, we provide an optimized algorithm for the scheduler's queue using scheduling methods such as Shortest Job First, First In First Out, and Round Robin. The job scheduling system is responsible for selecting the best suitable machines in a grid for user jobs; the management and scheduling system generates job schedules for each machine by taking static restrictions and dynamic parameters of jobs and machines into consideration. The main purpose of this paper is to develop an efficient job scheduling algorithm that maximizes resource utilization and minimizes job processing time. Queues can be optimized using various scheduling algorithms depending on the performance criteria to be improved, e.g. response time or throughput. The work has been done in MATLAB using the Parallel Computing Toolbox.
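The grouping idea combined with Shortest Job First ordering can be sketched as a simple packer. This is a minimal illustration under assumed units (job lengths and a per-group "capacity" standing in for resource granularity), not the paper's MATLAB implementation:

```python
def group_jobs(job_lengths, capacity):
    """Order jobs by Shortest Job First, then pack them into groups whose
    total length fits the resource's capacity, so each group is submitted
    as one coarse-grained unit (reducing scheduling overhead)."""
    groups, current, total = [], [], 0
    for length in sorted(job_lengths):       # SJF ordering
        if total + length > capacity and current:
            groups.append(current)
            current, total = [], 0
        current.append(length)
        total += length
    if current:
        groups.append(current)
    return groups

print(group_jobs([4, 2, 7, 1, 3], capacity=7))  # [[1, 2, 3], [4], [7]]
```

Many small jobs collapse into one group, so the scheduler dispatches three units instead of five, which is where the processing-time reduction over plain FIFO comes from.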
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR...ijgca
This document describes a proposed grouping based job scheduling algorithm for grid computing that aims to maximize resource utilization and minimize job processing times. It discusses related work on job scheduling algorithms and then presents the steps of the proposed algorithm. The algorithm uses shortest job first, first-in first-out, and round robin scheduling to process jobs in groups. The algorithm is evaluated experimentally in MATLAB and shown to reduce total job processing time compared to using only first-in first-out scheduling. Graphs demonstrate the processing time improvements achieved by the combined scheduling approach.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
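The two gang scheduling policies named above differ only in which waiting gang is dispatched when processors free up. A minimal sketch of that selection step, with hypothetical gang sizes (each gang is represented just by the number of processors it needs):

```python
def afcfs_pick(queue, free_procs):
    """Adaptive First Come First Serve: the first waiting gang that fits
    the currently free processors is dispatched (small gangs can jump
    ahead of a large blocked one)."""
    for i, need in enumerate(queue):
        if need <= free_procs:
            return i
    return None

def ljfs_pick(queue, free_procs):
    """Largest Job First Serve: among gangs that fit, dispatch the one
    needing the most processors."""
    fitting = [(need, i) for i, need in enumerate(queue) if need <= free_procs]
    return max(fitting)[1] if fitting else None

queue = [4, 2, 3]            # processors required by each waiting gang
print(afcfs_pick(queue, 3))  # 1 -- first gang that fits
print(ljfs_pick(queue, 3))   # 2 -- largest gang that fits
```

AFCFS favors responsiveness for small gangs, while LJFS keeps more processors busy per dispatch; the performance/cost evaluation in the summarized work compares exactly this trade-off.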
A model for run time software architecture adaptationijseajournal
Since the global demand for software systems is increasing and environments and systems are constantly changing, the adaptability of software systems is of significant importance. Because a software system's architecture is a high-level view of the system and makes modifiability possible at an overall level, adapting software by changing the architecture configuration can be an effective approach. In this study, the architecture configuration is modified through xADL, a highly flexible software architecture description language. Software architecture reconfiguration is done based on the rules of a rule-based system, written with respect to three strategies: load balancing, fixed bandwidth, and fixed latency. The proposed model is simulated on samples of a client-server system, a video conferencing system, and a students' grading system. The proposed model can be used with all types of architecture, including client-server architecture, service-oriented architecture, and others.
Context sensitive indexes for performance optimization of sql queries in mult...avinash varma sagi
This document proposes context-sensitive indexes to optimize SQL query performance in multi-tenant and multi-application database environments. Current database architectures require indexes to be considered for all queries on a table, posing challenges for query optimization. The proposal is for applications and tenants to define their own indexes on shared tables to optimize their queries, while keeping indexes isolated from other applications and tenants for optimization purposes. The document provides background on challenges with mixed workloads and motivation for the proposal, which could lead to better optimized query processing and improved performance and scalability.
Software size estimation at early stages of project development holds great significance in meeting the competitive demands of the software industry. Software size is one of the most interesting internal attributes and has been used in several effort/cost models as a predictor of the effort and cost needed to design and implement software. As the industry focuses on the object-oriented paradigm, it is essential to use an accurate methodology for measuring the size of object-oriented projects. The class point approach is used to quantify classes, the logical building blocks of the object-oriented paradigm. In this paper, we propose a class point based approach for software size estimation of On-Line Analytical Processing (OLAP) systems. OLAP is an approach to swiftly answering decision support queries based on a multidimensional view of data; materialized views can significantly reduce the execution time of such queries. We perform a case study based on the TPC-H benchmark, a representative OLAP system, using a greedy approach to determine a good set of views to materialize. After finding the number of views, the class point approach is used to estimate the size of the OLAP system. The results of our approach are validated.
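The greedy view-selection step can be sketched under a deliberately simplified cost model (the view names, costs, and query sets below are invented for illustration and are not the paper's TPC-H figures): repeatedly materialize whichever view gives the largest total reduction in query answering cost.

```python
def greedy_select(base_cost, views, k):
    """base_cost: {query: cost of answering from base tables}.
    views: {view_name: (answer_cost, set_of_queries_it_can_answer)}.
    Greedily pick up to k views by total query-cost reduction."""
    current = dict(base_cost)           # current best cost per query
    chosen = []
    for _ in range(k):
        best, best_gain = None, 0
        for name, (cost, queries) in views.items():
            if name in chosen:
                continue
            gain = sum(max(0, current[q] - cost) for q in queries)
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:                # no remaining view helps
            break
        chosen.append(best)
        cost, queries = views[best]
        for q in queries:
            current[q] = min(current[q], cost)
    return chosen

base = {'q1': 100, 'q2': 100, 'q3': 100}
views = {'v1': (10, {'q1', 'q2'}),      # cheap view covering two queries
         'v2': (20, {'q3'}),
         'v3': (5,  {'q1'})}
print(greedy_select(base, views, k=2))  # ['v1', 'v2']
```

Note that `v3`, attractive in isolation, loses almost all its benefit once `v1` is chosen; greedy selection accounts for that by re-evaluating gains after each pick.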
Building scalable and resilient web applications on the cloud platformEdwin Orori
The document discusses building scalable and resilient web applications on Google Cloud Platform. It states that applications need to be able to seamlessly scale up and down based on demand fluctuations and remain resilient by continuing operations even if some resources fail. Google Cloud services like Autoscaler and Compute Engine help make resources adjustable as needed. The paper will provide details on using Google Cloud Platform to build architectures that are both resilient and scalable.
Context-Sensitive Indexes for Performance Optimization of SQL Queries in Mult...Arjun Sirohi
This document discusses context-sensitive indexes in relational database management systems to optimize SQL query performance in multi-tenant and multi-application environments. It proposes allowing applications, tenants, and users to define their own indexes on shared database tables to optimize queries specific to them, while keeping these indexes isolated from other queries for optimization purposes. Currently, database optimizers consider all indexes on referenced tables uniformly, regardless of purpose, leading to suboptimal performance. The proposal aims to address this by making indexes sensitive to the context and origin of queries, improving optimization and response times for complex workloads on shared databases and schemas.
THE EFFECT OF THE RESOURCE CONSUMPTION CHARACTERISTICS OF CLOUD APPLICATIONS ...ijccsa
Auto scaling is a service provided by the cloud service provider that provisions temporary resources to the subscriber's systems to prevent overloading. Many auto scaling methods have been proposed and applied; among them, solutions based on low-level metrics are commonly used in industry systems. Resource statistics are the basis for detecting overload situations and provisioning additional resources in a timely manner, but the effectiveness of these methods depends heavily on the accuracy of the overload calculation from low-level metrics. Overloading is usually treated in these solutions as a shortage of CPU resources. However, resource demand comes from the running application, and each application demands different resource types, with different CPU, memory, and I/O ratios, so overload cannot be computed from CPU consumption statistics alone. The point of view here is that, even when based on low-level resources, the source for calculation and forecasting should be the application's characteristic resource needs. In this paper, we develop an empirical model to assess the effect of an application's resource consumption characteristics on the efficiency of low-metric auto scaling solutions, and propose an auto scaling solution calculated from statistics of multiple resource types. Simulation results show that the proposed solution based on multiple resources performs better.
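The core argument, that a CPU-only trigger misses memory- or I/O-bound applications, can be shown with a minimal scale-out check over several resource types. The thresholds and usage figures are illustrative, not from the paper:

```python
def should_scale_out(usage, thresholds):
    """Scale out if ANY monitored resource exceeds its threshold.
    A CPU-only policy would inspect usage['cpu'] alone."""
    return any(usage[r] > thresholds[r] for r in thresholds)

thresholds = {'cpu': 0.80, 'mem': 0.85, 'io': 0.85}
memory_bound_app = {'cpu': 0.40, 'mem': 0.92, 'io': 0.30}

print(memory_bound_app['cpu'] > thresholds['cpu'])        # False: CPU-only misses it
print(should_scale_out(memory_bound_app, thresholds))     # True: memory is saturated
```

A CPU-only check would leave this memory-bound application overloaded; the multi-resource check catches the saturation.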
Similar to APPLICATION LEVEL SCHEDULING (APPLES) IN GRID WITH QUALITY OF SERVICE (QOS) (20)
11th International Conference on Computer Science, Engineering and Informati...ijgca
11th International Conference on Computer Science, Engineering and Information Technology (CSEIT 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science, Engineering and Information Technology. The Conference looks for significant contributions to all major fields of Computer Science and Information Technology in theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
SERVICE LEVEL AGREEMENT BASED FAULT TOLERANT WORKLOAD SCHEDULING IN CLOUD COM...ijgca
Cloud computing is a concept of providing user- and application-oriented services in a virtual environment. Users can use the various cloud services dynamically as per their requirements. Different users have different requirements in terms of application reliability, performance, and fault tolerance, but static and rigid fault tolerance policies provide a fixed degree of fault tolerance, and a fixed overhead. In this research work we propose a method to implement dynamic fault tolerance that considers customer requirements. Cloud users are classified into subclasses according to their fault tolerance requirements, and their jobs are classified into compute-intensive and data-intensive categories. Varying degrees of fault tolerance are applied, consisting of replication and an input buffer. Simulation-based experiments show that the proposed dynamic method performs better than existing methods.
SERVICE LEVEL AGREEMENT BASED FAULT TOLERANT WORKLOAD SCHEDULING IN CLOUD COM...ijgca
Cloud computing is a concept of providing user and application oriented services in a virtual environment.
Users can use the various cloud services as per their requirements dynamically. Different users have
different requirements in terms of application reliability, performance and fault tolerance. Static and rigid
fault tolerance policies provide a consistent degree of fault tolerance as well as overhead. In this research
work we have proposed a method to implement dynamic fault tolerance considering customer
requirements. The cloud users have been classified in to sub classes as per the fault tolerance requirements.
Their jobs have also been classified into compute intensive and data intensive categories. The varying
degree of fault tolerance has been applied consisting of replication and input buffer. From the simulation
based experiments we have found that the proposed dynamic method performs better than the existing
methods.
11th International Conference on Computer Science, Engineering and Informatio...ijgca
11th International Conference on Computer Science, Engineering and Information Technology (CSEIT 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science, Engineering and Information Technology. The Conference looks for significant contributions to all major fields of the Computer Science and Information Technology in theoretical and practical aspects. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
Load balancing functionalities are crucial for best Grid performance and utilization. Accordingly,this
paper presents a new meta-scheduling method called TunSys. It is inspired from the natural phenomenon of
heat propagation and thermal equilibrium. TunSys is based on a Grid polyhedron model with a spherical
like structure used to ensure load balancing through a local neighborhood propagation strategy.
Furthermore, experimental results compared to FCFS, DGA and HGA show encouraging results in terms
of system performance and scalability and in terms of load balancing efficiency.
11th International Conference on Computer Science and Information Technology ...ijgca
11th International Conference on Computer Science and Information Technology (CSIT 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science and Information Technology. The Conference looks for significant contributions to all major fields of the Computer Science and Information Technology in theoretical and practical aspects. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
AN INTELLIGENT SYSTEM FOR THE ENHANCEMENT OF VISUALLY IMPAIRED NAVIGATION AND...ijgca
Technological advancement has brought the masses unprecedented convenience, but unnoticed by many, a
population neglected through the age of technology has been the visually impaired population. The visually
impaired population has grown through ages with as much desire as everyone else to adventure but lack
the confidence and support to do so. Time has transported society to a new phase condensed in big data,
but to the visually impaired population, this quick-pace living lifestyle, along with the unpredictable nature
of natural disaster and COVID-19 pandemic, has dropped them deeper into a feeling of disconnection from
the society. Our application uses the global positioning system to support the visually impaired in
independent navigation, alerts them in face of natural disasters, and reminds them to sanitize their devices
during the COVID-19 pandemic
13th International Conference on Data Mining & Knowledge Management Process (...ijgca
13th International Conference on Data Mining & Knowledge Management Process (CDKP 2024) provides a forum for researchers who address this issue and to present their work in a peer-reviewed forum.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to these topics only.
Call for Papers - 15th International Conference on Wireless & Mobile Networks...ijgca
15th International Conference on Wireless & Mobile Networks (WiMoNe 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Wireless & Mobile computing Environment. Current information age is witnessing a dramatic use of digital and electronic devices in the workplace and beyond. Wireless, Mobile Networks & its applications had received a significant and sustained research interest in terms of designing and deploying large scale and high performance computational applications in real life. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
Call for Papers - 4th International Conference on Big Data (CBDA 2023)ijgca
4th International Conference on Big Data (CBDA 2023) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the areas of Big Data. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancement in the area of Big Data.
Call for Papers - 15th International Conference on Computer Networks & Commun...ijgca
15th International Conference on Computer Networks & Communications (CoNeCo 2023) looks for significant contributions to the Computer Networks & Communications for Wired and Wireless Networks in theoretical and practical aspects. Original papers are invited on Computer Networks, Network Protocols and Wireless Networks, Data Communication Technologies, and Network Security. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Call for Papers - 15th International Conference on Computer Networks & Commun...ijgca
15th International Conference on Computer Networks & Communications (CoNeCo 2023) looks for significant contributions to the Computer Networks & Communications for Wired and Wireless Networks in theoretical and practical aspects. Original papers are invited on Computer Networks, Network Protocols and Wireless Networks, Data Communication Technologies, and Network Security. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Call for Papers - 9th International Conference on Cryptography and Informatio...ijgca
9th International Conference on Cryptography and Information Security (CRIS 2023) provides a forum for researchers who address this issue and to present their work in a peer-reviewed forum. It aims to bring together scientists, researchers and students to exchange novel ideas and results in all aspects of cryptography, coding and Information security.
Call for Papers - 9th International Conference on Cryptography and Informatio...ijgca
9th International Conference on Cryptography and Information Security (CRIS 2023) provides a forum for researchers who address this issue and to present their work in a peer-reviewed forum. It aims to bring together scientists, researchers and students to exchange novel ideas and results in all aspects of cryptography, coding and Information security.
Call for Papers - 4th International Conference on Machine learning and Cloud ...ijgca
4th International Conference on Machine learning and Cloud Computing (MLCL 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Cloud computing. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
Call for Papers - 11th International Conference on Data Mining & Knowledge Ma...ijgca
11th International Conference on Data Mining & Knowledge Management Process (DKMP 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Data Mining and knowledge management process. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Modern data mining concepts and establishing new collaborations in these areas.
Call for Papers - 4th International Conference on Blockchain and Internet of ...ijgca
4th International Conference on Blockchain and Internet of Things (BIoT 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Blockchain and Internet of Things. The Conference looks for significant contributions to all major fields of the Blockchain and Internet of Things in theoretical and practical aspects.
Call for Papers - International Conference IOT, Blockchain and Cryptography (...ijgca
The 4th International Conference on Cloud, Big Data and Web Services (CBW 2023) will take place from March 25-26, 2023 in Sydney, Australia. The conference aims to facilitate the exchange of innovative ideas and research related to cloud computing, big data, and web services. Authors are invited to submit papers by February 18, 2023 on topics including cloud platforms, big data analytics, and web service models and architectures. Selected papers will be published in related journals.
Call for Paper - 4th International Conference on Cloud, Big Data and Web Serv...ijgca
4th International Conference on Cloud, Big Data and Web Services (CBW 2023) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the areas of Cloud, Big Data and Web services. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancement in the area of Cloud, Big Data and web services.
Call for Papers - International Journal of Database Management Systems (IJDMS)ijgca
The International Journal of Database Management Systems (IJDMS) is a bi monthly open access peer-reviewed journal that publishes articles which contributenew results in all areas of the database management systems & its applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding Modern developments in this filed and establishing new collaborations in these areas.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
APPLICATION LEVEL SCHEDULING (APPLES) IN GRID WITH QUALITY OF SERVICE (QOS)

International Journal of Grid Computing & Applications (IJGCA) Vol.5, No.2, June 2014
DOI: 10.5121/ijgca.2014.5201

CH V T E V Laxmi¹, Dr. K. Somasundaram²
¹Research Scholar, Department of Computer Science Engineering, Karpagam University, Visakhapatnam, Andhra Pradesh, India
²Research Supervisor, Karpagam University; Professor, Department of Computer Science and Engineering, Jaya Engineering College, CTH Road, Prakash Nagar, Thiruninravur, Thiruvallur Dist., Tamilnadu
Abstract: Grid computing is a form of distributed computing that involves coordinating and sharing computational power, data storage and network resources across dynamic and geographically dispersed organizations [6]. In a computational grid, the fundamental management tasks are the resource and workload management services, such as discovering, monitoring and scheduling resources. In this paper, we approach the problem of grid scheduling through grid scheduling with Quality of Service (QoS). An Application-Level Scheduling system (AppLeS) is applied in grid computing to measure the performance of an application on a specific site resource, and this information is used to make resource selection and scheduling decisions. We propose an architecture for the Application Level Scheduling system with a resource manager, and we also propose a Page Fault Frequency Replacement (PFFR) algorithm for the Application Level Scheduling system, as it might exhibit "thrashing".
Keywords: Application Level Scheduling, Grid Computing, Grid scheduling, Resource discovery.
1. INTRODUCTION
Grid computing is a paradigm used to provide solutions for engineering, science, industry and commerce. It is the collection of computer resources from multiple locations to reach a common goal. A grid can be considered a distributed system with non-interactive workloads that involve a large number of files. Grids are generally constructed using general-purpose grid middleware software libraries.
Due to the increasing number of applications, utilization of grid infrastructure has grown drastically to meet computational, storage and other needs. A single site cannot simply meet all the resource needs of today's demanding applications, so using distributed resources can bring many advantages to application users. Heterogeneous, geographically distributed and dynamically available resources can be managed efficiently by deploying them in grid systems [6]. However, the grid environment can be used effectively only through its schedulers, which act as localized resource brokers.
A grid computing job can be split into many small tasks. The responsibility of the scheduler is to select resources and schedule jobs in such a way that user and application requirements are met, in terms of the overall execution time and the cost of the resources utilized in the scheduling process. [1]
In grid computing, researchers have implemented and evaluated six different scheduling policies, both to demonstrate what their simulator is capable of and to help them understand the dynamics of a grid computing system. [4]
2. RELATED WORK
2.1 Grid Scheduling with QoS
Condor, SGE, PBS and LSF are four systems widely used for grid-based resource management and job scheduling. One major problem with these four systems is their lack of Quality of Service (QoS) support in scheduling jobs; such a system should take the following issues into account when scheduling jobs:
• Job characteristics
• Market-based scheduling model
• Planning in scheduling
• Rescheduling
• Scheduling optimization
• Performance prediction
2.2 Application Level Scheduling (AppLeS)
AppLeS [2] is an adaptive application-level scheduling system that can be applied to the grid. Each application submitted to the grid can have its own AppLeS. The design philosophy of AppLeS is that all aspects of system performance and utilization are experienced from the perspective of an application using the system. To achieve application performance, AppLeS measures the performance of the application on a specific site resource and utilizes this information to make resource selection and scheduling decisions. Figure 1 shows the architecture of AppLeS.
Figure 1 Architecture of AppLeS
The AppLeS components are:
• Network Weather Service (NWS) [3]: dynamically gathers information on the system state and forecasts resource loads.
• User specifications: information about the user's criteria for aspects such as performance, execution constraints and specific requests for implementation.
• Model: a repository of default models, populated by similar classes of applications and specific applications, that can be used for performance estimation, planning and resource selection.
• Resource selector: chooses and filters different resource combinations.
• Planner: generates the description of the resource-dependent schedule from a given resource combination.
• Performance estimator: estimates candidate schedules according to the user's performance metric.
• Coordinator: chooses the "best" schedule.
• Actuator: implements the "best" schedule on the target resource management system.
When the Application Level Scheduling system (AppLeS) is used, the following steps are performed:
• The user provides information to AppLeS via a Heterogeneous Application Template (HAT) and user specifications. The HAT provides information on the structure, characteristics and implementation of an application and its tasks.
• The coordinator uses this information to filter out infeasible or possibly bad schedules.
• The resource selector identifies promising sets of resources and prioritizes them based on the logical "distance" between resources.
• The planner computes a potential schedule for each viable resource configuration.
• The performance estimator evaluates each schedule in terms of the user's performance objective.
• The coordinator chooses the best schedule and then implements it with the actuator.
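The coordinator loop implied by these steps can be sketched as follows. This is only an illustration of the select–plan–estimate–enact flow: all class and method names are hypothetical stand-ins, not the actual AppLeS API.

```python
# Sketch of the AppLeS per-job scheduling loop described above.
# All component interfaces are illustrative, not the real AppLeS API.

def schedule(hat, user_spec, resource_selector, planner, estimator, actuator):
    """Pick and enact the best schedule for one application."""
    # The resource selector proposes promising resource sets from the
    # HAT and user specifications (infeasible ones already filtered out).
    candidates = resource_selector.select(hat, user_spec)
    best_schedule, best_score = None, float("inf")
    for resources in candidates:
        plan = planner.plan(hat, resources)          # resource-dependent schedule
        score = estimator.estimate(plan, user_spec)  # user's performance metric
        if score < best_score:                       # lower = better (e.g. time)
            best_schedule, best_score = plan, score
    if best_schedule is not None:
        actuator.enact(best_schedule)                # submit to the target RMS
    return best_schedule
```

Each component is pluggable, which mirrors the paper's point that AppLeS bases its decisions on application-specific performance characteristics rather than a fixed system-wide policy.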
AppLeS differs from other scheduling systems in that its resource selection and scheduling decisions are based on the specific needs and exhibited performance characteristics of an application. AppLeS targets parallel master–slave applications. Condor, SGE, PBS and LSF do not take application-level attributes into account when scheduling a job. Note that AppLeS is not a resource management system; it is a grid application-level scheduling system.
2.3 Scheduling in Grid Application Development Software (GrADS)
AppLeS focuses on per-job scheduling; each application has its own AppLeS. When scheduling a job, AppLeS assumes that there is only one job using the resources. AppLeS does not have resource managers that can negotiate with applications to balance their interests, and the absence of such negotiating mechanisms in grid computing leads to various problems, since each AppLeS focuses only on improving the performance of its own application. There will be many AppLeS agents in a system simultaneously, each working on behalf of its own application. In the worst case, all of the AppLeS agents identify the same resources as "best" for their applications and seek to use them simultaneously. When the targeted resources are no longer available, they might seek to reschedule their applications on other resources. In this way, multiple unconstrained AppLeS might exhibit "thrashing" behavior and achieve good performance neither for their own applications nor from the system's perspective. This is called the Bushel of AppLeS Problem.
The Grid Application Development Software (GrADS) project [4] provides a comprehensive programming environment that incorporates application characteristics and requirements into the design of the application.
3. PROPOSED APPROACH
The Application Level Scheduling system (AppLeS) focuses on per-job scheduling, and each application has its own AppLeS. When scheduling a job, AppLeS assumes that there is only one job using the resources. One problem with AppLeS is that it does not have resource managers to manage the resources, and the absence of these negotiating mechanisms in the grid can lead to a fall in performance.
There will be many AppLeS agents in a system simultaneously, each working on behalf of its own application. All the AppLeS agents may identify the same resources as "best" for their applications and seek to use them simultaneously, which leads to a worst-case scenario. When the targeted resources are not available, they will reschedule their applications to other resources. In this way, multiple unconstrained AppLeS might exhibit "thrashing" behavior and fail to achieve good performance.
The main aim of this project is to provide an integrated grid application development solution incorporating activities such as compilation, scheduling, staging of binaries and data, application launching, and monitoring during execution. In GrADS, the meta-scheduler receives candidate schedules from the various application-level schedulers and implements scheduling policies for balancing different applications, as shown in Figure 2.
Figure 2 Job scheduling in GrADS
We propose an architecture for Application Level Scheduling (AppLeS) with a resource manager, shown in Figure 5, since the lack of a resource manager in AppLeS leads to a fall in performance. In a system there can be many AppLeS agents simultaneously working on behalf of their own applications. All AppLeS agents may identify the same resources as best for their applications and seek to use them simultaneously, which leads to a worst-case scenario; if the targeted resources are no longer available, all of them have to reschedule their applications on other resources. In this way AppLeS might exhibit "thrashing" behaviour and achieve good performance neither for their own applications nor from the system's perspective.
Therefore we propose a Page Fault Frequency Replacement (PFFR) algorithm to overcome the thrashing behaviour of Application Level Scheduling in Grid (AppLeS). PFFR can be implemented in the proposed architecture of Application Level Scheduling (AppLeS) with a resource manager, to overcome the thrashing behaviour of grid AppLeS, to improve the performance of the system, and to improve the proper usage of the available resources.
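The page-fault-frequency idea behind PFFR can be sketched minimally as follows. The thresholds and the frame pool here are illustrative assumptions, not values from the paper: a process whose measured fault rate exceeds the upper threshold is given more frames, and one whose rate falls below the lower threshold releases a frame back to the pool.

```python
# Minimal sketch of a page-fault-frequency controller of the kind PFFR
# adapts. UPPER/LOWER thresholds and the frame pool are assumed values.

UPPER, LOWER = 0.5, 0.1   # faults per reference; illustrative thresholds

class PFFController:
    def __init__(self, free_frames):
        self.free_frames = free_frames   # spare frames beyond initial sets
        self.allocation = {}             # process id -> frames held

    def register(self, pid, frames):
        self.allocation[pid] = frames    # initial working-set allocation

    def adjust(self, pid, faults, references):
        """Re-balance pid's frame allocation from its measured fault rate."""
        rate = faults / max(references, 1)
        if rate > UPPER and self.free_frames > 0:
            self.allocation[pid] += 1    # thrashing risk: grow working set
            self.free_frames -= 1
        elif rate < LOWER and self.allocation[pid] > 1:
            self.allocation[pid] -= 1    # surplus: release a frame
            self.free_frames += 1
        return self.allocation[pid]
```

Because frames move only when the measured rate crosses a threshold, the allocation stabilizes instead of oscillating, which is exactly the behaviour needed to suppress the thrashing described below.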
3.1 Thrashing
As the number of processes submitted to the CPU for execution increases, CPU utilization also increases. But if processes continue to be added, at a certain point CPU utilization falls sharply; the CPU is overloaded and utilization sometimes reaches zero. This situation is called "thrashing"; Figure 3 demonstrates the thrashing behaviour. The term thrashing denotes the excessive overhead and severe performance degradation or collapse caused by too much paging. Thrashing inevitably turns a shortage of memory space into a surplus of processor time.
For example, suppose main memory initially holds 5 jobs and CPU utilization is 0.6; after a few seconds, 5 more jobs are added and utilization increases to 0.8; after another 5 jobs are added, utilization suddenly drops to 0.1 or 0.2, and may sometimes reach zero. This unexpected situation is the "thrashing" that can arise in Application Level Scheduling (AppLeS).
Figure 3 Thrashing
3.2 Resource Manager
The idea of the resource manager is generic, in that there is a basic behavioural pattern expected from each one. Resource managers can be implemented as an object-oriented type hierarchy: a base class is defined that characterizes all of the behaviour except for some details. For example, for an audio speaker defined as a resource manager, we inherit the behaviour of the generic resource manager base class, and the details specific to the audio speaker resource are defined in a subclass. Therefore we can say that the generic part of the resource manager is the key mechanism for allocating resources, while resource-specific behaviours are determined by their policies.
The resource manager accepts requests from processes and allocates units of its resources. To determine the criteria for allocating resources to processes, the request() function executes a resource-specific policy algorithm. For example, the resource manager policy of an audio speaker might forbid sharing of the resource, since otherwise multiple processes could play sound to the speaker at the same time. It may also restrict which processes can be allocated control of the audio speaker; for example, the policy may restrict use of the speaker to processes owned by a particular user. As another example, the resource manager may pre-empt the speaker from other processes when a process executing its function has higher priority than a process playing music from the CD-ROM device. All resource managers have the general form shown in Figure 4. In each case, a process requests units of a resource. If the resource manager allocates the resource, the process continues to run. Otherwise, the blocked process is placed in a pool to await allocation. After allocation, the process is removed from the pool and made ready to run.
Figure 4 Resource Manager
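The allocate-or-block pattern of Figure 4 can be sketched as a small object-oriented hierarchy. The class and method names below (ResourceManager, AudioSpeakerManager, request, release) are illustrative assumptions, not part of the AppLeS codebase; the sketch only shows the generic mechanism with a resource-specific policy hook.

```python
from collections import deque

class ResourceManager:
    """Generic base class: allocate units or block the requesting process."""

    def __init__(self, name, total_units):
        self.name = name
        self.available = total_units
        self.blocked = deque()          # pool of processes awaiting allocation

    def policy_allows(self, process, units):
        """Resource-specific policy hook; subclasses override this."""
        return units <= self.available

    def request(self, process, units=1):
        """Return True if allocated; otherwise block the process in the pool."""
        if self.policy_allows(process, units):
            self.available -= units
            return True
        self.blocked.append((process, units))
        return False

    def release(self, units=1):
        """Return units and wake a blocked process if its request now fits."""
        self.available += units
        if self.blocked and self.policy_allows(*self.blocked[0]):
            process, needed = self.blocked.popleft()
            self.available -= needed
            return process              # this process is made ready to run
        return None

class AudioSpeakerManager(ResourceManager):
    """Example policy: the speaker is exclusive and restricted to one owner."""

    def __init__(self, owner):
        super().__init__("audio-speaker", total_units=1)
        self.owner = owner

    def policy_allows(self, process, units):
        # forbid sharing and restrict allocation to the owner's processes
        return self.available == 1 and process.startswith(self.owner)
```

The design point is that request() and release() live entirely in the base class; only policy_allows() differs per resource, matching the paper's split between the generic allocation mechanism and resource-specific policies.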
In this paper we propose an architecture for Application Level Scheduling (AppLeS) with a resource manager, since AppLeS lacks a resource manager and this leads to a fall in performance. To obtain information about the resources, the resource manager grants applications permission to create, delete, open, modify and write resources. A resource can be treated as data of any kind stored in a defined format in a resource file. The resource manager provides functions for proper resource management and also keeps track of the resources present in memory.
Figure 5 Architecture of Application Level Scheduling (AppLeS) with resource manager
Figure 5 shows the Resource Manager incorporated into the Application Level Scheduling (AppLeS) architecture. The original AppLeS architecture has no resource management system, which leads to a fall in performance. Moreover, many AppLeS agents can run in a system simultaneously, each working on behalf of its own application. In the worst case, all AppLeS agents may identify the same resources as "best" for their applications and seek to use them simultaneously, precisely because the AppLeS architecture has no resource manager. This problem can be solved by adding a Resource Manager to the AppLeS architecture.
Each resource manager maintains a resource descriptor, a data structure describing the resource it manages; the details of the descriptor depend on the resource and the grid scheduler. Table 1 shows the kind of information found in a resource descriptor.
Field                       Description
Internal resource name      An internal name for the resource, used by the grid scheduler
Total units                 The number of units of this resource type configured into the system
Available units             The number of units currently available
List of available units     The set of units of this resource type currently available for use by processes
List of blocked processes   The list of processes that have a pending request for units of this resource type

Table 1 Information in a resource descriptor
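A minimal sketch of such a descriptor follows. The field names mirror Table 1; the concrete types and the dataclass form are assumptions for illustration, not prescribed by the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceDescriptor:
    """Per-resource-type record maintained by a resource manager (cf. Table 1)."""
    internal_name: str                  # internal name used by the grid scheduler
    total_units: int                    # units of this type configured into the system
    available_units: list = field(default_factory=list)    # ids of units currently free
    blocked_processes: list = field(default_factory=list)  # pending requests for units

    @property
    def available_count(self) -> int:
        """The 'Available units' field: how many units are currently free."""
        return len(self.available_units)
```

Keeping "available units" as a derived count of the unit list avoids the two fields of Table 1 ever disagreeing.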
3.3 The Page Fault Frequency Replacement algorithm (PFFR)
The Page Fault Frequency Replacement (PFFR) algorithm uses the measured page fault frequency as the basic parameter for the memory allocation decision process. The main aim of PFFR is to prevent thrashing by allocating or deallocating frames as required. We assume that a high page fault frequency indicates that a process is running inefficiently because it is short of page frames. A low page fault frequency, on the other hand, indicates that a further increase in the number of allocated page frames will not considerably improve efficiency and, in fact, might waste memory space. Therefore, to improve system performance (e.g., the space-time product), one or more page frames can be freed.
The basic policy of the PFFR algorithm is:
Whenever the page fault frequency rises above a given critical page fault frequency level P, all referenced pages that are not in main memory (and therefore cause page faults) are brought into main memory without replacing any pages. This increases the number of allocated page frames, which usually reduces the page fault frequency. On the other hand, once the page fault frequency falls below P, page frames may be freed. The same operation is repeated whenever the page fault frequency rises above P again. Once a process is removed from main memory, this information can be used to schedule the process for the next time quantum. In general, a process is put on the processor queue only if there are enough available page frames in the pool. The information about the memory space of each process can also be used to decide which process should be removed from main memory if the page frame pool becomes empty. There are many ways for the supervisor to make use of the information about program behaviour provided by the PFF algorithm.
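The critical-level test above can be expressed as a small decision function. The parameter names and return labels below are illustrative assumptions; the paper specifies only the policy, not an interface.

```python
def pff_decision(time_since_last_fault: float, P: float) -> str:
    """Decide what to do on a page fault under the PFF policy.

    P is the critical page fault frequency; T = 1/P is the
    corresponding critical inter-fault time.
    """
    T = 1.0 / P
    if time_since_last_fault < T:
        # faults arriving faster than P: grow the resident set by
        # bringing the page in without replacing any frame
        return "allocate without replacement"
    # fault frequency at or below P: page frames may be freed
    return "free unreferenced frames"
```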
3.3.1 Implementation of the Page Fault Frequency Replacement Algorithm (PFFR)
The PFFR algorithm is very simple to implement. We need only a clock in the CPU to measure the time between page faults of every process. This clock measures the process (or virtual) time of each process, and the current process time is recorded in the process's state word. The page table entries can be used to determine which pages reside in main memory. On paging systems that have a USE-BIT feature, this bit can be used to determine which pages have been referenced in the interval since the last page fault. Whenever a page fault occurs, the USE-BITs are reset and the supervisor determines whether the process is operating below the critical page fault frequency level P. For this purpose the time of the last page fault has to be stored. If the last page fault occurred more than T = 1/P msec ago, the USE-BITs are used to determine which pages have to be removed from main memory.
3.3.2 Case Study and Experimental Analysis
Memory requirements differ from one process to another, and allocating too few frames to a process may lead to thrashing. To prevent thrashing, the main aim of the PFFR algorithm is to allocate or deallocate frames as required. Table 2 shows the PFFR algorithm's behaviour.
Table 2 PFFR Algorithm Behavior
Algorithm of Page Fault Frequency Replacement (PFFR)
Step 1: Initially set every frame's reference bit to 0.
Step 2: Whenever a page is referenced, set its frame's reference bit.
Step 3: When a page fault occurs, compare the inter-fault time (IFT) with a given threshold.
Step 4: If IFT < threshold, reset all reference bits and allocate a new frame to the process.
Step 5: If IFT ≥ threshold, deallocate all frames whose reference bit is not set, allocate a new frame for the faulting page, and reset all reference bits.
An experimental example with a threshold value of 3 follows. Since the number of frames allocated to a process is dynamic in this algorithm, the analysis uses the mean number of frames in use per reference.
Reference #       1   2   3   4   5   6   7   8   9  10  11  12
Page referenced   1   2   3   4   1   2   5   1   2   3   4   5
Frames            1   1   1   1  *1  *1   1  *1  *1   1   1   1
                      2   2   2   2  *2   2   2  *2   2   2   2
                          3   3   3   3               3   3   3
                              4   4   4                   4   4
                                          5   5   5           5

(* = reference bit set; page faults occur at references 1, 2, 3, 4, 7, 10, 11 and 12)
In the example described above, 39 frames are in use over the 12 page references, so the mean number of frames is 39/12 = 3.25.
Analysis for the above example:
12 page references
8 page faults
Page faults per mean number of frames = 8/3.25 ≈ 2.4615
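The five steps and the trace above can be reproduced with a short simulation. The function below is a sketch under stated assumptions (IFT is counted in references, and the faulting page's bit is left clear after the reset, as in the trace); for the reference string with threshold 3 it recovers the 8 faults and the mean of 39/12 = 3.25 frames.

```python
def pffr_simulate(refs, threshold):
    """Simulate PFFR: return (page_faults, frames_in_use_per_reference)."""
    frames = {}                  # resident page -> reference bit
    faults, last_fault = 0, 0
    in_use = []
    for t, page in enumerate(refs, start=1):
        if page in frames:
            frames[page] = True            # Step 2: set the reference bit
        else:
            faults += 1
            ift = t - last_fault           # Step 3: inter-fault time
            last_fault = t
            if ift >= threshold:
                # Step 5: free every frame whose reference bit is not set
                frames = {p: bit for p, bit in frames.items() if bit}
            frames[page] = False           # allocate a frame for the faulting page
            for p in frames:               # Steps 4/5: reset all reference bits
                frames[p] = False
        in_use.append(len(frames))
    return faults, in_use

faults, in_use = pffr_simulate([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], threshold=3)
# faults == 8, sum(in_use) == 39, mean = 39/12 = 3.25
```

At reference 7 the IFT is 3, so pages 3 and 4 (reference bits clear) are freed while pages 1 and 2 (bits set) survive, exactly as in the trace.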
4. CONCLUSION
This paper surveys grid scheduling based on the studies of various researchers in grid computing, and approaches the problem of grid scheduling through scheduling with Quality of Service (QoS). Many grid scheduling systems lack QoS support when scheduling jobs. Application Level Scheduling (AppLeS) is an adaptive application-level scheduling system that can be applied to the Grid: it measures the performance of an application on a specific site resource and uses this information to make resource selection and scheduling decisions. This paper focuses on the design of a new architecture for AppLeS with a Resource Manager. For AppLeS workloads that may exhibit "thrashing", we proposed the Page Fault Frequency Replacement (PFFR) algorithm, which uses the measured page fault frequency as the basic parameter for the memory allocation decision process. The PFFR algorithm allocates memory according to the dynamically changing memory requirements of each process; it requires no prior knowledge of program behaviour and can be applied to programs of different types and sizes.