In cloud computing, resources are offered as services, so efficient resource utilization depends on task scheduling and load balancing. Quality of service (QoS) is an important measure of the trustworthiness of a cloud, and incorporating QoS into task scheduling also helps address security concerns in cloud computing. This paper studies QoS-based task scheduling algorithms and the parameters they use for scheduling. By comparing the results, the efficiency of each algorithm is measured and its limitations are noted. The efficiency of QoS-based task scheduling algorithms can be improved by considering factors such as the arrival time of a task, the time the task takes to execute on a resource, and the communication cost involved.
This document summarizes various SQL operators and built-in functions. It describes arithmetic, relational, logical, and string operators. It also discusses different types of built-in functions including character, numeric, date, aggregate/group, conversion, and general functions. Examples are provided to demonstrate how each operator and function works.
This document introduces data structures and their classifications. It defines data structure as a structured way of organizing data in a computer so it can be used efficiently. Data structures are classified as simple, linear, and non-linear. Linear structures like arrays, stacks, and queues store elements in a sequence while non-linear structures like trees and graphs have non-sequential relationships. The document discusses common operations on each type and provides examples of different data structures like linked lists, binary trees, and graphs. It concludes by noting data structures should be selected based on the nature of the data and requirements of operations.
A subquery, also known as a nested query or subselect, is a SELECT query embedded within the WHERE or HAVING clause of another SQL query. The data returned by the subquery is used by the outer statement in the same way a literal value would be used. ... In such comparison contexts, a subquery must return only one column.
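As a runnable sketch of the idea, the following uses Python's built-in sqlite3 module; the employees table, its columns, and the data are illustrative assumptions, not taken from the source.

```python
import sqlite3

# In-memory database with a small illustrative table (names are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ann", 50000), ("Bob", 60000), ("Cara", 70000)])

# Subquery in the WHERE clause: the outer query treats the subquery's
# single-column result like a literal value.
rows = conn.execute(
    "SELECT name FROM employees "
    "WHERE salary > (SELECT AVG(salary) FROM employees)"
).fetchall()
print(rows)  # employees earning above the average salary
conn.close()
```

The subquery computes one scalar (the average salary), which the outer WHERE clause then compares against, exactly as it would a hard-coded number.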
For more information visit https://tutsmaster.org/
YouTube Link: https://youtu.be/giJimUEkI7U
**Java, J2EE & SOA Certification Training - https://www.edureka.co/java-j2ee-training-course **
This Edureka PPT provides detailed knowledge of Linked Lists in Java, along with examples that give a deeper understanding of their functionality. It covers the following topics:
What is a Linked List?
Types of Linked Lists
Features of Linked Lists
Methods in Linked Lists
Array v/s Linked List
Complete Java Playlist: http://bit.ly/2XcYNH5
Complete Blog Series: http://bit.ly/2YoabkT
The document discusses modelling and evaluation in machine learning. It defines what models are and how they are selected and trained for predictive and descriptive tasks. Specifically, it covers:
1) Models represent raw data in meaningful patterns and are selected based on the problem and data type, like regression for continuous numeric prediction.
2) Models are trained by assigning parameters to optimize an objective function and evaluate quality. Cross-validation is used to evaluate models.
3) Predictive models predict target values like classification to categorize data or regression for continuous targets. Descriptive models find patterns without targets for tasks like clustering.
4) Model performance can suffer from underfitting if the model is too simple, or from overfitting if it is too complex.
The document discusses linear and non-linear data structures. It defines a data structure as a way of organizing data to be used effectively. Linear data structures like arrays, stacks, queues, and linked lists arrange data sequentially, allowing single traversal. Non-linear structures like trees and graphs arrange data hierarchically, requiring multiple traversals. Linear structures are easier to implement but use memory inefficiently, while non-linear structures use memory efficiently but are harder to implement. Examples and properties of various linear and non-linear data structures are provided.
This document discusses AVL trees, which are height-balanced binary search trees. It defines AVL trees, explains why they are useful by comparing insertion performance to regular binary search trees, and covers balance factors, rotations, and the insertion algorithm. Key points made include that AVL trees have logarithmic time complexity for operations through self-balancing, and maintain an extra balance factor field for each node. Various example questions related to building AVL trees from data are also provided.
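The balance factors and rotations described above can be sketched in a few dozen lines; the following is a minimal AVL insertion in Python (not the slide deck's own code), storing the extra height field per node and applying the four standard rotation cases.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def balance(n):
    # Balance factor: height(left subtree) - height(right subtree).
    return height(n.left) - height(n.right)

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    update(root)
    b = balance(root)
    if b > 1 and key < root.left.key:      # left-left case
        return rotate_right(root)
    if b < -1 and key > root.right.key:    # right-right case
        return rotate_left(root)
    if b > 1:                              # left-right case
        root.left = rotate_left(root.left)
        return rotate_right(root)
    if b < -1:                             # right-left case
        root.right = rotate_right(root.right)
        return rotate_left(root)
    return root

root = None
for k in range(1, 8):   # ascending insertions would degrade a plain BST
    root = insert(root, k)
print(root.key, root.height)  # balanced: root 4, height 3
```

Inserting 1 through 7 in order into a plain BST yields a height-7 chain; the AVL rotations keep the tree at height 3, which is what gives the logarithmic operation cost the summary mentions.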
The program accepts five items from the command line and stores them in a Vector. It then demonstrates deleting an item, adding an item at a specified position, adding an item at the end, and printing the Vector's contents. Vector implements a dynamic array that can hold objects of any type and any number of elements. It is part of the java.util package and is synchronized.
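The same sequence of operations can be sketched with a Python list, which is the closest analogue of Java's Vector (dynamic array, positional insert, append); the five item values below are illustrative, not from the original program.

```python
# A Python list mirrors the Vector operations from the summary:
# delete an item, insert at a position, append at the end, print contents.
items = ["apple", "banana", "cherry", "dates", "elderberry"]  # 5 inputs (illustrative)

items.remove("banana")        # delete an item by value
items.insert(1, "blueberry")  # add an item at a specified position
items.append("fig")           # add an item at the end
print(items)
```

Unlike Java's Vector, a Python list is not synchronized; thread-safe use would need an explicit lock.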
1. The document discusses AVL trees, which are self-balancing binary search trees. It provides examples of inserting values into an initially empty AVL tree, showing the tree after each insertion and any necessary rotations to maintain balance.
2. Deletion from an AVL tree is more complex than insertion, as it may require rotations at each level to restore balance, with a worst case of log2N rotations. The document outlines the deletion procedure and provides an example requiring multiple rotations.
This document discusses and compares the linear search and binary search algorithms. Linear search sequentially compares an element to the elements of a data set until it finds a match, performing about N/2 comparisons on average (O(N)). Binary search works on a sorted data set, comparing the target element to the middle element first and then continuing in either the left or right half, with average complexity O(log N). Binary search is faster but requires the data to be sorted, while linear search works on unsorted data. Examples and pseudocode are provided for both algorithms.
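The contrast between the two algorithms can be shown in a short sketch (the data values are illustrative):

```python
def linear_search(data, target):
    """Compare sequentially; about N/2 comparisons on average."""
    for i, value in enumerate(data):
        if value == target:
            return i
    return -1

def binary_search(data, target):
    """Requires sorted data; O(log N) comparisons."""
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if data[mid] == target:
            return mid
        if data[mid] < target:
            lo = mid + 1        # target is in the right half
        else:
            hi = mid - 1        # target is in the left half
    return -1

data = [3, 9, 14, 21, 27, 33, 42]   # already sorted, as binary search requires
li = linear_search(data, 27)
bi = binary_search(data, 27)
print(li, bi)  # both find index 4
```

On this 7-element list, linear search makes 5 comparisons to find 27, while binary search needs only 3.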
The symbol table is used throughout the compiler to store information about program entities like classes, instances, methods and variables. It has two main components - a name table to uniquely identify names, and an entity table with an entry for each program entity. The main symbol table operations are insert to add a new name, and lookup to find a name. Other functions initialize and finalize scopes when entering or exiting blocks. The symbol table incrementally collects information and transforms the entire program into a table that is used by various compiler phases.
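A minimal sketch of the structure described, assuming a stack-of-dictionaries design; the method names (enter_scope, exit_scope) and the entity fields are illustrative assumptions, not taken from the compiler in question.

```python
# Scoped symbol table: a stack of dicts, with insert/lookup plus
# scope initialization and finalization when entering/exiting blocks.
class SymbolTable:
    def __init__(self):
        self.scopes = [{}]          # global scope

    def enter_scope(self):          # called when entering a block
        self.scopes.append({})

    def exit_scope(self):           # called when exiting a block
        self.scopes.pop()

    def insert(self, name, entity): # add a new name in the current scope
        self.scopes[-1][name] = entity

    def lookup(self, name):         # innermost matching scope wins
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        return None

st = SymbolTable()
st.insert("x", {"kind": "variable", "type": "int"})
st.enter_scope()
st.insert("x", {"kind": "variable", "type": "float"})  # shadows outer x
inner = st.lookup("x")["type"]
st.exit_scope()
outer = st.lookup("x")["type"]
print(inner, outer)  # the inner binding shadows, then the outer one reappears
```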
The document provides an introduction to classification techniques in machine learning. It defines classification as assigning objects to predefined categories based on their attributes. The goal is to build a model from a training set that can accurately classify previously unseen records. Decision trees are discussed as a popular classification technique that recursively splits data into more homogeneous subgroups based on attribute tests. The document outlines the process of building decision trees, including selecting splitting attributes, stopping criteria, and evaluating performance on a test set. Examples are provided to illustrate classification tasks and building a decision tree model.
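Selecting a splitting attribute by homogeneity can be sketched with Gini impurity, one common measure decision-tree learners use; the toy weather rows below are invented for illustration and are not the document's example.

```python
# Gini impurity of a list of class labels: 1 - sum of squared class fractions.
def gini(labels):
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_gini(rows, attr):
    """Weighted Gini of the subgroups produced by splitting on attr."""
    groups = {}
    for row in rows:
        groups.setdefault(row[attr], []).append(row["class"])
    n = len(rows)
    return sum(len(g) / n * gini(g) for g in groups.values())

rows = [
    {"outlook": "sunny", "windy": True,  "class": "no"},
    {"outlook": "sunny", "windy": False, "class": "no"},
    {"outlook": "rainy", "windy": True,  "class": "yes"},
    {"outlook": "rainy", "windy": False, "class": "yes"},
]
# Pick the attribute whose split yields the most homogeneous subgroups.
best = min(["outlook", "windy"], key=lambda a: split_gini(rows, a))
print(best)
```

Here splitting on outlook produces two pure subgroups (weighted Gini 0), while splitting on windy leaves both subgroups mixed, so outlook is chosen.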
Input domain partitioning involves dividing the set of all possible inputs for a program into equivalence classes. This allows proving correctness by testing a finite number of test cases rather than all possible inputs. Key steps are identifying the input domain, equivalence classes, and combining classes while removing infeasible combinations. Interface-based and functionality-based input parameter modeling identify testable components, parameters, and partitions. Boundary value analysis targets errors at partition boundaries. The classification tree method generates test cases by combining representative classes from aspects of interest using combination rules.
Trees are hierarchical data structures composed of nodes connected by edges. A tree has a root node with child nodes below it. Leaf nodes have no children, while internal nodes have children. Binary trees restrict nodes to having 0, 1, or 2 children. Binary search trees organize nodes so that all left descendants of a node are less than or equal to the node and all right descendants are greater than or equal. Common tree operations include insertion, searching, and deletion.
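The binary search tree ordering rule and two of the common operations can be sketched as follows (a minimal illustration, not balanced like an AVL tree):

```python
# Minimal binary search tree: left descendants <= node, right descendants > node.
class BST:
    def __init__(self):
        self.root = None

    def insert(self, key):
        def _ins(node, key):
            if node is None:
                return {"key": key, "left": None, "right": None}
            side = "left" if key <= node["key"] else "right"
            node[side] = _ins(node[side], key)
            return node
        self.root = _ins(self.root, key)

    def search(self, key):
        node = self.root
        while node:
            if key == node["key"]:
                return True
            # The ordering rule lets us discard half the remaining tree.
            node = node["left"] if key < node["key"] else node["right"]
        return False

t = BST()
for k in [8, 3, 10, 1, 6]:
    t.insert(k)
found, missing = t.search(6), t.search(7)
print(found, missing)  # True False
```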
Data mining: Measuring similarity and dissimilarity (Rushali Deshmukh)
The document defines key concepts related to data including:
- Data is a collection of objects and their attributes. An attribute describes a property of an object.
- Attributes can be nominal, ordinal, interval, or ratio scales depending on their properties.
- Similarity and dissimilarity measures quantify how alike or different two objects are based on their attributes.
- Data is organized in a data matrix while dissimilarities are stored in a dissimilarity matrix.
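The data-matrix / dissimilarity-matrix relationship above can be sketched with Euclidean distance as the dissimilarity measure for ratio-scaled attributes (the three objects below are made-up illustrations):

```python
import math

# Dissimilarity for numeric (ratio-scaled) attributes: Euclidean distance.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Data matrix: one row per object, one column per attribute.
data = [(1.0, 2.0), (4.0, 6.0), (1.0, 3.0)]

# Dissimilarity matrix: entry [i][j] is the distance between objects i and j.
# It is symmetric with zeros on the diagonal.
dissim = [[euclidean(a, b) for b in data] for a in data]
for row in dissim:
    print([round(d, 2) for d in row])
```

Identical objects get dissimilarity 0; the further apart two attribute vectors are, the larger the entry.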
Data structures allow for the effective organization and processing of data as a single unit. They involve determining how to logically represent data, choosing a data structure type, and developing operations to apply to the data. Common simple data structures include arrays and structures, while more complex structures include stacks, queues, linked lists, and trees. Key operations on data structures are insertion, deletion, searching, traversal, sorting, and merging.
This document describes binary search and provides an example of how it works. It begins with an introduction to binary search, noting that it can only be used on sorted lists and involves comparing the search key to the middle element. It then provides pseudocode for the binary search algorithm. The document analyzes the time complexity of binary search as O(log n) in the average and worst cases. It notes the advantages of binary search are its efficiency, while the disadvantage is that the list must be sorted. Applications mentioned include database searching and solving equations.
The document discusses SQL Group By, Order By, and Aliases. It explains that the Group By clause groups identical data, follows the WHERE clause, and precedes ORDER BY. ORDER BY sorts data in ascending or descending order specified by ASC or DESC. Aliases can temporarily rename tables or columns for brevity in a SELECT statement.
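The clause ordering described (WHERE before GROUP BY, ORDER BY last) and a column alias can all be shown in one statement; the sales table and its data below are illustrative assumptions.

```python
import sqlite3

# Illustrative sales table demonstrating GROUP BY, ORDER BY ... DESC,
# and a column alias in a single SELECT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("west", 250), ("east", 50), ("west", 75)])

rows = conn.execute(
    "SELECT region, SUM(amount) AS total "   # alias renames the computed column
    "FROM sales "
    "WHERE amount > 0 "                      # WHERE precedes GROUP BY
    "GROUP BY region "                       # groups identical region values
    "ORDER BY total DESC"                    # ORDER BY comes last; DESC sorts descending
).fetchall()
print(rows)
conn.close()
```

The alias `total` is then usable in the ORDER BY clause, which keeps the statement brief.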
This document provides an overview of classification techniques. It defines classification as assigning records to predefined classes based on their attribute values. The key steps are building a classification model from training data and then using the model to classify new, unseen records. Decision trees are discussed as a popular classification method that uses a tree structure with internal nodes for attributes and leaf nodes for classes. The document covers decision tree induction, handling overfitting, and performance evaluation methods like holdout validation and cross-validation.
The document discusses regular expressions (regex), including what they are, common operations used in regex like concatenation and Kleene closure, how to convert regular expressions to nondeterministic finite automata (NFA), the differences between deterministic and nondeterministic finite automata, and some examples of using regex in programming and tools.
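The operations named above (concatenation, alternation, Kleene closure) can be demonstrated with Python's standard re module:

```python
import re

# Concatenation and Kleene closure: 'a', then zero or more 'b', then 'c'.
pattern = re.compile(r"ab*c")
matches = [bool(pattern.fullmatch(s)) for s in ("ac", "abbbc", "abd")]
print(matches)   # 'ac' matches with zero b's; 'abd' does not match

# Alternation: cat|dog matches either word.
found = re.findall(r"cat|dog", "the cat chased the dog")
print(found)
```

A regex engine typically compiles such a pattern into a finite automaton internally, which is the NFA construction the document describes.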
Integrity constraints are rules that help maintain data quality and consistency in a database. The main types of integrity constraints are:
1. Domain constraints specify valid values and data types for attributes to restrict what data can be entered.
2. Entity constraints require that each row have a unique identifier and prevent null values in primary keys.
3. Referential integrity constraints maintain relationships between tables by preventing actions that would invalidate links between foreign and primary keys.
4. Cascade rules extend referential integrity by automatically propagating updates or deletes from a primary table to its related tables.
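Constraint types 3 and 4 can be demonstrated with SQLite via Python's sqlite3 module (the dept/emp schema is an illustrative assumption; note that SQLite requires foreign-key enforcement to be switched on per connection):

```python
import sqlite3

# Referential integrity with ON DELETE CASCADE in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # enforcement is off by default
conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE emp (
    id INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES dept(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO dept VALUES (1, 'R&D')")
conn.execute("INSERT INTO emp VALUES (10, 1)")

# Referential integrity: a row pointing at a missing department is rejected.
try:
    conn.execute("INSERT INTO emp VALUES (11, 99)")
    violated = False
except sqlite3.IntegrityError:
    violated = True

# Cascade rule: deleting the department removes its employees automatically.
conn.execute("DELETE FROM dept WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(violated, remaining)  # True 0
conn.close()
```

The PRIMARY KEY declarations also illustrate the entity constraint: they give each row a unique, non-null identifier.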
The document discusses hashing techniques for data structures. It describes how hashing is used to store and retrieve records from a hash table using a key and hash function. When two keys hash to the same location (collision), different collision resolution strategies can be used like open addressing, separate chaining, and bucket hashing. Open addressing methods like linear probing and quadratic probing search for the next empty location to store collided records. Separate chaining stores collided records in linked lists at hash table locations.
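Separate chaining, one of the collision-resolution strategies listed, can be sketched in a few lines (a minimal illustration, not a production hash table):

```python
# Hash table with separate chaining: keys that hash to the same bucket
# share a list of (key, value) pairs at that location.
class ChainedHashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: update in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (or collision): extend the chain

    def get(self, key):
        for k, v in self._bucket(key):   # walk the chain at this location
            if k == key:
                return v
        return None

table = ChainedHashTable(size=2)          # tiny table to force collisions
for k, v in [("a", 1), ("b", 2), ("c", 3)]:
    table.put(k, v)
print(table.get("b"), table.get("zzz"))  # 2 None
```

With three keys and two buckets, at least one chain must hold multiple entries, yet lookups still succeed because each chain is searched linearly.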
The document discusses various tree data structures and algorithms related to binary trees. It begins with an introduction to different types of binary trees such as strict binary trees, complete binary trees, and extended binary trees. It then covers tree traversal algorithms including preorder, inorder and postorder traversal. The document also discusses representations of binary trees using arrays and linked lists. Finally, it explains algorithms for operations on binary search trees such as searching, insertion, deletion and rebalancing through rotations in AVL trees.
Machine learning models involve a bias-variance tradeoff, where increased model complexity can lead to overfitting training data (high variance) or underfitting (high bias). Bias measures how far model predictions are from the correct values on average, while variance captures differences between predictions on different training data. The ideal model has low bias and low variance, accurately fitting training data while generalizing to new examples.
This document introduces the Seaborn library for statistical data visualization in Python. It discusses how Seaborn builds on Matplotlib and Pandas to provide higher-level visualization functions. Specifically, it covers using distplot to create histograms and kernel density estimates, regplot for scatter plots and regression lines, and lmplot for faceted scatter plot grids. Examples are provided to illustrate customizing distplot, combining different plot elements, and using faceting controls in lmplot.
Multi-objective load balancing in cloud infrastructure through fuzzy based de... (IAESIJAI)
Cloud computing has become a popular technology that influences not only product development but also makes technology businesses easier to run. Services such as infrastructure, platform, and software reduce the complexity of the technology requirements of any ecosystem. As the number of users of cloud-based services grows, the complexity of the back-end technologies also increases. Heterogeneous user requirements for various configurations create different load-imbalance issues, so effective load balancing in a cloud system, with respect to time and space, becomes crucial, as imbalance adversely affects system performance. Since user requirements and expected performance are multi-objective, decision-making tools such as fuzzy logic yield good results, because they embed human procedural knowledge in decision making. Overall system performance can be further improved by dynamic resource scheduling using an optimization technique such as a genetic algorithm.
This document discusses different algorithms for task scheduling in cloud computing environments based on various quality of service (QoS) parameters. It summarizes several QoS-based scheduling algorithms including QDA, Improved Cost Based, PAPRIKA, ANT Colony, CMultiQoSSchedule, and SHEFT Workflow. It also provides a comparative table of these algorithms and discusses the various metrics considered by QoS-based scheduling algorithms like time, cost, makespan, trust, and resource utilization. The paper concludes that scheduling is an important factor for cloud environments and that existing algorithms can be improved by considering additional parameters like trust values, execution rates, and success rates.
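The metrics listed (time, cost, and so on) can be combined into a weighted score; the sketch below is a hypothetical greedy QoS scheduler written for illustration, not any of the named algorithms, and all weights, resource figures, and the startup-cost term are assumptions.

```python
# Hypothetical greedy QoS scheduler: score each (task, resource) pairing by a
# weighted sum of completion time and cost, then pick the cheapest-scoring
# resource per task. All numbers below are illustrative assumptions.
resources = [
    {"name": "vm-small", "speed": 1.0, "cost_per_sec": 0.01, "startup_cost": 0.0},
    {"name": "vm-large", "speed": 4.0, "cost_per_sec": 0.05, "startup_cost": 5.0},
]
tasks = [{"name": "t1", "length": 8.0}, {"name": "t2", "length": 2.0}]

W_TIME, W_COST = 0.7, 0.3   # QoS preference: time weighted above cost

def score(task, res):
    time = task["length"] / res["speed"]               # execution time estimate
    cost = time * res["cost_per_sec"] + res["startup_cost"]
    return W_TIME * time + W_COST * cost

plan = {t["name"]: min(resources, key=lambda r: score(t, r))["name"]
        for t in tasks}
print(plan)  # long task justifies the fast VM's startup cost; short one does not
```

Real QoS schedulers add further terms (trust values, makespan, resource utilization, success rates), but the structure is the same: fold the QoS metrics into an objective and optimize it per assignment.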
Differentiating Algorithms of Cloud Task Scheduling Based on various Parameters (iosrjce)
Cloud computing is a new design structure for large, distributed data centers. A cloud computing system promises end users a "pay-as-you-go" model. To meet users' expected quality requirements, cloud computing needs to offer differentiated services, and QoS differentiation is very important for satisfying different users with different QoS requirements. In this paper, various QoS-based scheduling algorithms, their scheduling parameters, and the future scope of the discussed algorithms are studied. The paper summarizes various cloud scheduling algorithms, their findings, scheduling factors, the type of scheduling, and the parameters considered.
This document summarizes and compares various scheduling algorithms used in cloud computing environments. It begins with an introduction to cloud computing and the need for scheduling algorithms in cloud environments. It then describes several existing scheduling algorithms, including compromised-time-cost scheduling, particle swarm optimization-based heuristic, improved cost-based algorithm, resource-aware scheduling, innovative transaction intensive cost-constraint scheduling, scalable heterogeneous earliest-finish-time algorithm, and multiple QoS constrained scheduling strategy of multi-workflows. These algorithms aim to optimize metrics such as execution time, cost, deadline, load balancing, and quality of service. The document concludes by comparing the different scheduling strategies.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
Score based deadline constrained workflow scheduling algorithm for cloud systems (ijccsa)
Cloud Computing is the latest and emerging trend in the information technology domain. It offers
utility-based IT services to users over the Internet. Workflow scheduling is one of the major
problems in cloud systems. A good scheduling algorithm must minimize the execution time and cost
of a workflow application while meeting the QoS requirements of the user. In this paper we
consider deadline as the major constraint and propose a score based deadline constrained workflow
scheduling algorithm that executes a workflow at manageable cost while meeting a user defined
deadline constraint. The algorithm uses the concept of a score that represents the capabilities of
hardware resources. This score value is used while allocating resources to the various tasks of a
workflow application. The algorithm allocates to the workflow application those resources that are
reliable, reduce the execution cost, and complete the application within the user specified
deadline. Experimental results show that the score based algorithm exhibits lower execution time
and also reduces the failure rate of workflow applications at manageable cost. All simulations
were done using the CloudSim toolkit.
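The score-driven allocation described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual algorithm: `score` stands in for a resource's processing capability, and the selection rule (cheapest resource whose score meets the deadline) is an assumption consistent with the summary:

```python
def pick_resource(task_len, resources, deadline):
    """Choose the cheapest resource whose score (capability) lets the task
    finish within the deadline; fall back to the most capable resource.

    task_len  : work in the task (e.g. million instructions)
    resources : list of dicts with "score" (work units/time) and "cost"
    """
    feasible = [r for r in resources if task_len / r["score"] <= deadline]
    if not feasible:
        # No resource meets the deadline: best effort on the highest score.
        return max(resources, key=lambda r: r["score"])
    return min(feasible, key=lambda r: r["cost"])
```

With a tight deadline only the fast, expensive resource qualifies; with a looser one the cheaper resource is preferred, which matches the paper's stated goal of manageable cost under a deadline.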
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING TO OPTIMIZE RESPONSE TIME (ijccsa)
To improve the performance of cloud computing, many parameters and issues must be considered, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them.
Users can easily measure the response time of their requests, so it has become one of the important QoS metrics. Explored further, response time can drive solutions for the distribution and load balancing of resources with better efficiency, making it one of the most promising
research directions for improving cloud technology. This paper therefore proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict upcoming response times, giving a better way of resolving resource allocation against a threshold value. The experimental
results are promising for load balancing with predicted response times and show that prediction is a useful direction for load balancing.
A Novel Dynamic Priority Based Job Scheduling Approach for Cloud Environment (IRJET Journal)
The document proposes a new dynamic priority-based job scheduling algorithm for cloud environments to mitigate the problem of starvation. It assigns priority to jobs based on criteria such as CPU requirements, I/O requirements, and job criticality. The algorithm aims to reduce wait time and turnaround time and to increase throughput and CPU utilization. It was tested against the Shortest Job First (SJF) algorithm in the CloudSim simulation software. The results showed improvements in wait time, turnaround time, and total finish time compared to SJF.
Cloud computing is the fastest emerging technology and a novel buzzword in the IT domain; it offers distinct services and applications and focuses on providing sustainable, reliable, scalable, and virtualized resources to its consumers. The main aim of cloud computing is to enhance the use of distributed resources to achieve higher throughput and resource utilization in large-scale computation problems. Scheduling affects the efficiency of the cloud and plays a significant role in creating a high-performance environment. The Quality of Service (QoS) requirements of a user application define the scheduling of resources. A number of researchers have tried to solve these scheduling problems using different QoS-based scheduling techniques. In this paper, a detailed analysis of resource scheduling methodology is presented; different types of scheduling based on soft computing techniques, their comparisons, benefits, and results are discussed. The major findings of this paper help researchers decide on a suitable approach for scheduling users’ applications considering their QoS requirements.
Cloud service analysis using round-robin algorithm for quality-of-service awar... (IJECEIAES)
Round-robin (RR) is an approach to sharing resources in cloud computing in which each user gets a turn using them in an agreed order. It is suited to time-sharing systems since it automatically reduces the problem of priority inversion, in which low-priority tasks are delayed. The time quantum is limited, and only one time quantum per turn is allowed in round-robin scheduling. The objective of this research is to improve the functionality of the current RR method for scheduling actions in the cloud by lowering the average waiting, turnaround, and response times. The CloudAnalyst tool was used to enhance the RR technique by changing parameter values to optimize for high accuracy and low cost. The results show overall minimum and maximum response times of 36.69 and 650.30 ms for a 300-minute RR run. The cost for the virtual machines (VMs) ranges from $0.5 to $3; the longer the time used, the higher the cost of the data transfer. This research is significant for improving communication and the quality of relationships within groups.
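The quantum-based turn taking and the waiting/turnaround metrics that the summary discusses can be sketched as follows. This is an illustrative simulation (all tasks assumed to arrive at time zero), not the paper's CloudAnalyst setup:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling of CPU bursts with a fixed quantum.
    Returns (waiting_times, turnaround_times), one entry per task."""
    n = len(burst_times)
    remaining = list(burst_times)
    finish = [0] * n
    t = 0
    queue = deque(range(n))          # ready queue in agreed order
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)          # not done: back of the queue
        else:
            finish[i] = t            # done: record completion time
    turnaround = finish              # arrival time is 0 for every task
    waiting = [turnaround[i] - burst_times[i] for i in range(n)]
    return waiting, turnaround
```

For bursts of 5, 3, and 8 time units with a quantum of 2, the simulation gives waiting times [7, 6, 8] and turnaround times [12, 9, 16], illustrating how no single task monopolizes the resource.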
Effective and Efficient Job Scheduling in Grid Computing (Aditya Kokadwar)
The integration of remote and diverse resources and the increasing computational needs of Grand Challenge problems, combined with the rapid growth of the internet and communication technologies, have led to the development of global computational grids. Grid computing is a prevailing technology that unites underutilized resources to support the sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential to achieve the promising capacity of grids. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate resources with minimum cost, merging this with a conventional job grouping strategy to provide better and more efficient job scheduling that benefits both the user and the resource broker. The proposed scheduling approach employs a dynamic cost-based job scheduling algorithm for efficiently mapping jobs to available resources in the grid. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
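The job grouping strategy mentioned above can be illustrated with a short sketch. This is a hedged reconstruction: the `granularity` cap (total work a group may carry) and the greedy packing order are assumptions, not the paper's exact method:

```python
def group_jobs(job_lengths, granularity):
    """Pack small jobs, in order, into groups whose total length stays within
    `granularity`, so one group is sent per dispatch instead of many tiny
    jobs; fewer dispatches improve the communication-to-computation ratio."""
    groups, current, size = [], [], 0
    for j in job_lengths:
        if size + j > granularity and current:
            groups.append(current)   # current group is full: close it
            current, size = [], 0
        current.append(j)
        size += j
    if current:
        groups.append(current)       # flush the last partial group
    return groups
```

For example, jobs of length 3, 4, 5, and 2 with a granularity of 7 pack into two groups, [3, 4] and [5, 2], halving the number of dispatches.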
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ... (IRJET Journal)
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
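The ETC-matrix-and-wait-time loop described above can be sketched concretely. This is an illustrative reconstruction of a TBA-style allocator, not the paper's code; sorting longest-first is an assumption (the summary says only "sorts tasks by length"):

```python
def tba_schedule(task_lengths, etc):
    """Sketch of a Task Based Allocation (TBA) style scheduler.

    task_lengths[i] : length of task i (e.g. million instructions)
    etc[i][j]       : expected time to complete task i on VM j
    Returns (assignment, makespan)."""
    n_vms = len(etc[0])
    wait = [0.0] * n_vms                 # accumulated wait time per VM
    # Consider tasks longest-first, per the sort-by-length step.
    order = sorted(range(len(task_lengths)),
                   key=lambda i: task_lengths[i], reverse=True)
    plan = {}
    for i in order:
        # Pick the VM minimizing this task's completion time (wait + ETC).
        j = min(range(n_vms), key=lambda v: wait[v] + etc[i][v])
        plan[i] = j
        wait[j] += etc[i][j]             # update that VM's wait time
    return plan, max(wait)               # makespan = latest VM finish
```

On a toy ETC matrix of three tasks and two VMs, the allocator spreads the load so the makespan is the larger of the two VM finish times rather than the sum of all task times.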
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A... (IRJET Journal)
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
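The queue-splitting idea (small tasks to slower resources, large tasks to faster ones) can be sketched as follows. This is a hedged reconstruction of the MQ idea, not the paper's exact algorithm; the median split and round-robin within each queue are assumptions:

```python
def mq_schedule(task_lengths, vm_speeds):
    """Sketch of a Multi Queue (MQ) style scheduler: tasks and VMs are each
    split into two queues; short tasks go to slower VMs and long tasks to
    faster ones, so big jobs do not queue behind small ones."""
    tasks = sorted(range(len(task_lengths)), key=lambda i: task_lengths[i])
    vms = sorted(range(len(vm_speeds)), key=lambda j: vm_speeds[j])
    small, large = tasks[:len(tasks) // 2], tasks[len(tasks) // 2:]
    slow, fast = vms[:len(vms) // 2], vms[len(vms) // 2:]
    slow = slow or fast                  # degenerate case: only one VM class
    plan = {}
    for k, i in enumerate(small):
        plan[i] = slow[k % len(slow)]    # round-robin within the slow queue
    for k, i in enumerate(large):
        plan[i] = fast[k % len(fast)]    # round-robin within the fast queue
    return plan
```

With four tasks and two VMs, the two shortest tasks land on the slower VM and the two longest on the faster one, which is the load-balancing behavior the comparison with Round Robin measures.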
Scheduling Algorithm Based Simulator for Resource Allocation Task in Cloud Co... (IRJET Journal)
This document proposes a scheduling algorithm for allocating resources in cloud computing based on the Project Evaluation and Review Technique (PERT). It aims to address issues like starvation of lower priority tasks. The algorithm models task allocation as a directed acyclic graph and uses PERT to schedule critical and non-critical tasks, prioritizing higher priority tasks. The algorithm is evaluated against other scheduling methods and shows improvements in reducing completion time and optimizing resource allocation for all tasks.
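The PERT-on-a-DAG idea can be made concrete with a small critical-path sketch. This is illustrative only: it computes the critical path length for a task graph, which PERT-based schedulers use to tell critical tasks from non-critical ones; the data layout is an assumption:

```python
def critical_path_length(tasks):
    """tasks = {name: (duration, [dependency names])}, a DAG.
    Returns the length of the critical path, i.e. the earliest possible
    finish time of the whole task set."""
    finish = {}

    def ft(name):
        # Earliest finish = own duration + latest finish among dependencies.
        if name not in finish:
            dur, deps = tasks[name]
            finish[name] = dur + max((ft(d) for d in deps), default=0)
        return finish[name]

    return max(ft(t) for t in tasks)
```

Tasks whose earliest finish equals the critical path length lie on the critical path and get priority; the remaining (non-critical) tasks have slack and can be deferred without delaying completion, which is how the proposed algorithm avoids starving them outright.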
QoS aware scientific application scheduling algorithm in cloud environment (Alexander Decker)
The document describes a QoS-aware scientific application scheduling algorithm for cloud environments. It proposes an algorithm that ranks tasks in a workflow and uses a user preference fitness function to select resources based on the user's desired quality of service, such as time and cost. The algorithm is compared to other similar works through several scenarios, and results show the proposed algorithm has better efficiency. Key aspects considered include task dependencies, data sizes, compute times, data transfer times, workflow makespan, resource costs and attributes.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID (ijgca)
Grid computing is an accumulation of heterogeneous, dynamic resources from multiple administrative areas, geographically distributed, that can be utilized to reach a common end. Resource provisioning-based scheduling in large-scale distributed environments such as grid computing brings in new requirement challenges that are not considered in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to resources that complete it; naive assignment cannot satisfy user requirements adequately. Satisfying users while providing resources can also increase the benefit to resource suppliers. Resource scheduling has to satisfy the multiple constraints specified by the user, and choosing a resource that satisfies multiple constraints is a tedious process. This problem is addressed by a particle swarm optimization based heuristic scheduling algorithm that attempts to select the most suitable resource from the set of available resources. The primary parameters taken in this work for selecting the most suitable resource are makespan and cost. Experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
Scheduling Divisible Jobs to Optimize the Computation and Energy Costs (inventionjournals)
An important challenge in cloud computing environments is designing a scheduling strategy to handle jobs and process them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives a closed-form solution for the load fractions to be assigned to each machine, considering computation and energy cost. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit from the service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
A Review on Scheduling in Cloud Computing (ijujournal)
Cloud computing provides software, infrastructure, and platform as a service to clients
on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and
correctness in task completion. Scheduling in the cloud environment enables the various
cloud services to support framework implementation. This survey covers a wide range of
scheduling algorithms in cloud computing environments, including workflow scheduling
and grid scheduling, and gives an elaborate idea of grid, cloud, and workflow scheduling
aimed at minimizing energy cost and improving the efficiency and throughput of the system.
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
This study Examines the Effectiveness of Talent Procurement through the Imple...DharmaBanothu
In the world with high technology and fast
forward mindset recruiters are walking/showing interest
towards E-Recruitment. Present most of the HRs of
many companies are choosing E-Recruitment as the best
choice for recruitment. E-Recruitment is being done
through many online platforms like Linkedin, Naukri,
Instagram , Facebook etc. Now with high technology E-
Recruitment has gone through next level by using
Artificial Intelligence too.
Key Words : Talent Management, Talent Acquisition , E-
Recruitment , Artificial Intelligence Introduction
Effectiveness of Talent Acquisition through E-
Recruitment in this topic we will discuss about 4important
and interlinked topics which are
A high-Speed Communication System is based on the Design of a Bi-NoC Router, ...DharmaBanothu
The Network on Chip (NoC) has emerged as an effective
solution for intercommunication infrastructure within System on
Chip (SoC) designs, overcoming the limitations of traditional
methods that face significant bottlenecks. However, the complexity
of NoC design presents numerous challenges related to
performance metrics such as scalability, latency, power
consumption, and signal integrity. This project addresses the
issues within the router's memory unit and proposes an enhanced
memory structure. To achieve efficient data transfer, FIFO buffers
are implemented in distributed RAM and virtual channels for
FPGA-based NoC. The project introduces advanced FIFO-based
memory units within the NoC router, assessing their performance
in a Bi-directional NoC (Bi-NoC) configuration. The primary
objective is to reduce the router's workload while enhancing the
FIFO internal structure. To further improve data transfer speed,
a Bi-NoC with a self-configurable intercommunication channel is
suggested. Simulation and synthesis results demonstrate
guaranteed throughput, predictable latency, and equitable
network access, showing significant improvement over previous
designs
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Transcat
Join us for this solutions-based webinar on the tools and techniques for commissioning and maintaining PV Systems. In this session, we'll review the process of building and maintaining a solar array, starting with installation and commissioning, then reviewing operations and maintenance of the system. This course will review insulation resistance testing, I-V curve testing, earth-bond continuity, ground resistance testing, performance tests, visual inspections, ground and arc fault testing procedures, and power quality analysis.
Fluke Solar Application Specialist Will White is presenting on this engaging topic:
Will has worked in the renewable energy industry since 2005, first as an installer for a small east coast solar integrator before adding sales, design, and project management to his skillset. In 2022, Will joined Fluke as a solar application specialist, where he supports their renewable energy testing equipment like IV-curve tracers, electrical meters, and thermal imaging cameras. Experienced in wind power, solar thermal, energy storage, and all scales of PV, Will has primarily focused on residential and small commercial systems. He is passionate about implementing high-quality, code-compliant installation techniques.
Determination of Equivalent Circuit parameters and performance characteristic...pvpriya2
Includes the testing of induction motor to draw the circle diagram of induction motor with step wise procedure and calculation for the same. Also explains the working and application of Induction generator
Accident detection system project report.pdfKamal Acharya
The Rapid growth of technology and infrastructure has made our lives easier. The
advent of technology has also increased the traffic hazards and the road accidents take place
frequently which causes huge loss of life and property because of the poor emergency facilities.
Many lives could have been saved if emergency service could get accident information and
reach in time. Our project will provide an optimum solution to this draw back. A piezo electric
sensor can be used as a crash or rollover detector of the vehicle during and after a crash. With
signals from a piezo electric sensor, a severe accident can be recognized. According to this
project when a vehicle meets with an accident immediately piezo electric sensor will detect the
signal or if a car rolls over. Then with the help of GSM module and GPS module, the location
will be sent to the emergency contact. Then after conforming the location necessary action will
be taken. If the person meets with a small accident or if there is no serious threat to anyone’s
life, then the alert message can be terminated by the driver by a switch provided in order to
avoid wasting the valuable time of the medical rescue team.
2. IJECE ISSN: 2088-8708
Quality of Service Based Task Scheduling Algorithms in Cloud Computing (Sirisha Potluri)
1089
2. QoS BASED TASK SCHEDULING ALGORITHMS
2.1. QoS guided Min-Min heuristic for Grid task scheduling
In grid computing, a task scheduling algorithm should address issues such as security, QoS, and central control over data in order to obtain high throughput from the system. As shown in Figure 1, this algorithm is QoS-guided and based on general adaptive scheduling characteristics. A task may or may not have a QoS requirement: a task with no QoS requirement can be executed on any resource, while a task with a high QoS requirement can only be executed on a resource offering high QoS. If a task with a low QoS requirement is scheduled on a high-QoS resource, the low-QoS resources remain idle [7].
QoS affects the effectiveness of the computing environment. For example, assume the network has high bandwidth and the scheduler assigns it a task that does not require high bandwidth; meanwhile, the tasks that do require high bandwidth have to wait in the queue. Considering QoS factors in scheduling therefore gives a better scheduling algorithm [8]. In this algorithm, instead of mapping tasks to hosts arbitrarily, the tasks with high QoS requirements are mapped first.
Figure 1. QoS guided Min-Min heuristic for Grid task scheduling
Results: Parameters considered: make span and expected time of completion. Advantage: reduced the make span and used the bandwidth parameter. Disadvantage: poor load balancing, and further QoS factors are not considered.
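The mapping idea in this heuristic can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the task/resource encoding, the `etc` completion-time matrix, and all names are assumptions, and it presumes every high-QoS task has at least one high-QoS resource available.

```python
# Hypothetical sketch of QoS-guided Min-Min: high-QoS tasks are mapped first,
# and each task goes to the resource giving its minimum completion time.
def qos_guided_min_min(tasks, resources, etc):
    """tasks: list of (task_id, needs_qos); resources: list of (res_id, has_qos);
    etc[task_id][res_id]: expected time to complete the task on the resource."""
    ready = {r_id: 0.0 for r_id, _ in resources}   # time each resource is free
    schedule = {}
    # Map high-QoS tasks first, then the rest (the core of the heuristic).
    for want_qos in (True, False):
        pending = [t for t, q in tasks if q == want_qos and t not in schedule]
        while pending:
            # Min-Min: pick the task whose best completion time is smallest.
            best = None
            for t in pending:
                for r_id, has_qos in resources:
                    if want_qos and not has_qos:
                        continue                   # a QoS task needs a QoS resource
                    ct = ready[r_id] + etc[t][r_id]
                    if best is None or ct < best[0]:
                        best = (ct, t, r_id)
            ct, t, r_id = best
            schedule[t] = r_id
            ready[r_id] = ct
            pending.remove(t)
    return schedule, max(ready.values())           # mapping and make span
```

Note how a low-QoS task is free to land on a high-QoS resource here, which is exactly the idle-resource effect the paragraph above describes.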
2.2. Job scheduling algorithm based on Berger model in cloud computing
Cloud computing is a combination of parallel and grid computing. Virtualization hides the differences between the physical devices present in the cloud. The main entities in cloud computing are users, resource providers, and the scheduling system [9]. The scheduling factor for this algorithm is the fairness constraint [10].
1. QoS-based task classification: the users' tasks can be classified based on QoS parameters such as completion time and bandwidth.
2. Fairness constraint: the fairness of resources makes the cloud provide reasonable resources that are available to execute the users' tasks.
3. General expectations constraint: tasks require resources to complete. Because of the different characteristics of the users' applications, tasks have QoS preferences. In the local structure, the selection of resources can be optimized based on the general expectations.
4. Description of tasks and resources: cloud computing uses virtualization to use the resources. Scheduling in cloud computing is implemented in the application layer and the virtual machine layer; scheduling maps a task to a resource.
5. Task and resource mapping: keeping the QoS parameters in mind, the users' tasks are mapped to the resources. The ratio of the expectations of resources to the actual allocations gives the justice.
6. Completion time: the completion time of a task is the sum of its waiting time, execution time, and sending time.
7. Bandwidth: it is very useful in applications where frequent communication happens.
8. Integrated general expectation: if a task has many QoS requirements, the integrated general expectation is suitable.
As shown in Figure 2, the algorithm is stated as follows:
1. According to the QoS classification, the general expectation constraints of the tasks are used to establish the fairness constraints for selecting and allocating a resource.
3. ISSN: 2088-8708
IJECE Vol. 7, No. 2, April 2017 : 1088 – 1095
1090
2. The virtual machine selects the better resource to run the task by taking the parameterized task characteristics and the general expectation constraint into account.
3. Calculate user satisfaction and adjust the model accordingly.
Figure 2. Job scheduling based on Berger model
Results: Parameters considered: QoS, fairness, completion time. Advantage: improved the performance and task execution. Disadvantage: rescheduling tasks at each level increases the complexity of the algorithm and takes more time.
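Two quantities from the list above can be made concrete: a task's completion time as the sum of waiting, execution, and sending time, and the justice of an allocation as the ratio between what the user expected and what was actually delivered. This is a minimal illustrative sketch; all names are assumptions, not the Berger-model paper's notation.

```python
# Completion time as defined in item 6 above.
def completion_time(waiting, execution, sending):
    return waiting + execution + sending

# Justice as the ratio of actual allocation to expectation (item 5);
# a value below 1.0 means the task finished faster than the user expected.
def justice(expected_time, actual_time):
    return actual_time / expected_time

t = completion_time(waiting=2.0, execution=5.0, sending=1.0)   # 8.0
j = justice(expected_time=10.0, actual_time=t)                 # 0.8
```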
2.3. Improved cost based algorithm for task scheduling in cloud computing
This algorithm is useful for measuring computation performance and the cost of resources. Virtual machines are used to run the applications because the resources are distributed virtually in cloud computing. Some applications may require more CPU time, and some may require more memory to store data. Resources are used to perform each task; to measure the cost of an application, the CPU cost, input cost, output cost, and memory cost are required [11]. Using the Customer Facilitated Cost-based Scheduling (CFCSC) algorithm, we can balance load and cost. This algorithm uses a cost function to reduce input, output, and monetary costs [12]. As shown in Figure 3, the algorithm can be stated as follows:
Let Ti be a set of n independent tasks and Rj a set of m computing resources, where i = {1, 2, 3, ..., n} and j = {1, 2, 3, ..., m}. The processing capacity of a resource is measured in MIPS and the size of a task in MI. The total computing time is calculated as TTtottime(R) = TTexe(R) + TTcommtime(R), where TTtottime(R) is the total time, TTexe(R) is the total computation time of all resources, and TTcommtime(R) is the total communication time.
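The timing model above reduces to a few lines: execution time is task size in MI divided by resource capacity in MIPS, and total time adds the communication time. This is a sketch of the formula only, with illustrative variable names.

```python
# TT_exe: execution time of a task on a resource (MI / MIPS gives seconds
# when MI is million instructions and MIPS is million instructions per second).
def execution_time(task_mi, resource_mips):
    return task_mi / resource_mips

# TT_tot(R) = TT_exe(R) + TT_comm(R), as stated above.
def total_time(task_mi, resource_mips, comm_time):
    return execution_time(task_mi, resource_mips) + comm_time
```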
Figure 3. Improved cost based algorithm for task scheduling in cloud computing
4. IJECE ISSN: 2088-8708
Quality of Service Based Task Scheduling Algorithms in Cloud Computing (Sirisha Potluri)
1091
Results: Parameters considered: cost, make span. Advantage: improved the computation and reduced resource cost. Disadvantage: this algorithm does not consider the dynamic cloud environment and mainly focuses on cost.
2.4. RASA
The RASA algorithm considers the distribution and scalability characteristics of grid resources [13]. It takes the advantages of the Min-min and Max-min task scheduling algorithms while covering their disadvantages. As shown in Figure 4, the algorithm calculates the completion time of the tasks on all available resources. It then applies the Max-min and Min-min algorithms alternately: Min-min for small tasks and Max-min for large ones, to avoid delays in the execution of large tasks [14]. The algorithm builds a matrix M, where Mij represents the completion time of task Ti on resource Rj.
Figure 4. RASA
Results: Parameters considered: make span. Advantage: reduced the make span. Disadvantage: more emphasis could be given to QoS attributes, and the algorithm should consider heterogeneous environments.
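The alternation at the heart of RASA can be sketched as follows. This is a simplified reconstruction under stated assumptions, not the published code: the completion-time matrix layout, the strict per-task alternation, and tie-breaking are all illustrative choices.

```python
# Sketch of RASA's alternation: build the completion-time matrix M
# (etc[i][j] = execution time of task i on resource j) and assign tasks by
# alternating Min-min and Max-min rounds.
def rasa(etc):
    n_tasks, n_res = len(etc), len(etc[0])
    ready = [0.0] * n_res                    # time each resource becomes free
    unassigned = set(range(n_tasks))
    schedule = {}
    use_min_min = True                       # alternate heuristics each round
    while unassigned:
        # best completion time and resource for every unassigned task
        best = {t: min((ready[j] + etc[t][j], j) for j in range(n_res))
                for t in unassigned}
        # Min-min takes the task with the smallest best time (small tasks first);
        # Max-min takes the task with the largest best time (large tasks first).
        pick = min if use_min_min else max
        t = pick(best, key=lambda t: best[t][0])
        ct, j = best[t]
        schedule[t] = j
        ready[j] = ct
        unassigned.remove(t)
        use_min_min = not use_min_min
    return schedule, max(ready)              # mapping and make span
```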
2.5. A QoS based Predictive Max-Min, Min-Min switcher algorithm for job scheduling in a grid
The history information about the execution of tasks is taken into consideration to predict the performance of the available resources. As shown in Figure 5, this algorithm selects the better of QoS Min-min and QoS Max-min by taking the length of the jobs into consideration. Based on this decision, the tasks with high QoS are mapped first, then the jobs with low QoS. The algorithm calculates the standard deviation of the completion times of all unassigned jobs. It then identifies the position where the difference between the completion times of two consecutive jobs exceeds the standard deviation. If such a position exists in the first half of the sorted jobs, Min-min outperforms Max-min, so Min-min is selected to map the jobs. If the position lies in the second half, Max-min outperforms Min-min, so Max-min is selected. If no such position exists, the standard deviation is compared with a threshold value: if it is less, Min-min is used; otherwise Max-min is used [15]. The improved meta-tasking algorithm in grid computing schedules tasks based on the sufferage value; depending on this value, the Min-min or Max-min algorithm is applied. This meta-tasking algorithm uses the parameters flow time, make span, and resource utilization [16].
Figure 5. A QoS based Predictive Max-Min, Min-Min switcher algorithm for job scheduling in a grid
Results: Parameters considered: make span. Advantage: improved performance with QoS. Disadvantage: switching takes more time and involves more cost.
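One way to read the switching rule described above is the following sketch. This is an assumption-laden reconstruction for illustration only, not the paper's code: the fallback threshold value and the exact half-split convention are guesses.

```python
import statistics

# Sketch of the switching rule: sort the predicted completion times of the
# unassigned jobs, find the first gap between consecutive jobs larger than
# the standard deviation, and choose Min-min when that gap falls in the
# first half of the sorted jobs, Max-min when it falls in the second half.
def choose_heuristic(times, threshold=0.5):
    times = sorted(times)
    sd = statistics.pstdev(times)            # population standard deviation
    for i in range(len(times) - 1):
        if times[i + 1] - times[i] > sd:
            return "min-min" if i < len(times) // 2 else "max-min"
    # no such gap exists: fall back to comparing the spread with a threshold
    return "min-min" if sd < threshold else "max-min"
```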
2.6. Towards improving QoS-guided scheduling in grids
As shown in Figure 6, to resolve the problem of scheduling tasks in heterogeneous systems, this algorithm gives two optimization schemes based on the QoS Min-min scheduling technique, named Make span Optimization Rescheduling (MOR) and Resource Optimization Rescheduling (ROR) [17].
MOR aims to improve the make span to achieve better performance. ROR aims to improve resource utilization to achieve better performance.
Figure 6. Towards improving QoS-guided scheduling in grids
Results: Parameters considered: make span. Advantage: improved the make span and reduced the resource need by rescheduling. Disadvantage: this algorithm results in poor load balancing for a dynamic cloud environment and can be improved by considering QoS attributes.
2.7. A grid task scheduling algorithm based on QoS priority grouping
As shown in Figure 7, this algorithm groups grid tasks based on their QoS. It uses the deadline property of the tasks, the task acceptance rate, and the make span of the computing systems. With n tasks in the grid environment, the tasks can be grouped into n groups. Using the Sufferage algorithm, the tasks are grouped in descending order of their QoS [18]. The algorithm is as follows:
Figure 7. A grid task scheduling algorithm based on QoS priority grouping
Results: Parameters used: acceptance rate, completion time. Advantage: reduced the make span. Disadvantage: can be improved by considering the make span and QoS attributes such as consistency to improve efficiency and reduce complexity.
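The grouping step described above can be sketched minimally: tasks are collected by QoS level and the groups are ordered from highest QoS to lowest, so higher-QoS groups are scheduled first. Names and the numeric QoS encoding are illustrative assumptions; within a group the paper applies the Sufferage algorithm, which is omitted here.

```python
# Group tasks by QoS level, returning the groups in descending QoS order.
def group_by_qos(tasks):
    """tasks: list of (task_id, qos_level). Returns groups, highest QoS first."""
    groups = {}
    for task_id, qos in tasks:
        groups.setdefault(qos, []).append(task_id)
    return [groups[q] for q in sorted(groups, reverse=True)]
```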
2.8. A Task Scheduling Algorithm based on QoS-Driven in Cloud Computing
As shown in Figure 8, this algorithm computes the priority of the tasks for different services and executes each task on the machine that can complete it soonest. The algorithm uses a dual fairness constraint. RRank, a reliability priority rank, is used to estimate the priority of a task. In this model, tasks are not mapped directly; instead, they are collected and stored in a queue. The tasks with higher priority are executed first, and each task should be completed in as little time as possible [19].
Figure 8. A Task Scheduling Algorithm based on QoS-Driven in Cloud Computing
Parameters used: priority of task and minimum completion time. Results: improved performance, and load balancing is implemented. Disadvantage: this algorithm should consider machine failures and the dynamic cloud environment.
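The queueing idea in this QoS-driven algorithm can be sketched with a priority queue: tasks are collected rather than mapped directly, and the highest-priority task is dispatched first to the machine with the earliest completion time. This is an illustrative sketch under assumed names and data layout, not the paper's implementation.

```python
import heapq

# Dispatch tasks from a priority queue, highest priority first, each to the
# machine that completes it soonest.
def dispatch(tasks, machine_ready, exec_time):
    """tasks: list of (priority, task_id); machine_ready: list of free times;
    exec_time[task_id][m]: run time of the task on machine m."""
    heap = [(-p, t) for p, t in tasks]       # max-priority via negated keys
    heapq.heapify(heap)
    order = []
    while heap:
        _, t = heapq.heappop(heap)           # highest-priority task first
        # machine on which this task finishes earliest
        m = min(range(len(machine_ready)),
                key=lambda m: machine_ready[m] + exec_time[t][m])
        machine_ready[m] += exec_time[t][m]
        order.append((t, m))
    return order
```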
3. RESULTS AND ANALYSIS
Task scheduling is a primary and key issue in the cloud computing environment. The existing algorithms are based on quality of service, and their limitations are given in Table 1:
Table 1. The existing QoS-based algorithms and their limitations
S.No | Algorithm | Limitations
1 | QoS guided Min-Min heuristic for Grid task scheduling | Load balancing is poor and QoS attributes are not used in the algorithm
2 | Job scheduling algorithm based on Berger model in cloud computing | Rescheduling tasks at each level increases the complexity of the algorithm and takes more time
3 | Improved cost based algorithm for task scheduling in cloud computing | Does not consider the dynamic cloud environment and mainly focuses on cost
4 | RASA | More emphasis could be given to QoS attributes; the algorithm should consider heterogeneous environments
5 | A QoS based Predictive Max-Min, Min-Min switcher algorithm for job scheduling in a grid | Switching takes more time and involves more cost
6 | Towards improving QoS-guided scheduling in grids | Results in poor load balancing and can be improved by considering QoS attributes
7 | A grid task scheduling algorithm based on QoS priority grouping | Can be improved by considering make span and QoS attributes such as consistency to improve efficiency and reduce complexity
8 | A Task Scheduling Algorithm based on QoS-Driven in Cloud Computing | Should consider machine failures and the dynamic cloud environment
4. CONCLUSION
Cloud services are increasing day by day. To meet on-demand service and to maintain efficient load balancing and task scheduling of resources, many algorithms have been proposed. The existing QoS-based task scheduling algorithms are studied in this paper. We can improve their efficiency by considering these factors: the arrival time of the task, the time taken by the task to execute on the resource, and the cost of communication.
REFERENCES
[1] A. Kumar, “World of Cloud Computing & Security,” International Journal of Cloud Computing and Services
Science (IJ-CLOSER), vol/issue: 1(2), pp. 53-58, 2012.
[2] R. Buyya, “Introduction to the IEEE Transactions on cloud computing,” IEEE Transactions on Cloud Computing,
vol/issue: 1(1), 2013.
[3] S. Pal, et al., “Efficient Architectural Framework for Cloud Computing,” International Journal of Cloud
Computing and Services Science (IJ-CLOSER), vol/issue: 1(2), pp. 66-73, 2012.
[4] R. Buyya, et al., “Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing
as the 5th utility,” Future Generation Computer Systems, vol. 25, pp. 599-616, 2009.
[5] Vijindra, et al., “Survey on Scheduling Issues in Cloud Computing,” International conference on modeling
optimization and computing (ICMOG-2012), Procedia Engineering, vol. 38, pp. 2881-2888, 2012.
[6] V. S. Rathor, et al., “Survey on Load Balancing Through Virtual Machine Scheduling in Cloud Computing
Environment,” International Journal of Cloud Computing and Services Science (IJ-CLOSER), vol/issue: 3(1), pp.
37-43, 2014.
[7] H. E. Xiaoshan, et al., “QoS Guided Min-Min Heuristic for Grid Task Scheduling,” Journal of Computer Science
and Technology, vol/issue: 18(4), pp. 442–451, 2003.
[8] S. S. Chauhan, et al., “QoS Guided Heuristic Algorithms for Grid Task Scheduling,” International Journal of
Computer Applications, vol/issue: 2(9), 2010.
[9] B. Xu, et al., “Job scheduling algorithm based on Berger model in cloud environment,” Advances in Engineering
Software, vol. 42, pp. 419–425, 2011.
[10] D. S. Kalra, et al., “Differentiating Algorithms of Cloud Task Scheduling Based on various Parameters,” IOSR
Journal of Computer Engineering (IOSR-JCE), vol/issue: 17(6), pp. 35-38, 2015.
[11] S. Selvarani, et al., “Improved Cost-Based Algorithm For Task Scheduling In Cloud Computing,” IEEE Xplore,
2010.
8. IJECE ISSN: 2088-8708
Quality of Service Based Task Scheduling Algorithms in Cloud Computing (Sirisha Potluri)
1095
[12] D. I. G. Amalarethinam, et al., “Customer Facilitated Cost-based Scheduling (CFCSC) in Cloud,” International
Conference on Information and Communication Technologies (ICICT 2014), Procedia Computer Science, vol. 46,
pp. 660 – 667, 2015.
[13] S. Parsa, et al., “RASA: A New Task Scheduling Algorithm in Grid Environment,” World Applied Sciences
Journal, vol. 7, pp. 152-160, 2009.
[14] S. Parsa, et al., “RASA: A New Grid Task Scheduling Algorithm,” International Journal of Digital Content
Technology and its Applications, vol/issue: 3(4), 2009.
[15] M. Singh, et al., “A QoS based predictive Max-Min, Min-Min switcher algorithm for job scheduling in a grid,”
Information Technology Journal, vol/issue: 7(8), pp. 1176-1181, 2008.
[16] N. M. Reda, “An Improved Sufferage Meta-Task Scheduling Algorithm in Grid Computing Systems,”
International Journal of Advanced Research, vol/issue: 3(10), pp. 123-129, 2015.
[17] C. H. Hsu, et al., “Towards Improving QoS-Guided Scheduling in Grids,” The Third ChinaGrid Annual
Conference, 2008.
[18] F. Dong, et al., “A Grid Task Scheduling Algorithm Based on QoS Priority Grouping,” Grid and Cooperative
Computing, (GCC), Fifth International Conference, 2006.
[19] X. Wu, et al., “A Task Scheduling Algorithm based on QoS-Driven in Cloud Computing,” First International
Conference on Information Technology and Quantitative Management, Procedia Computer Science, vol. 17, pp.
1162-1169, 2013.
BIOGRAPHIES OF AUTHORS
Sirisha Potluri
Research Scholar,
Department of CSE,
KL University,
Green Fields, Vaddeswaram, Guntur, Andhra Pradesh 522502, India,
Email: sirisha.vegunta@gmail.com
Dr. Katta Subba Rao
Professor,
Department of CSE,
KL University,
Green Fields, Vaddeswaram, Guntur, Andhra Pradesh 522502, India,
Email: subbarao_cse@kluniversity.in