This document presents a model of interdependent scheduling games (ISGs), in which multiple players each schedule their own tasks independently but tasks may have dependencies that cross player boundaries. It analyzes computational problems related to ISGs, including welfare maximization, computing best responses, the existence of Nash equilibria, and related complexity results. Key results are that welfare maximization is NP-hard even with uniform rewards and two tasks per player, but solvable in polynomial time for a single player; best responses can likewise be computed efficiently in some cases but are hard in others.
Min-based qualitative possibilistic networks are one of the effective tools for the compact representation of decision problems under uncertainty. Exact approaches for computing decisions based on possibilistic networks are limited by the size of the possibility distributions. Generally, these approaches are based on possibilistic propagation algorithms, and an important step in the computation of the decision is the transformation of the DAG into a secondary structure known as a junction tree. This transformation is known to be costly and represents a difficult problem. In this paper we propose a new approximate approach for the computation of decisions under uncertainty within possibilistic networks. Computing the optimal optimistic decision no longer goes through the junction-tree construction step; instead, it is performed by calculating the degree of normalization in the moral graph resulting from merging the possibilistic network encoding the agent's knowledge with the one encoding its preferences.
Modern, large-scale data analysis typically involves massive data stored on different computers that do not share the same file system. Computing complex statistical quantities, such as those characterizing spatial or temporal statistical dependence, requires information that crosses the boundaries imposed by this partitioning of the data. To leverage the information in these distributed data sets, analysts face a trade-off between various costs (e.g., computational, transmission, and even the cost of building an appropriate data-system infrastructure) and inferential uncertainties (bias, variance, etc.) in the estimates produced by the analysis. In this talk we introduce a framework for quantifying this trade-off by optimizing over both the statistical and the data-system design aspects of the problem. We illustrate with a simple example and discuss how the framework may be extended to more complex settings.
Selection of most economical green building out of n alternatives approach ... (eSAT Journals)
Abstract
The concept of the green building is now a very effective tool for an engineer constructing a new building, and it plays a vital role in decisions about saving water and electricity, providing healthier spaces, and generating less waste during the construction period [3]. The quality and quantity of materials directly determine the output efficiency with respect to both the economy and the positive environmental condition of a green building. However, it is often found that the total cost of a building and its total environmental impact values (TEIV) (inside and outside) are not the same for all buildings constructed in various places, due to the fluctuation of market rates from place to place [4]. Thus, to identify the most economical green building out of n alternatives, the total cost of the building and its TEIV are essential factors for assessment and ranking. This is not an easy job, because most of the data are not always crisp or numeric but rather linguistic, with hedges such as 'high reflective roof coating', 'bad orientation', 'poor sanitation', 'very good environmental quality', 'cheap materials', 'good drainage system', 'heavy rainfall', and 'high energy consumption', to list only a few. All these data are fuzzy in nature, so the evaluation of many objects is not possible with numerical-valued descriptions [1]. Every expert's perception when giving a decision depends wholly on neural functions, which fluctuate according to the behaviour of dendrites and axons; thus every decision-maker hesitates more or less in every evaluation activity, and this hesitation needs to be eliminated. Fuzzy logic has now proved worldwide to be a tremendous tool for tackling this situation. This paper presents a fuzzy model for selecting the most economical green building (GB) out of n alternatives more precisely.
Keywords: attributes, fuzzy decision, TEIV, vague fuzzy EIA, etc.
The Ordered Weighted Averaging (OWA) operator was introduced by Yager [57] to provide a method for aggregating inputs that lies between the max and min operators. In this article, two probabilistic extensions of the OWA operator, POWA and FPOWA (introduced by Merigo [26, 27]), are considered as the basis of our generalizations in the environment of fuzzy uncertainty (parts II and III of this work), where different monotone measures (fuzzy measures) are used as uncertainty measures instead of the probability measure. For the identification of the "classic" OWA operator and the new aggregation operators (presented in parts II and III), the Information Structure is introduced, in which the incomplete available information in a general decision-making system is presented as a condensation of an uncertainty measure, an imprecision variable, and an objective function of weights.
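The classic OWA aggregation described above is simple to state concretely: the inputs are sorted in descending order and then combined with a fixed positional weight vector. A minimal sketch (the function name and example values are illustrative, not taken from the article):

```python
def owa(values, weights):
    """Yager's OWA operator: the weights apply to *positions* in the
    descending ordering of the inputs, not to the inputs themselves."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to 1
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

vals = [0.2, 0.9, 0.5]
print(owa(vals, [1.0, 0.0, 0.0]))  # all weight on the top position: max -> 0.9
print(owa(vals, [0.0, 0.0, 1.0]))  # all weight on the bottom position: min -> 0.2
```

Choosing weights between these two extremes yields aggregations lying anywhere between max and min, which is exactly the property the operator was introduced to provide.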
A review on various optimization techniques of resource provisioning in cloud ... (IJECEIAES)
Cloud computing is the provision of IT resources (IaaS) on demand, using a pay-as-you-go model over the internet. It is a broad and deep platform that helps customers build sophisticated, scalable applications, and research on a wide range of topics is needed to obtain its full benefits. While resource over-provisioning can cost users more than necessary, resource under-provisioning hurts application performance. The cost effectiveness of cloud computing depends strongly on how well the customer can optimize the cost of renting resources (VMs) from cloud providers. Resource provisioning optimization from the cloud consumer's perspective is a complicated optimization problem involving many uncertain parameters, and many research avenues are available for solving it in the real world. In this paper we provide details about various optimization techniques for resource provisioning.
SOLVING OPTIMAL COMPONENTS ASSIGNMENT PROBLEM FOR A MULTISTATE NETWORK USING ... (ijmnct)
The optimal components assignment problem subject to system reliability, total lead-time, and total cost constraints is studied in this paper. The problem is formulated as a fuzzy linear problem using fuzzy membership functions, and an approach based on a genetic algorithm with fuzzy optimization is proposed to solve it. The optimal solution found by the proposed approach is characterized by maximum reliability, minimum total cost, and minimum total lead-time. The proposed approach is tested on different examples taken from the literature to illustrate its efficiency in comparison with previous methods.
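The fuzzy formulation mentioned above rests on membership functions that score each objective on [0, 1]. A common choice (assumed here purely for illustration; the paper's exact functions are not given) is a linear membership for "smaller is better" objectives, combined across objectives by the max-min rule:

```python
def mu_smaller_better(value, best, worst):
    """Linear fuzzy membership: full satisfaction (1.0) at or below
    `best`, none (0.0) at or above `worst`, linear in between."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

def overall_satisfaction(cost, lead_time, bounds):
    """Max-min fuzzy decision: a solution is only as good as its
    worst-satisfied objective."""
    return min(mu_smaller_better(cost, *bounds["cost"]),
               mu_smaller_better(lead_time, *bounds["lead_time"]))

bounds = {"cost": (100.0, 200.0), "lead_time": (5.0, 15.0)}
# candidate with cost 150 (membership 0.5) and lead time 7 (membership 0.8)
print(overall_satisfaction(150.0, 7.0, bounds))  # -> 0.5
```

A genetic algorithm, as in the paper, would then search for the assignment maximizing this minimum membership degree.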
Momentum and Energy are fundamental concepts in Classical Mechanics. In this presentation, we give both concepts a much wider scope of applications (Economics, Stock Markets ...)
International Journal of Engineering Research and Development (IJERD)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Proposing a scheduling algorithm to balance the time and cost using a genetic... (IJCATR)
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to computing. Its aim is to create a supercomputer from free resources. One of the challenges in Grid computing is the scheduling problem, which is regarded as a tough issue: since scheduling is a non-deterministic problem in the Grid, deterministic algorithms cannot be used to improve it.
In this paper, a combination of genetic algorithms and binary gravitational attraction is used to solve the scheduling problem, investigating both the reduction of task execution time and the cost-effective use of simultaneous resources. The user determines the execution-time parameter and the cost-effective use of resources. The algorithm selects resources using a new approach that leads to a balanced load on the resources. Experimental results reveal that, in terms of cost, time, and selection of the best resource, the proposed algorithm achieves better results than the other algorithms.
To help companies gain efficiency in managing their cash flow, we have innovated in the Dominican market by offering this remote-vault system.
Supermarkets, fuel stations, retail stores, and other businesses with a high volume of cash are the main customers of this novel solution.
A SAT encoding for solving games with energy objectives (csandit)
Recently, a reduction from the problem of solving parity games to the satisfiability problem in propositional logic (SAT) was proposed in [5], motivated by the success of SAT solvers in symbolic verification. With analogous motivation, we show how to exploit the notion of an energy progress measure to devise a reduction from the problem of solving energy games to the satisfiability problem for propositional formulas in conjunctive normal form.
Multiprocessor scheduling of dependent tasks to minimize makespan and reliabi... (ijfcstjournal)
Algorithms developed for scheduling applications on heterogeneous multiprocessor systems typically focus on a single objective, such as execution time, cost, or total data transmission time. However, if more than one objective is considered (e.g., execution cost and time, which may be in conflict), the problem becomes more challenging. This project develops a multi-objective scheduling algorithm that uses evolutionary techniques to schedule a set of dependent tasks on the available resources in a multiprocessor environment while minimizing makespan and reliability cost. A Non-dominated Sorting Genetic Algorithm-II (NSGA-II) procedure has been developed to obtain the Pareto-optimal solutions. NSGA-II is an elitist evolutionary algorithm: it carries the parental solutions unchanged into every iteration, eliminating the problem of losing Pareto-optimal solutions, and it uses the crowding-distance concept to create diversity among the solutions.
Using particle swarm optimization to solve test functions problems (riyaniaes)
In this paper, benchmark functions are used to evaluate and check the particle swarm optimization (PSO) algorithm. The functions used are two-dimensional but were selected with different difficulties and different models. To demonstrate the capability of PSO, it is compared with a genetic algorithm (GA); the two algorithms are compared in terms of objective-function values and standard deviation. Multiple runs were taken to obtain convincing results, the parameters were chosen properly, and the Matlab software was used. The suggested algorithm can solve different engineering problems of different dimensions and outperforms the others in terms of accuracy and speed of convergence.
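As a rough sketch of the method being benchmarked, the core PSO update moves each particle toward both its personal best and the swarm's global best position. The implementation below (pure Python rather than the Matlab setup used in the paper; all parameter values are illustrative) minimizes the two-dimensional sphere function:

```python
import random

def sphere(x):
    """Classic unimodal benchmark: f(x) = sum of squares, minimum 0 at origin."""
    return sum(v * v for v in x)

def pso(f, dim=2, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best-seen position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best-seen position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(sphere)
```

For a comparison like the one in the paper, one would run PSO and a GA under the same budget of function evaluations and compare final objective values and their standard deviations over several runs.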
AN INTEGER-LINEAR ALGORITHM FOR OPTIMIZING ENERGY EFFICIENCY IN DATA CENTERS (ijfcstjournal)
Nowadays, to meet enormous computational demands, energy consumption, the largest part of which is related to idle resources, has increased sharply and represents a great part of a data center's budget. Minimizing energy consumption is therefore one of the most important issues in the field of green computing. In this paper, we present a mathematical model, formulated as an integer-linear program, that simultaneously minimizes energy consumption and maximizes user satisfaction. The migration variables, the principal decision variables of the model, can be relaxed to continuous values in some practical problems; this relaxation helps a decision maker find faster solutions that are usually good approximations of the optimum. Near-feasible solutions (infeasible solutions that are desirably close to the feasible region) are investigated as another kind of relaxation. For this purpose, we first present a measure to evaluate the amount of infeasibility of a solution and then, if necessary, let the model consider an extended region that includes solutions with permissible infeasibility.
An Effective PSO-inspired Algorithm for Workflow Scheduling (IJECEIAES)
The Cloud is a computing platform that provides on-demand access to a shared pool of configurable resources, such as networks, servers, and storage, that can be rapidly provisioned and released with minimal management effort from clients. At its core, Cloud computing focuses on maximizing the effectiveness of the shared resources. Workflow scheduling is therefore one of the challenges the Cloud must tackle, especially when a large number of tasks are executed on geographically distributed servers. This entails the need for an effective scheduling algorithm that minimizes task completion time (makespan). Although workflow scheduling has been the focus of many researchers, only a handful of efficient solutions have been proposed for Cloud computing. In this paper, we propose LPSO, a novel algorithm for the workflow scheduling problem based on the particle swarm optimization method. Our proposed algorithm not only ensures fast convergence but also prevents getting trapped in local extrema. We ran realistic scenarios using CloudSim and found that LPSO is superior to previously proposed algorithms, with a negligible deviation between the solution found by LPSO and the optimal solution.
Solving Scheduling Problems as the Puzzle Games Using Constraint Programming (ijpla)
Constraint programming (CP) is one of the most effective techniques for solving practical operational problems. The outstanding feature of the method is that a set of constraints affecting the solution of a problem can be imposed without explicitly defining a linear relation among the variables, i.e., an equation. Nevertheless, the challenge of paramount importance in using this technique is how to present the operational problem as a solvable Constraint Satisfaction Problem (CSP) model. Problem modelling is problem-specific and can be an arduous task at the beginning of problem solving, particularly when the problem is a real-world practical one. This paper investigates the application of a simple grid puzzle game in which a player attempts to solve practical scheduling problems. Examination scheduling and logistics fleet scheduling are presented as operational games, with the games' rules set up according to operational practice. CP is then applied to solve the defined puzzle, and the results show the success of the proposed method. The benefit of using a grid puzzle as the model is that it amplifies the simplicity of CP in solving practical problems.
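The grid-puzzle view of examination scheduling amounts to a small CSP: exams are cells, timeslots are symbols, and conflicting exams may not share a slot. A minimal backtracking sketch (the exam names, slots, and solver here are illustrative stand-ins for real CP machinery, not the paper's own encoding):

```python
def solve(exams, slots, conflicts, assignment=None):
    """Backtracking CSP search: assign a slot to each exam so that no
    two conflicting exams (e.g. sharing students) get the same slot."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(exams):
        return assignment                      # every exam placed: solved
    exam = next(e for e in exams if e not in assignment)
    for slot in slots:
        # the slot is allowed if no conflicting exam already holds it
        if all(assignment.get(other) != slot for other in conflicts.get(exam, ())):
            assignment[exam] = slot
            result = solve(exams, slots, conflicts, assignment)
            if result:
                return result
            del assignment[exam]               # backtrack
    return None                                # no consistent assignment

exams = ["math", "physics", "chem"]
conflicts = {"math": ["physics"], "physics": ["math", "chem"], "chem": ["physics"]}
plan = solve(exams, ["mon", "tue"], conflicts)
```

Real CP solvers layer constraint propagation and clever variable ordering on top of this basic search, which is what makes the approach practical at scale.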
Biogeography-Based Optimization (BBO) is an evolutionary algorithm for global optimization introduced in 2008. BBO is an application of biogeography, the study of the distribution of biodiversity over space and time, to evolutionary algorithms; biogeography aims to analyze where organisms live and in what abundance. BBO has certain features in common with other population-based optimization methods: like GA and PSO, it can share information between solutions, which makes it applicable to many of the same types of problems that GA and PSO are used for, including unimodal, multimodal, and deceptive functions. This paper explains the methodology of applying the BBO algorithm to constrained task scheduling problems.
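The information sharing that BBO has in common with GA and PSO happens through migration: habitats (candidate solutions) with better fitness get higher emigration rates and lend solution features to poorer habitats. A compact sketch on a continuous test function (the linear rate model and all parameters are illustrative choices, not taken from this paper):

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def bbo(f, dim=2, n=20, iters=100, pmut=0.02, lo=-5.0, hi=5.0, seed=3):
    """Minimal BBO for minimization with elitism and linear migration rates."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        ranked = sorted(pop, key=f)                 # ranked[0] is the best habitat
        mu = [(n - k) / (n + 1) for k in range(n)]  # emigration rate by rank
        lam = [1.0 - m for m in mu]                 # immigration rate by rank
        new_pop = [ranked[0][:]]                    # elitism: keep the best habitat
        for i in range(1, n):
            h = ranked[i][:]
            for d in range(dim):
                if rng.random() < lam[i]:
                    # roulette-wheel choice of an emigrating habitat
                    r, acc = rng.random() * sum(mu), 0.0
                    for j in range(n):
                        acc += mu[j]
                        if acc >= r:
                            h[d] = ranked[j][d]
                            break
                if rng.random() < pmut:             # occasional random mutation
                    h[d] = rng.uniform(lo, hi)
            new_pop.append(h)
        pop = new_pop
    return min(pop, key=f)

best = bbo(sphere)
```

With elitism the best fitness never worsens between generations, which is the property that makes BBO well behaved on the scheduling problems the paper targets.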
This paper analyses optimal power system planning with DGs used as real and reactive power compensators. Planning of DG placement and reactive power compensation have recently become major problems in distribution systems: as the demand for power grows, DG placement becomes important, and when DG placement is planned, cost analysis becomes a major concern. Moreover, DGs operating as reactive power compensators are very helpful in maintaining power quality. This paper therefore deals with optimal power system planning with renewable DGs that can be used as reactive power compensators. The problem is formulated and solved using popular meta-heuristic techniques, the cuckoo search algorithm (CSA) and particle swarm optimization (PSO), and comparative results are presented.
This paper discusses possible applications of particle swarm optimization (PSO) in power systems. One of the problems in power systems is economic load dispatch (ED). The discussion is carried out in view of the monetary savings, computational speed-up, and expandability that can be achieved by using the PSO method. The general approach of this paper is dynamic programming coupled with PSO. The feasibility of the proposed method is demonstrated, and it is compared with the lambda-iterative method in terms of solution quality and computational efficiency. The experimental results show that the proposed PSO method is indeed capable of obtaining higher-quality solutions efficiently for ED problems.
MULTIPROCESSOR SCHEDULING AND PERFORMANCE EVALUATION USING ELITIST NON DOMINA... (ijcsa)
Task scheduling plays an important part in the improvement of parallel and distributed systems. The problem of task scheduling has been shown to be NP-hard, and solving it with deterministic techniques is too time-consuming. Algorithms have been developed to schedule tasks in distributed environments, but they focus on a single objective; the problem becomes more complex when two objectives are considered. This paper presents a bi-objective independent task scheduling algorithm that uses the elitist non-dominated sorting genetic algorithm (NSGA-II) to minimize makespan and flowtime. The algorithm generates globally Pareto-optimal solutions for this bi-objective task scheduling problem. NSGA-II is evaluated on a set of benchmark instances, and the experimental results show that it generates efficient optimal schedules.
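The heart of NSGA-II is non-dominated sorting. A small self-contained sketch of extracting the first Pareto front from (makespan, flowtime) pairs, both to be minimized (the sample values are made up for illustration):

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_pareto_front(points):
    """Return the non-dominated objective vectors (NSGA-II's rank-1 front)."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# (makespan, flowtime) of five candidate schedules
scores = [(10, 30), (12, 25), (11, 28), (15, 40), (9, 35)]
front = first_pareto_front(scores)  # (15, 40) is dominated by (10, 30)
```

NSGA-II repeats this sorting to peel off successive fronts and then applies the crowding-distance measure within each front to keep the population diverse.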
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat... (IJCATR)
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to computing. Its aim is to create a supercomputer from free resources. One of the challenges in Grid computing is the scheduling problem, which is regarded as a tough issue: since scheduling is a non-deterministic problem in the Grid, deterministic algorithms cannot be used to improve it. In this paper, a combination of the imperialist competition algorithm (ICA) and gravitational attraction is used to address the problem of independent task scheduling in a grid environment, with the aim of reducing makespan and energy. Experimental results compare ICA with other algorithms and show that ICA finds a shorter makespan and lower energy than the others. Moreover, it converges quickly, finding its optimum solution in less time than the other algorithms.
Load balancing functionality is crucial for the best Grid performance and utilization. Accordingly, this paper presents a new meta-scheduling method called TunSys, inspired by the natural phenomena of heat propagation and thermal equilibrium. TunSys is based on a Grid polyhedron model with a sphere-like structure, used to ensure load balancing through a local neighborhood propagation strategy. Experimental results compared to FCFS, DGA, and HGA are encouraging in terms of system performance, scalability, and load-balancing efficiency.
International Journal of Grid Computing & Applications (IJGCA)ijgca
Service-oriented computing is a popular design methodology for large scale business computing systems. Grid computing enables the sharing of distributed computing and data resources such as processing, networking and storage capacity to create a cohesive resource environment for executing distributed applications in service-oriented computing. Grid computing represents more business-oriented orchestration of pretty homogeneous and powerful distributed computing resources to optimize the execution of time consuming process as well. Grid computing have received a significant and sustained research interest in terms of designing and deploying large scale and high performance computational in e-Science and businesses. The objective of the journal is to serve as both the premier venue for presenting foremost research results in the area and as a forum for introducing and exploring new concepts.
International Journal of Grid Computing & Applications (IJGCA)
schedule
Interdependent Scheduling Games
Andrés Abeliuk1,2, Haris Aziz1,3, Gerardo Berbeglia1,2, Serge Gaspers1,3, Petr Kalina4,
Simon Mackenzie1,3, Nicholas Mattei1,3, Paul Stursberg5, Pascal Van Hentenryck1,6 and Toby Walsh1,3

1 National ICT Australia (NICTA)
2 University of Melbourne, Australia
3 University of New South Wales, Australia
4 Czech Technical University, Czech Republic
5 Technische Universität München, Germany
6 Australian National University, Australia
July 25, 2015
Abstract
We propose a model of interdependent scheduling games in which each player has a set
of tasks that they schedule independently. Each of these tasks only begins to accrue reward
for the player when all predecessor tasks, which may or may not be in the set of tasks for the
player, have been completed. This model, where players have interdependent tasks, is motivated
by the problems faced in planning and coordinating large-scale infrastructures, e.g., restoring
electricity and gas to residents after a natural disaster or providing medical care and supplies
after a disaster. We undertake a detailed game-theoretic analysis of this setting, in particular
considering the issues of welfare maximization, computing best responses, Nash dynamics, and
the existence and computation of Nash equilibria.
1 Introduction
In many large, complex systems there are often multiple stakeholders each looking to maximize
their own utility. In some cases, the objectives of the individual stakeholders may be at odds
with some global measure of utility or social welfare. Our primary motivation for this work is
drawn from situations where companies and governments need to coordinate to restore
infrastructure after major disruptions due to disasters and other outside forces (e.g., Cavdaroglu et
al. [2013]; Coffrin et al. [2012]). In many of these cases, one utility may be attempting to maximize
a private objective function that is at odds with the global objective: to restore all services to the
affected communities as quickly as possible. Other examples of these interdependent scheduling
games (ISG) are the coordination of multiple providers for humanitarian logistics over multiple states
or regions and the coordination of interdependent supply chain networks, which may involve ports,
inland terminals, railways, and truck operators (Van Hentenryck et al. [2010]; Simon et al. [2012]).
Consider, for instance, the joint restoration of an electrical power network and a gas network
after a natural disaster. These networks typically feature interdependencies: the gas supply may
be needed to run the electrical plant for a whole town. From the community or government’s
perspective, the overall goal is to reduce the size of the (gas and electrical power) blackout. However,
ISG Welfare:
  NP-complete for uniform rewards, two tasks each (Theorem 1)
  in P for one player (Theorem 4)
ISG Best Response:
  in P for uniform rewards (Theorem 5)
  NP-complete for conflict-free schedules (Theorem 6)

Table 1: Summary of results for interdependent scheduling games (ISG).
each operator may have their own objective of reducing the blackout size in its own network, without
regard for the global view (Coffrin et al. [2012]).
Informally, we have a set of players, each of whom has a set of tasks they want to achieve, and
each task provides its own reward. An individual player's tasks may have dependencies among
their own set of tasks and among the sets of tasks of all other players, of which the player is aware.
The distinguishing feature of these games is that a dependency of some task t on another task t′ does
not mean that t cannot be completed without completing t′, but that the player can only start
accruing reward for the completion of task t once task t′ has been completed. In contrast, in most
traditional scheduling games, a task cannot be scheduled unless it is in the right time window
and its dependencies have been fulfilled. In the games we consider, once a task is activated,
the player continues to gather reward for every time step in which the task is active. These soft
constraints make the scheduling problems easier in some cases and more difficult in others.
model captures, for example, settings in power restoration where interdependencies exist between
the gas and electrical network. The electric company may be able to restore the lines to individual
homes, but until the gas company can supply gas to the main generator, no electricity will flow.
Once the power is flowing, the electric company receives its reward (income) from those customers
who are now receiving power.
Contributions We present a scheduling model which represents dependencies among tasks and
has utility functions which capture scenarios in power restoration after natural disasters. The model
raises intriguing mechanism design questions regarding the efficiency of the system.
We also present computational and equilibrium existence and non-existence results. We show
that welfare maximization is NP-hard even when the rewards are uniform, i.e. the reward for all
tasks for all players is the same, and each player has two tasks. On the other hand, for one player,
a welfare maximizing schedule can be found in polynomial time. Therefore, for multiple
players, even when rewards are uniform, the structure of dependencies already adds significant
complexity to the problem of welfare maximization.
We also consider the problem where a player may have an incentive to change its intended
schedule so as to get better total utility. For uniform rewards, a best response can be computed
in linear time. For general rewards, and only inter-player dependencies, a best response can be
computed in polynomial time. On the other hand, finding a conflict-free best response schedule
in which no tasks are inactive is NP-hard. We show that for uniform rewards and two players,
best responses can cycle. Despite the possibility of cycles, we show that a pure Nash equilibrium
always exists in this setting. On the other hand, if the rewards are not uniform, then a pure Nash
equilibrium may not exist. Some of our computational results are summarized in Table 1.
2 Related Work
We consider the problem of welfare maximization in ISGs, which is equivalent to finding the
optimal global schedule. This is a well-studied problem for classic scheduling models. A review
of the scheduling literature can be found in Brucker and Brucker [2007]. We also consider
game-theoretic issues that are not common in scheduling theory, such as best response dynamics and
Nash equilibria. These concepts are uniquely applicable in our setting, as the players may or may
not have incentives to cooperate towards the optimal global welfare.
There is a vast literature in scheduling problems across a large number of domains. Most of
these problems are focused on allocating scarce resources to multiple tasks in order to maximize an
objective function or minimize total time. This has been a key area of study in computer science
over the last several decades. Unfortunately, many practical problems in this area are NP-hard Lee
et al. [1997]. Scheduling situations in which agents compete for common processing resources were
introduced by Agnetis et al. Agnetis et al. [2000, 2004] and Baker and Smith Baker and Smith
[2003]. The most traditional approach in multi-agent scheduling is to consider a single centralized
agent optimizing the whole domain.
It is only recently that decentralized scheduling mechanisms have been proposed. Agnetis et
al. [2007] consider auction and bargaining models, which are useful when several
agents have to negotiate for processing resources on the basis of their scheduling performance.
Scheduling auctions typically divide the schedule horizon into time slots, and these time slots are
auctioned among the players. The bargaining approach considers two agents that have to negotiate
upon possible schedules. Abeliuk et al. [2015] consider a two player bargaining
mechanism for any setting where the utility of one player does not depend on the actions taken
by the other player. Thus, their results are applicable to ISGs with two players in which inter-player
dependencies go only one way. For further discussion of the literature on mechanism design for
non-cooperative scheduling games, see the surveys Heydenreich et al. [2007]; Christodoulou et al. [2004];
Angel et al. [2006].
Since in our setting we do not deal with shared resources between the agents, the most
closely related literature is on multi-agent project scheduling. Each project is composed of a set of
activities, with precedence relations between the activities, each one requiring specific amounts
of local and shared (among projects) resources. The aim is to complete all the project activities
while minimizing each project's schedule length. A first mechanism design approach in multi-agent
project scheduling, proposed by Confessore et al. [2007], uses a decentralized mechanism
based on combinatorial auctions. Recently, Briand and Billaut [2011] took a first step in
analyzing game-theoretic properties, such as the existence and computation of Nash equilibria, as
well as detailing the price of anarchy for multi-agent project scheduling.
3 Setup
A directed graph G is a pair (V, E), where V is a finite set of vertices and E ⊆ V × V is a set of
directed arcs where (u, v) ∈ V means there is a directed edge from u to v in G. We will always
assume that G is acyclic, i.e. there is no set of edges {(v1, v2), (v2, v3), . . . , (vn, v1)} ⊆ E. We say
that G is transitive if (u, v), (v, w) ∈ E implies that (u, w) ∈ E. The transitive closure of a graph
G = (V, E) is a graph G = (V, E ) such that ∀u, v ∈ V, (u, v) ∈ E whenever there exists a path
from u to v in G. The in-neighborhood of a vertex v is defined as N−
G (v) = {u ∈ V : (u, v) ∈ E},
the set of vertices with edges directed at v. The out-neighborhood of v are defined accordingly as
N+
G (v) = {u ∈ V : (v, u) ∈ E}, the set of vertices with edges directed from v. The neighborhood of
3
4. v is defined as NG(v) = N−
G (v) ∪ N+
G (v).
An interdependent scheduling game (ISG) is a tuple ((T1, . . . , Tk), G, r). Here, we have k players
(alternatively called agents) and each of them has a disjoint set of tasks Ti, 1 ≤ i ≤ k. We denote the
set of all tasks by T := T1 ∪ · · · ∪ Tk. We assume, without loss of generality, that |T1| = · · · = |Tk| =: q,
and for all the problems analyzed in this paper we assume that each task has a duration of one unit
of time. The dependency relation is represented by the transitive directed graph G = (T, E). The
explicit dependency graph GE is a possibly non-transitive directed acyclic graph used to express
the dependencies for specific classes of games. In case GE is given, the dependency relation
G corresponds to the transitive closure of GE. For each task v ∈ T, there is a reward r(v) ≥ 0.
Occasionally we will consider the more restrictive case of uniform rewards, where each task has a
reward value of 1, i.e., for all v ∈ T, r(v) = 1.
To schedule their tasks, each player i selects a permutation πi of Ti; the set of all permutations
available to player i is denoted by Πi. For a given schedule and a given time step, a task v is active
during that time step if all tasks in N−G[v] are scheduled at or before that time step. Given a task v,
we denote by t(v) the time step when v is scheduled and by a(v) ≥ t(v) the time when v becomes
active, i.e., t(v) = π−1(v) and a(v) = min{t : ∀w ∈ N−G[v], t(w) ≤ t}. At each time step, every active
task v generates the reward r(v). Thus, for a schedule π = (π1, . . . , πk), the total reward of player
i is

Ri(π) = Σ_{j=1}^{|Ti|} Σ_{v ∈ acti(j,π)} r(v).

Here acti(j, π) = {v ∈ Ti : a(v) ≤ j}; thus the welfare of a schedule π is Σ_{i=1}^{k} Ri(π).
Certain subsets of the set Πi of permutations available to player i will be called schedule
configurations. This notation will be convenient in some of the proofs, since it allows us to denote the
set of all schedules with a subset of specific tasks being scheduled to specific time slots. A schedule
configuration for player i, denoted by c = (c1, ..., cq) ∈ ({∗} ∪ Ti)^q, is defined as follows:

c = (c1, ..., cq) := {πi ∈ Πi | ∀cj ≠ ∗ : πi(j) = cj},

where cj = ∗ represents that slot j's task is unspecified. We will be slightly imprecise at times
and use a permutation πi and the complete schedule configuration with cj = πi(j) for all j ∈ [q]
interchangeably, although the latter is in fact the singleton set containing only πi.
We often represent the games in a graphical form as shown in Figure 1. Player i’s set of tasks
Ti corresponds to the set of tasks (nodes) in the i-th row. The numbers in the nodes indicate
the reward extracted from the individual tasks. For ease of presentation, we often omit arrows
that correspond to the implicit dependencies (the transitive closure) of the tasks. It is therefore
important to remember that dependencies are transitive, i.e. a task will only activate when all its
predecessors are active.
Note that this representation is unambiguous. Task v can be uniquely identified by r(v) and
the edges in NG(v). Tasks v, w with r(v) = r(w) and NG(v) = NG(w) can be freely interchanged
within any particular solution to the game without affecting the utilities of the other players.
Left schedule: Player 1: 10, 1, 1; Player 2: 1, 100, 100.
Right schedule: Player 1: 1, 1, 10; Player 2: 100, 100, 1.
Figure 1: An example of an ISG with two players and 3 tasks each. Both of the two 100 utility tasks
belonging to Player 2 are dependent upon a task belonging to Player 1. For the depicted schedule on
the left, Player 1 gets reward 3(10)+2(1)+1 = 33 while Player 2 gets reward 3(1)+2(100)+100 =
303. For the depicted schedule on the right, Player 1 gets reward 3(1)+2(1)+10 = 15 while Player
2 gets reward 3(100) + 2(100) + 1 = 501.
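The caption's reward arithmetic can be reproduced with a short simulation of the model. Below is a minimal sketch, assuming (consistently with the caption) that both reward-100 tasks of Player 2 depend on a single reward-1 task of Player 1; the task labels a-c and x-z are hypothetical, not from the paper.

```python
# Minimal simulation of the reward formula R_i(pi) for the Figure 1 game.
# Assumption: both reward-100 tasks of Player 2 depend on one reward-1
# task of Player 1 (here labeled 'b').
REWARD = {'a': 10, 'b': 1, 'c': 1,      # Player 1's tasks
          'x': 1, 'y': 100, 'z': 100}   # Player 2's tasks
PREDS = {'y': {'b'}, 'z': {'b'}}        # transitive dependency graph G

def player_rewards(schedules, q=3):
    """schedules: player -> permutation of its tasks (slots are 1-based)."""
    t = {v: i + 1 for perm in schedules.values() for i, v in enumerate(perm)}
    # a(v): first step by which v and all its predecessors are scheduled
    a = {v: max([t[v]] + [t[w] for w in PREDS.get(v, ())]) for v in t}
    # an active task generates r(v) at every step from a(v) through q
    return {p: sum((q - a[v] + 1) * REWARD[v] for v in perm)
            for p, perm in schedules.items()}

left = player_rewards({1: ('a', 'b', 'c'), 2: ('x', 'y', 'z')})
right = player_rewards({1: ('b', 'c', 'a'), 2: ('y', 'z', 'x')})
print(left, right)   # {1: 33, 2: 303} {1: 15, 2: 501}
```

The two calls reproduce the caption's values 33/303 for the left schedule and 15/501 for the right one.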
4 Utilitarian welfare maximization
We first consider the problem of utilitarian welfare maximization. To be clear, when speaking of
individual welfare we refer to the utility associated with a single player; when considering social
welfare, or simply welfare, the notion is associated with all players as a whole. We can now formally
define the problem.
ISG Welfare
Input: An ISG ((T1, . . . , Tk), G, r) and an integer w.
Question: Is there a schedule π for the ISG such that Σ_{i=1}^{k} Ri(π) ≥ w?
Firstly, we show that welfare maximization is NP-hard for the surprisingly restricted case where
agents have uniform rewards.
Theorem 1. ISG Welfare is NP-hard, even when the rewards are uniform, r(v) = 1 for all
v ∈ T, and each player has two tasks.
Proof. We give a reduction from the NP-hard Min 2SAT problem [Kohli et al., 1994].
Min 2SAT
Input: A propositional 2CNF formula F where each clause
has exactly two literals, and an integer k.
Question: Is there an assignment to the variables of F such that
at most k clauses are satisfied?
For each variable x in F, create a player Px with tasks Tx = {x, ¬x}. For each clause c in F,
create a player Pc with tasks Tc = {c1, c2}. For each clause c = (ℓ1 ∨ ℓ2), the precedence graph
contains the arcs (c1, c2), (ℓ1, c1), and (ℓ2, c1). The rewards are uniform, and we set w = 3n + 3m − k,
where n and m are the number of variables and clauses of F, respectively.
It remains to prove that F has an assignment satisfying at most k clauses if and only if the
ISG has a schedule generating a reward of at least w. For the forward direction, suppose F has an
assignment α : var(F) → {0, 1} satisfying at most k clauses. Consider the schedule where, for each
variable x, the player Px schedules first the literal of x that is set to false by α, i.e., x is scheduled
before ¬x iff α(x) = 0. Additionally, for each clause c, the task c1 is scheduled before c2. This
schedule generates a reward of 3 for each variable: a reward of 1 at the first time step and a reward
of 2 at the second time step. For a satisfied clause c, the schedule generates a reward of 2: at
the first time step no reward is generated since the literal satisfying the clause is scheduled at the
second time step and there is an arc from that literal to c1, and a reward of 2 is generated at the
second time step. For an unsatisfied clause c, the schedule generates a reward of 3: since neither
literal satisfies the clause, both literals are scheduled at the first time step. Thus, the total reward
generated for this schedule is at least 3n + 3m − k.
For the reverse direction, let π be a schedule generating a reward of at least w. Consider the
assignment α : var(F) → {0, 1} with α(x) = 0 iff player Px schedules x at the first time step. Note
that at the second time step, each player generates a reward of 2. Also, each player corresponding
to a variable generates an additional reward of 1 at the first time step since his tasks have in-degree
0. So, at least 3n + 3m − k − (3n + 2m) = m − k additional clause players generate a reward of 1 at
the first time step. But, for each such clause c, c1 is scheduled before c2 and both literals occurring
in c are scheduled at the first time step, which means that the assignment α sets these literals to
false. Therefore, α does not satisfy c. We conclude that α satisfies at most k clauses.
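The counting argument behind the reduction can be sanity-checked programmatically. A sketch, assuming the gadget above (variable players schedule their false literal first, clause players schedule c1 before c2); the encoding of literals as (variable, polarity) pairs is our own.

```python
# Sketch of the reduction: one player per variable (tasks x and ¬x) and one
# per clause (tasks c1, c2), uniform rewards, q = 2 time steps. Literals are
# encoded as (variable, polarity) pairs, e.g. ('x', False) stands for ¬x.
from itertools import product

def isg_reward(clauses, alpha):
    """Reward of the schedule induced by the assignment alpha."""
    t = {}                                  # scheduled slot of each literal task
    for x, val in alpha.items():            # false literal first, true one second
        t[(x, val)], t[(x, not val)] = 2, 1
    reward = 3 * len(alpha)                 # each variable player earns 1 + 2
    for l1, l2 in clauses:                  # clause player schedules c1 then c2
        a_c1 = max(1, t[l1], t[l2])         # c1 depends on both literal tasks
        reward += (2 - a_c1 + 1) + 1        # c2 always becomes active at step 2
    return reward

F = [(('x', True), ('y', True)), (('x', False), ('y', True))]
n, m = 2, len(F)
for vals in product([True, False], repeat=n):
    alpha = dict(zip(('x', 'y'), vals))
    sat = sum(any(alpha[v] == pol for v, pol in cl) for cl in F)
    assert isg_reward(F, alpha) == 3 * n + 3 * m - sat
print("reward matches 3n + 3m - (#satisfied) for all assignments")
```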
Definition 1. A schedule is conflict-free if no task is inactive (i.e., not yet accruing reward for
its agent) after it has been scheduled.

For the uniform rewards case, it easily follows that any conflict-free schedule is a welfare
maximizing schedule. This property does not hold in the case of non-uniform rewards, as the
following theorem shows.
Theorem 2. Even if a conflict-free schedule exists, the welfare maximizing schedule might not be
conflict-free.
Proof. Consider the figure below. The schedule in the left sub-figure is conflict free while the
schedule on the right has a conflict. Despite the conflict, the right schedule has a higher social
welfare as the two r = 100 tasks become active simultaneously in step two, providing more total
reward.
Left schedule (Welfare = 309): Player 1: 1, 1, 1; Player 2: 1, 100, 100.
Right schedule (Welfare = 407): Player 1: 1, 1, 1; Player 2: 100, 100, 1.
For the right sub-figure, the schedule is indeed welfare maximizing: both tasks with reward 100
require two tasks of player 1 to become active. Hence, neither of the two tasks can become active
before time step 2. The schedule shown is the only schedule by which both tasks with reward 100
become active in time step 2; therefore it is welfare-maximizing.
Additionally, if one wants to compute the optimal schedule among the conflict-free schedules
with not necessarily uniform rewards, the constraint of conflict-freeness makes the problem of
welfare maximization NP-hard even for one agent. Note that dependency constraints in an ISG can
be unmet. However, when there is only one player, the welfare maximizing schedule must be a
conflict-free schedule. We formalize this observation below.

Proposition 1. For one player and non-uniform rewards, the welfare maximizing schedule must
be a conflict-free schedule.

Proof. Assume this is not the case. Then there must be a schedule π with (π(j), π(k)) ∈ E and
j > k. Given the precedence constraint, task π(k) can only become active after task π(j) is
scheduled, so swapping them can only increase the utility of the player.
Theorem 3. For one player and non-uniform rewards, welfare maximization is NP-hard.
Proof. We give a reduction from the NP-hard Single machine weighted completion time
problem [Lenstra and Rinnooy Kan, 1978].

Single machine weighted completion time
Input: A set of jobs Ji ∈ J. Each job has a weight wi and a processing time pi = 1.
Precedence constraints, such that if Ji ≺ Jj, then Jj cannot be scheduled before Ji.
An integer k.
Question: Is there an ordering of the jobs such that Σ_{i∈J} wiCi ≤ k, where Ci is the
completion time of Ji?

For each job Ji ∈ J, create a task ti with reward ri = wi and consider the same precedence graph
as the one given for the jobs. We set w = (|J| + 1) Σ_{i∈J} wi − k.
By Proposition 1, without loss of generality, we can assume that any schedule for an ISG with
one player is a conflict-free schedule. It remains to prove that there is an ordering π of the jobs with
a weighted completion time of at most k if and only if the ISG has a conflict-free schedule π′
generating a reward of at least w.

Let π′ = π; then Ci is the completion time of both job Ji and task ti given the ordering π. Given that
π′ is a conflict-free schedule, the contribution of ti to the objective function is (|T| + 1 − Ci) ri. Thus,
R(π′) = Σ_{i∈T} (|T| + 1 − Ci) ri = (|T| + 1) Σ_{i∈T} ri − Σ_{i∈T} riCi. But Σ_{i∈T} riCi = Σ_{i∈J} wiCi,
which corresponds to the weighted completion time of the ordering π. Therefore, R(π′) ≥ w ⇔ Σ_{i∈J} wiCi ≤ k,
which concludes the proof.
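The algebraic identity used in the last step can be checked numerically. A small sketch with an arbitrary illustrative instance (the rewards and completion times below are not from the paper):

```python
# Check of the identity R(pi) = (|T| + 1) * sum(r_i) - sum(r_i * C_i)
# for a conflict-free schedule with unit-duration tasks.
r = [5, 3, 8, 1]    # task rewards (= job weights w_i), illustrative values
C = [1, 2, 3, 4]    # completion times under some conflict-free ordering
T = len(r)

# each task t_i accrues r_i at every step from C_i through step |T|
direct = sum((T + 1 - Ci) * ri for ri, Ci in zip(r, C))
identity = (T + 1) * sum(r) - sum(ri * Ci for ri, Ci in zip(r, C))
assert direct == identity
print(direct)   # 46
```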
On the other hand, when restricting our attention to uniform rewards, we show that a welfare
maximizing schedule can be found in polynomial time when there is only one player.
Theorem 4. For one player ISG and uniform rewards, welfare maximization can be solved in
O(|T| log |T|) time.
Proof. The problem can be solved as follows. Consider any topological ordering of the transitive
graph. Take the leaf node with the lowest reward, v, and place it at the end. Repeat the process.
We now argue that the algorithm results in a welfare maximizing schedule. Assume that there
exists a schedule π with a higher welfare in which a leaf node with the lowest reward is not
placed at the end. Then the node placed at the end is either a leaf node v′ with a higher reward
or a node v′ which is not a leaf node. In the first case, the welfare of π would increase if the
leaf node v′ with the higher reward were swapped with the leaf node v with the lowest reward. In the
second case, v′ is not a leaf node and is placed at the end, which means that there is an arc going
backwards to some node v∗. Now consider the modification of π in which v∗ is inserted after v′.
Task v∗ and all its successors did not become active before the time when v′ was scheduled (v∗'s
new position), as they depended on v′ by transitivity; hence their rewards remain unchanged. On
the other hand, all tasks between the original positions of v∗ and v′ are now scheduled one time
step earlier, thus their rewards can only increase.
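With uniform rewards, the procedure reduces to producing any topological order of the dependency graph, since for a single player every such order is conflict-free and hence welfare maximizing. A minimal sketch using Kahn's algorithm; the example graph is hypothetical:

```python
# Single-player, uniform-reward case: any topological order of the
# dependency graph is conflict-free, hence welfare maximizing.
from collections import deque

def topological_schedule(tasks, preds):
    """Kahn's algorithm; preds[v] = set of predecessors of v."""
    indeg = {v: len(preds.get(v, ())) for v in tasks}
    succ = {v: [] for v in tasks}
    for v, ps in preds.items():
        for p in ps:
            succ[p].append(v)
    order, queue = [], deque(v for v in tasks if indeg[v] == 0)
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return order

tasks = ['a', 'b', 'c', 'd']
preds = {'c': {'a', 'b'}, 'd': {'c'}}    # hypothetical dependency graph
order = topological_schedule(tasks, preds)
slot = {v: i + 1 for i, v in enumerate(order)}
# conflict-free: every task is active the moment it is scheduled
assert all(slot[p] < slot[v] for v in preds for p in preds[v])
# with uniform rewards the welfare is then n + (n-1) + ... + 1
n = len(tasks)
assert sum(n - slot[v] + 1 for v in tasks) == n * (n + 1) // 2
print(order)   # ['a', 'b', 'c', 'd']
```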
5 Best Responses
We now consider the problem of computing best responses, where players care about their own
utility. We can now formally define the Best Response problem.
ISG Best Response
Input: An ISG ((T1, . . . , Tk), G, r), a schedule profile π−i and a target utility W.
Question: Is there a schedule πi for the ISG such that Ri(π−i, πi) ≥ W?
Theorem 5. For ISG with uniform rewards, there exists a linear-time algorithm to compute the
best response.
Proof. Dependencies between tasks of the other players are irrelevant, as their only influence on
decisions on the responder side is already captured in the transitive closure of the dependency
structure.
We give a greedy algorithm that solves the problem optimally. Starting from the first time
step, the algorithm successively schedules a maximal task according to the dependency structure,
minimizing the time that a task remains inactive after it was scheduled.
We observe the following: (i) at no point can it be beneficial to schedule a non-maximal
task; (ii) at each point, scheduling the maximal task with the smallest time-to-activation is the best
response, where by the time-to-activation of a task v at time step t we mean max{0, a(v) − t}.
Regarding (i), suppose a non-maximal task is scheduled prior to its predecessor. Swapping the
two tasks can only increase the responder's utility (by the same argument as in Theorem 4).
Regarding (ii), suppose that, among the maximal tasks, a task a with a larger time-to-activation
is scheduled instead of another task b with a smaller time-to-activation. If task b is
scheduled before a's activation time, then since b will become active before a, swapping the
two tasks will increase the utility by at least the difference in their activation times. In the case that
task b is scheduled after a's activation time, if there are any successors of a scheduled before task
b and after a's activation time, then swapping tasks a and b will make a's successors inactive, thus
possibly deteriorating the utility. To avoid this, an increase of the utility can be obtained with the
following transformation: (1) move b to a's position, (2) move task a to the position corresponding
to a's activation time, and (3) shift towards the end by one all tasks between a's activation time
and b's original position. The shifted tasks are now scheduled one time step later, but since task a is
scheduled at its activation time, the utility cannot decrease.
Thus, an algorithm that always schedules the maximal element with the smallest time-to-activation
computes the best response.
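The greedy rule can be sketched and compared against a brute-force search over all permutations. The instance below (intra-player precedence plus release dates induced by the other players' schedules) is a hypothetical illustration, not from the paper:

```python
# Greedy best response for uniform rewards: at each step, among the tasks
# whose own-player predecessors are scheduled, pick the one with the
# smallest time-to-activation. Checked against brute force below.
from itertools import permutations

INTRA = {'b': {'a'}}            # player's own precedence constraints
TASKS = ['a', 'b', 'c', 'd']
RELEASE = {'c': 3, 'd': 2}      # induced by the other players' schedules

def utility(my_order, release, q):
    """release[v] = latest scheduled time of v's external predecessors."""
    t = {v: i + 1 for i, v in enumerate(my_order)}
    total = 0
    for v in my_order:
        a = max(t[v], release.get(v, 0),
                max((t[p] for p in INTRA.get(v, ())), default=0))
        total += max(0, q - a + 1)     # uniform reward 1 per active step
    return total

def greedy(tasks, q):
    order, scheduled = [], set()
    for step in range(1, q + 1):
        avail = [v for v in tasks if v not in scheduled
                 and INTRA.get(v, set()) <= scheduled]
        # smallest time-to-activation max(0, a(v) - step)
        v = min(avail, key=lambda v: max(0, RELEASE.get(v, 0) - step))
        order.append(v)
        scheduled.add(v)
    return order

q = len(TASKS)
best = max(utility(list(p), RELEASE, q) for p in permutations(TASKS))
assert utility(greedy(TASKS, q), RELEASE, q) == best
print(greedy(TASKS, q), best)
```

On this instance the greedy order matches the brute-force optimum, as Theorem 5 predicts for uniform rewards.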
In the more general setting, with non-uniform rewards, let us consider only conflict-free best
response schedules. Led by the observation that dependencies from the non-responder side induce
release dates on the tasks of the responding player, we can show NP-hardness of the problem.

Theorem 6. Computing the best conflict-free response is NP-hard.

Proof. A reduction from single machine weighted completion time with release dates can be
obtained by adapting Theorem 3. Noticing that conflict-freeness is attained if and only if the release
date constraints are met, the procedure can be carried out in a straightforward way and leads to
essentially the same construction.

However, as already seen in Theorem 2, best responses do not necessarily satisfy all exterior
dependencies, even though there could exist a conflict-free schedule. So, it remains an open question
whether best responses in the general setting can be computed in polynomial time.
We show that even for ISGs with uniform rewards, best responses can cycle.
Theorem 7. Even for ISGs with uniform rewards, best responses can cycle.

Proof. Figure 3 shows that best responses can cycle.

(a) Player 2 best response: Player 1: c, a, d, b; Player 2: d, a, c, b.
(b) Player 1 responds: Player 1: d, b, c, a; Player 2: d, a, c, b.
(c) Player 2 responds: Player 1: d, b, c, a; Player 2: c, d, b, a.
(d) Player 1 responds: Player 1: c, a, d, b; Player 2: c, d, b, a.

Figure 3: A best response cycle.
The above example can be suitably modified to show the following:
Theorem 8. Even for ISG with strictly increasing rewards, better responses can cycle.
6 Nash equilibria
In this section we consider pure Nash equilibria in ISGs. We first show that ISGs with uniform
rewards always admit a pure Nash equilibrium (PNE). In addition we show that a social welfare
optimal schedule is also a PNE and we provide bounds for the price of anarchy of ISGs with uniform
rewards. For the general setting of ISGs, we show that a PNE is no longer guaranteed to exist.
6.1 ISGs with uniform rewards
Theorem 9. For any ISG with uniform rewards, a maximum welfare schedule profile is a PNE.
Proof. In the case of uniform rewards, maximizing the utility of a player is equivalent to minimizing
the time that its tasks remain inactive after they are scheduled. So, if all players have a conflict-free
schedule, then the profile is already a PNE.
If at least one of the schedules has an inactive task, we will show this is the best
that player can do. By Theorem 5, we consider schedules where all the tasks of the same player
preserve the ordering induced by the transitive dependencies. Suppose player i has at least one
non-active task v in πi. Consider the following transformation: postpone task v until it becomes
active and shift all other tasks earlier. Postponing the inactive task has no effect on the other
players, and the tasks that were scheduled earlier can only activate tasks of the other players earlier.
Thus, this transformation can only improve the utility of all players. Since the profile maximizes
welfare, task v cannot be postponed in this way, or we would have found a schedule with better
social welfare, contradicting our assumption. Finally, permutations of active tasks cannot cause any
improvement as they all have the same reward, so player i has no strictly better response than πi.
Corollary 1. For any ISG with uniform rewards, there is always a PNE profile.
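Corollary 1 can be sanity-checked by brute force on a small instance: enumerate all schedule profiles and verify that at least one admits no profitable unilateral deviation. The two-player game below is a hypothetical example:

```python
# Brute-force check of PNE existence on a tiny uniform-reward instance.
from itertools import permutations, product

PLAYERS = {1: ('a', 'b'), 2: ('x', 'y')}
PREDS = {'x': {'b'}}    # cross-player dependency: x waits for b (transitive G)
Q = 2                   # two time steps (two tasks per player)

def payoffs(profile):
    t = {v: i + 1 for perm in profile.values() for i, v in enumerate(perm)}
    a = {v: max([t[v]] + [t[w] for w in PREDS.get(v, ())]) for v in t}
    return {p: sum(max(0, Q - a[v] + 1) for v in perm)
            for p, perm in profile.items()}

def is_pne(profile):
    base = payoffs(profile)
    for p, tasks in PLAYERS.items():
        for alt in permutations(tasks):       # all unilateral deviations of p
            dev = dict(profile)
            dev[p] = alt
            if payoffs(dev)[p] > base[p]:
                return False
    return True

profiles = [dict(zip(PLAYERS, combo))
            for combo in product(*(permutations(ts) for ts in PLAYERS.values()))]
assert any(is_pne(pr) for pr in profiles)     # Corollary 1 on this instance
print(sum(is_pne(pr) for pr in profiles), "of", len(profiles), "profiles are PNE")
```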
There may be more than one PNE profile in ISGs with uniform rewards. It is thus natural to
ask how bad the price of anarchy can be.
Proposition 2. For any ISG with uniform rewards, a PNE profile does not necessarily maximize
welfare.
Proof. Figure 4 shows a PNE which does not maximize welfare. By bringing forward the
last task of player 1, we get a PNE that achieves the maximal welfare.

Player 1: 1, 1, 1
Player 2: 1, 1, 1

Figure 4: Game with a pure Nash equilibrium which does not maximize welfare.
Theorem 10. The price of anarchy of ISGs with uniform rewards is at least k(n+1)/(n+2k−1),
given k players with n tasks each.

Proof. We extend the idea of Figure 4 to multiple players and tasks. Suppose the tasks of all
players except player 1 are dependent on a task t belonging to player 1. Thus, as in Figure 4,
the worst PNE is obtained by scheduling task t at the end, as opposed to the PNE achieved when
task t is at the beginning, which results in the welfare maximizing schedule. The ratio between the
best and the worst schedule, given k players with n tasks each, is

( kn(n+1)/2 ) / ( n(n+1)/2 + (k−1)n ) = k(n+1)/(n+2k−1).
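The simplification of this ratio can be verified exactly with rational arithmetic, a quick check of the algebra over a small grid of values of k and n:

```python
# Check of the Theorem 10 ratio: k*n*(n+1)/2 over (n*(n+1)/2 + (k-1)*n)
# simplifies to k*(n+1)/(n + 2k - 1), for a grid of player/task counts.
from fractions import Fraction

for k in range(1, 8):
    for n in range(1, 8):
        best = Fraction(k * n * (n + 1), 2)             # welfare-maximizing PNE
        worst = Fraction(n * (n + 1), 2) + (k - 1) * n  # task t scheduled last
        assert best / worst == Fraction(k * (n + 1), n + 2 * k - 1)
print("ratio identity verified")
```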
If we fix the number of players k, the highest value of the ratio is lim_{n→∞} k(n+1)/(n+2k−1) = k.
Similarly, when fixing the number of tasks n, we get lim_{k→∞} k(n+1)/(n+2k−1) = (n+1)/2. This
motivates the following theorem.
Theorem 11. The price of anarchy of ISGs with uniform rewards is at most (n+1)/2, with n the
number of tasks per player.

Proof. Any PNE profile cannot be worse than if all tasks activate at time step n; thus, NE ≥ kn.
Any schedule cannot be better than if there were no precedence constraints; thus, the optimal
schedule satisfies OPT ≤ kn(n+1)/2. Therefore,

PoA = OPT/NE ≤ ( kn(n+1)/2 ) / (kn) = (n+1)/2.
6.2 ISGs with non-uniform rewards
Finally, we show that an ISG with two players does not always admit a pure Nash equilibrium.
Player 1: 1, 4, 3, 2
Player 2: 2, 4, 1, 3

Figure 5: Game not admitting a pure Nash equilibrium.
Theorem 12. An ISG with two players and non-uniform rewards does not always admit a pure
Nash equilibrium.

Proof. Consider the explicit game given by Figure 5. Suppose, for contradiction, that the game
admits a PNE. Notice that any best response of player 1 must schedule task 4, the highest reward
task, immediately after task 1. Therefore, any possible best response of player 1 has to
adopt one of the following schedule configurations: (i) π1 ∈ (1, 4, ∗, ∗), (ii) π1 ∈ (∗, 1, 4, ∗) or (iii)
π1 ∈ (∗, ∗, 1, 4).

In a similar way, in any best response of player 2, task 4 of player 2 must be scheduled as
soon as possible. These observations narrow down the possible PNE configurations, which are
explored below.
• Case (i): Player 2's best response, given any schedule of the form π1 ∈ (1, 4, ∗, ∗), is π2 =
(2, 4, 1, 3). However, such a schedule triggers player 1's best response π1 = (3, 1, 4, 2),
which takes us to case (ii).
• Case (ii): Player 2's best response, given any schedule of the form π1 ∈ (∗, 1, 4, ∗), is π2 =
(1, 3, 4, 2). However, such a schedule triggers player 1's best response π1 = (1, 4, 2, 3),
which is an instance of case (i). This leads to a cycle of best responses.
• Case (iii): Player 2's best response, given any schedule of the form π1 ∈ (∗, ∗, 1, 4), is π2 ∈
{(2, 1, 3, 4), (1, 3, 2, 4)}. However, such schedules trigger player 1's best response π1 =
(3, 1, 4, 2) if π2 = (2, 1, 3, 4), or π1 = (1, 4, 3, 2) in the other case, these schedules being
instances of case (ii) and (i), respectively.
Therefore, for any schedule π1, there is no schedule π2 such that (π1, π2) is a PNE.
We conjecture that the above example is minimal, in the sense that every instance with fewer than 4 tasks per player admits a PNE. The same holds when considering the number of explicit dependencies. We leave as an open problem the computational complexity of deciding whether a pure Nash equilibrium schedule profile exists.
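For small explicit games, PNE existence can at least be decided by brute force over all strategy profiles. The sketch below is an illustration, not the paper's model: the utility function is a placeholder supplied by the caller, and the demo uses matching pennies (a classic game with no pure equilibrium) rather than an ISG schedule game.

```python
from itertools import product

def find_pure_nash(strategies, utility):
    """Brute-force search for a pure Nash equilibrium.

    strategies: one iterable of strategies per player.
    utility: function (player_index, profile) -> payoff.
    Returns the first PNE profile found, or None if none exists.
    """
    strategies = [tuple(s) for s in strategies]
    for profile in product(*strategies):
        # A profile is a PNE if no player gains by a unilateral deviation.
        if all(
            utility(i, profile) >= utility(i, profile[:i] + (s,) + profile[i + 1:])
            for i in range(len(strategies))
            for s in strategies[i]
        ):
            return profile
    return None

# Demo: matching pennies. Player 0 wants the choices to match,
# player 1 wants them to differ; no pure equilibrium exists.
def mp_utility(player, profile):
    match = profile[0] == profile[1]
    return (1 if match else -1) * (1 if player == 0 else -1)

print(find_pure_nash([(0, 1), (0, 1)], mp_utility))  # None
```

The same enumeration applies to the Figure 5 game once the ISG utilities are encoded, though the profile space grows as $(n!)^k$, which is exactly why the complexity of the decision problem is interesting.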
7 Conclusions
This paper introduced a class of interdependent scheduling games motivated by applications in large-scale power restoration, humanitarian logistics, and integrated supply chains. We addressed the computational complexity of the centralized welfare-maximization problem, as well as of the decentralized, non-cooperative game version of the framework. For the latter setting, we focused our attention on the Nash equilibrium solution concept.
The model we have presented leads to intriguing mechanism design questions, where we want to incentivize players to report their task valuations truthfully while the mechanism satisfies desirable welfare properties. Some of our NP-hardness results even hold under severe restrictions of the model. It would be interesting to undertake a parameterized complexity analysis of these problems, as well as to consider approximation algorithms. Known approximation algorithms for traditional interdependent scheduling games (with hard dependencies and non-accruing rewards) cannot be directly applied to the model we have presented. It is also important to generalize interdependent scheduling games to integer time durations.
Acknowledgments
NICTA is funded by the Australian Government through the Department of Communications and
the Australian Research Council through the ICT Centre of Excellence Program.