The document summarizes a lecture on handling interval solutions for cooperative interval games. It discusses interval games where coalition payoffs are given as intervals rather than precise values due to uncertainty. It introduces interval solution concepts like the interval imputation set and interval core that assign interval payoff vectors. It also discusses allocation rules like bankruptcy rules that can transform interval payoff allocations into precise payoff vectors when the realized coalition payoff becomes known. Specifically, it outlines a one-stage procedure that uses bankruptcy rules to determine precise payoffs consistent with an interval allocation and realized payoff.
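To make the one-stage procedure concrete, here is a minimal Python sketch, assuming a proportional bankruptcy-style rule (the function name and the choice of rule are illustrative, not necessarily the lecture's exact construction): each player receives the lower bound of their interval, and the realized surplus is divided in proportion to interval widths.

```python
def realize_payoffs(lowers, uppers, realized):
    """Turn an interval allocation into precise payoffs once the realized
    grand-coalition value is known, using a proportional bankruptcy-style
    rule: each player gets their lower bound, and the remaining surplus is
    split in proportion to interval widths."""
    total_low = sum(lowers)
    widths = [u - l for l, u in zip(lowers, uppers)]
    surplus = realized - total_low
    if not (0 <= surplus <= sum(widths)):
        raise ValueError("realized value lies outside the interval allocation")
    total_width = sum(widths)
    if total_width == 0:
        return [float(l) for l in lowers]
    return [l + surplus * w / total_width for l, w in zip(lowers, widths)]

# Interval allocation ([1,3], [2,6]) with realized value 6:
# surplus 3 is split 1:2 by widths, giving payoffs [2.0, 4.0].
print(realize_payoffs([1, 2], [3, 6], 6))
```

Note that the resulting payoffs always lie inside the original intervals, so the precise allocation is consistent with the interval one.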
Economic and Operations Research Situations with Interval Data (SSA KPI)
This document provides an outline and introduction for a lecture on cooperative interval games given at the 6th Summer School AACIMP in Kyiv, Ukraine from August 8-20, 2011. The lecture covers topics such as cooperative interval games, classes of cooperative interval games, economic and operations research situations with interval data, and solution concepts for interval games like the interval imputation set and interval core. It is based on research from the speaker's PhD dissertation on cooperative interval games and related published papers. The motivation is to generalize classical cooperative game theory to account for interval uncertainty in rewards and costs.
Coalitional Games with Interval-Type Payoffs: A Survey (SSA KPI)
This document summarizes a lecture on cooperative interval games given at the 6th Summer School AACIMP in Kyiv, Ukraine from August 8-20, 2011. The lecture introduced cooperative interval games as a generalization of classical cooperative games to allow for interval-valued payoffs. Key concepts discussed include the selection-based core, strong balance property, and interval-valued solution concepts like the interval imputation set and interval core. Economic and operations research applications of interval games were also mentioned.
The document summarizes a lecture on cooperative interval games. It introduces interval games where payoffs are intervals rather than precise values. It discusses interval solutions like the interval core, which extends the classical core concept to interval games. It also defines properties like convexity and concavity for interval games. Examples are provided to illustrate interval game concepts.
The document summarizes a lecture on cooperative game theory under interval uncertainty given at the 6th Summer School AACIMP in Kyiv, Ukraine from August 8-20, 2011. The lecture introduced cooperative interval games where payoffs are uncertain and defined as intervals rather than single values. It defined concepts such as the core set and imputation set for interval games, and showed that a game has a non-empty core if and only if it is strongly balanced. The lecture also discussed two-person interval games and economic applications of interval game theory.
This document summarizes a lecture on operations research games and their applications. It introduces concepts from cooperative game theory like the core and Shapley value. It then discusses how several operations research situations can be modeled as cooperative games, including market situations modeled as "big boss games" where one player has veto power. Examples are given of modeling a treasure hunt scenario as a big boss game.
1) Mathematicians use statistical models to predict future trends and values based on historical data. Both continuous and discrete univariate and multivariate models are explored.
2) Specific models examined include the Ornstein-Uhlenbeck process, Euler-Maruyama and Milstein schemes for numerical approximations of continuous processes, and autoregressive AR(p) models for discrete processes.
3) The models are fitted to inflation rate data to predict future inflation values based on parameter estimation techniques like maximum likelihood estimation. Model outputs like predicted values and distributions are examined.
This document summarizes key concepts from a PhD dissertation on uncertainty in deep learning:
1) There are two types of uncertainties - epistemic uncertainty from lack of knowledge that decreases with more data, and aleatoric uncertainty from inherent noise that cannot be reduced. Deep learning models need to estimate both to provide predictive uncertainty.
2) Variational inference allows approximating intractable Bayesian posteriors by minimizing the KL divergence between an approximating distribution and the true posterior. Dropout can be seen as a Bayesian approximation where weights follow a Bernoulli distribution.
3) With dropout as a variational distribution, predictive uncertainty in regression is estimated from multiple stochastic forward passes, with aleatoric uncertainty coming from the modeled observation noise and epistemic uncertainty from the spread of the passes' predictions.
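The estimation procedure in point 3 can be sketched as follows; the `toy_model` below stands in for a dropout network (each call is one stochastic forward pass returning a predictive mean and a predicted noise variance), and the decomposition follows the usual Monte-Carlo dropout recipe.

```python
import random
import statistics

def mc_dropout_predict(stochastic_model, x, n_passes, rng):
    """Monte-Carlo dropout style uncertainty estimate: run several
    stochastic forward passes; the spread of the mean predictions gives
    the epistemic part, the averaged predicted noise variance the
    aleatoric part."""
    means, noise_vars = [], []
    for _ in range(n_passes):
        mean, noise_var = stochastic_model(x, rng)
        means.append(mean)
        noise_vars.append(noise_var)
    epistemic = statistics.pvariance(means)
    aleatoric = sum(noise_vars) / n_passes
    return sum(means) / n_passes, epistemic, aleatoric

# Toy "model": dropout-like noise on the mean, constant predicted noise 0.25.
def toy_model(x, rng):
    return 2.0 * x + rng.gauss(0.0, 0.1), 0.25

rng = random.Random(1)
pred, epi, ale = mc_dropout_predict(toy_model, 3.0, 500, rng)
print(pred, epi, ale)
```

Collecting more data would shrink the epistemic term, while the aleatoric term stays fixed, mirroring the distinction drawn in point 1.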
The document discusses techniques for visual recognition using feature learning, including sparse coding and deep architectures. It summarizes approaches like bag-of-words models using vector quantization and spatial pyramid matching. It then discusses moving beyond these approaches by learning representations from data using sparse coding and deep learning methods to obtain better image classification performance.
The document outlines a framework for developing operator-adapted wavelets for efficient error estimation and adaptive refinement in finite element analysis. It proposes representing the solution as a telescopic sum of two-level errors, which correspond to projections onto complementary wavelet spaces. The framework involves constructing stable wavelet bases for these spaces and using them to obtain compressed representations of the two-level errors. This allows accurate estimation of errors and goal quantities without explicitly computing intermediate solutions.
Information topology, Deep Network generalization and Consciousness quantific... (Pierre BAUDOT)
1. The document discusses using information topology and cohomology to quantify patterns and statistical interactions in complex data.
2. It introduces information cohomology, which defines information functions on simplicial complexes and studies their additive and multiplicative properties.
3. Information functions include entropy, mutual information, and conditional mutual information. Independence of random variables can be characterized topologically as certain information functions equalling zero.
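As a small illustration of point 3, the topological characterization reduces, for two discrete variables, to the familiar fact that independence is equivalent to zero mutual information; a minimal check (illustrative code, not from the document):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint given as {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), q in joint.items():
        px[x] = px.get(x, 0.0) + q
        py[y] = py.get(y, 0.0) + q
    return entropy(px) + entropy(py) - entropy(joint)

# Independent fair coins: MI is 0; perfectly correlated coins: MI is 1 bit.
indep = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
corr = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(indep), mutual_information(corr))
```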
The document discusses the method of multiplicities, an algebraic technique for combinatorics. It involves finding a polynomial that vanishes on a set with high multiplicity. This is applied to problems in list decoding of Reed-Solomon codes, bounding the size of Kakeya sets, and constructing randomness extractors. Specifically, the method is used to improve bounds on list decoding, to show that certain Kakeya sets must be large, and to extract more randomness from weak sources. Propagating multiplicities of derivatives allows a tighter analysis of these problems.
CVPR2010: Advanced ITinCVPR in a Nutshell: part 5: Shape, Matching and Diverg... (zukun)
The document discusses using divergence measures like the Jensen-Shannon divergence to align multiple point sets represented as probability density functions. It motivates using the JS divergence by modeling point sets as mixtures of density functions, and shows how the likelihood ratio between models leads to the JS divergence. It then formulates the problem of group-wise point set registration as minimizing the JS divergence between density functions, combined with a regularization term. Experimental results on aligning multiple 3D hippocampus point sets are also presented.
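A minimal sketch of the divergence at the heart of the method, for discrete densities on a common support (the group-wise registration itself, with density mixtures and regularization, is beyond this snippet):

```python
import math

def kl(p, q):
    """KL(p || q) in bits for discrete distributions on the same support."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetrised, bounded KL against the
    midpoint mixture m = (p + q) / 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical densities have JS 0; disjoint ones reach the maximum of 1 bit.
print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(js_divergence([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Unlike plain KL, the JS divergence is symmetric and always finite, which is what makes it suitable as a registration objective over several point sets at once.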
A discussion on sampling graphs to approximate network classification functions (LARCA UPC)
The problem of network classification consists in assigning a finite set of labels to the nodes of a graph; the underlying assumption is that nodes with the same label tend to be connected via strong paths. This is similar to the assumptions made by graph-based semi-supervised learning algorithms, which build an artificial graph from vectorial data. Such algorithms rely on label-propagation principles, and their accuracy depends heavily on the structure (presence of edges) of the graph.
In this talk I will discuss ideas for sampling the network graph, sparsifying its structure so that semi-supervised algorithms can be applied and the classification function computed efficiently on the network. I will show very preliminary experiments indicating that the sampling technique has an important effect on the final results, and discuss open theoretical and practical questions that remain to be solved.
A copula model to analyze minimum admission scores (Mariela Fernández)
This document discusses using a copula model to analyze minimum admission scores. It introduces the motivation to set minimum scores efficiently using E[Language|Mathematics ≥ m0] and E[Mathematics|Language ≥ l0]. It then provides an overview of copula theory and defines the asymmetric cubic section copula used in the model. The document applies the copula model to admission data from 2010-2011 and examines the results, including final remarks on future work.
The document proposes a new model for solving Weighted Constraint Satisfaction Problems (WCSPs) using a Hopfield neural network approach. It formulates WCSPs as a 0-1 quadratic programming problem subject to linear constraints, which can be solved using the Hopfield neural network. The model was tested on benchmark WCSP instances and was able to find optimal solutions, with the same time complexity as other known methods. The approach recognizes optimal solutions for WCSPs by minimizing an original energy function using the Hopfield neural network.
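A toy sketch of the underlying idea, with a discrete greedy descent standing in for the continuous Hopfield dynamics of the paper (the instance and function names are illustrative): minimize a 0-1 quadratic energy by single-bit flips until no flip lowers the energy.

```python
def energy(x, Q, c):
    """Quadratic 0-1 energy E(x) = x^T Q x + c^T x."""
    n = len(x)
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return quad + sum(c[i] * x[i] for i in range(n))

def hopfield_descent(Q, c, x):
    """Discrete Hopfield-style descent: repeatedly flip any single bit
    that lowers the energy, until no flip helps (a local minimum)."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = list(x)
            y[i] = 1 - y[i]
            if energy(y, Q, c) < energy(x, Q, c):
                x, improved = y, True
    return x

# Toy instance: picking both variables is penalised (constraint violation
# cost), picking each alone is rewarded; the minimiser picks only x0.
Q = [[0, 3], [3, 0]]   # pairwise penalty
c = [-2, -1]           # unary rewards
best = hopfield_descent(Q, c, [0, 0])
print(best, energy(best, Q, c))
```

In a real WCSP encoding the quadratic terms would carry the constraint-violation costs and the linear terms the unary costs, exactly the 0-1 quadratic program the document describes.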
This document summarizes generative models like VAEs and GANs. It begins with an introduction to information theory, defining key concepts like entropy and maximum likelihood estimation. It then contrasts generative models, which estimate the joint distribution P(X,Y), with discriminative models, which estimate P(Y|X). VAEs are discussed as maximizing the evidence lower bound (ELBO) to approximate the latent-variable posterior P(Z|X), allowing generation of new X values. GANs are also covered, defined by their minimax game between a generator G and a discriminator D, with G learning to generate samples resembling the empirical data distribution P_emp.
This document discusses modeling heterogeneity using structural varying coefficient models in the presence of endogeneity. It begins with preliminaries on causality, statistical data analysis, and nonparametric methods. It then discusses how heterogeneity can be modeled using varying coefficient models, some existing methods for estimating varying coefficient models, and examples of applications in economics. It also discusses how instrumental variable estimation can be extended to allow for heterogeneous treatment effects using varying coefficient models.
This document discusses macrocanonical models for texture synthesis. It begins by introducing the goal of texture synthesis and providing a brief history. It then describes the parametric question of combining randomness and structure in images. Specifically, it discusses maximizing entropy under geometric constraints. The document goes on to discuss links to statistical physics, defining microcanonical and macrocanonical models. It focuses on studying the macrocanonical model, describing how to find optimal parameters through gradient descent and how to sample from the model using Langevin dynamics. The document provides examples of texture synthesis and compares results to other methods.
1. The document presents Plug-and-Play priors for Bayesian imaging using Langevin-based sampling methods.
2. It introduces the Bayesian framework for image restoration and discusses challenges in modeling the prior.
3. A Plug-and-Play approach is proposed that uses an implicit prior defined by a denoising network in conjunction with Langevin sampling, termed PnP-ULA. Experiments demonstrate its effectiveness on image deblurring and inpainting tasks.
Kernelization algorithms for graph and other structure modification problems (Anthony Perez)
The document discusses kernelization algorithms for graph modification problems. It begins by introducing graph modification problems, which take as input a graph and property and output the minimum number of modifications to the graph to satisfy the property. It then discusses using parameterized complexity to more efficiently solve NP-hard graph modification problems. In particular, it covers the concept of kernels, which are polynomial-time algorithms that reduce an instance to an equivalent instance of size bounded by a function of the parameter. The document provides an overview of generic reduction rules and the concept of branches that can be applied to graph modification problems. It also introduces the specific problem of proper interval completion and known results about its parameterized complexity.
An integer linear programming formulation and branch-and-cut algorithm for th... (Hernán Berinsky)
This document presents an integer linear programming formulation and branch-and-cut algorithm for the Capacitated m-Ring-Star Problem (CmRSP). The CmRSP involves finding minimum cost rings and connections to visit customers while respecting capacity constraints. The formulation is solved using a branch-and-cut algorithm with valid inequalities and heuristic separation routines. Computational results on benchmark instances show the algorithm outperforms CPLEX by solving larger instances to proven optimality faster and with smaller optimality gaps. Future work involves improving heuristics, branching strategies, and developing new valid inequalities and relaxations.
This document discusses conditional random fields (CRFs), a discriminative structured prediction framework. CRFs model the conditional probability of labels given observations, allowing dependencies between labels and arbitrary features of the input. This is in contrast to hidden Markov models, which are generative and make strong independence assumptions. CRFs can capture long-range dependencies and are discriminatively trained to directly optimize the prediction task. Empirical results show CRFs outperform HMMs and other models on tasks involving higher-order dependencies in synthetic and real-world data like part-of-speech tagging.
1. The document discusses the author's research in three areas: graph-based clustering methods, approximate Bayesian computation (ABC), and Bayesian computation using empirical likelihood.
2. For graph-based clustering, the author presents asymptotic results for spectral clustering as the number of data points and bandwidth approach infinity.
3. For ABC, the author discusses sequential ABC algorithms and challenges of model choice and high-dimensional summary statistics. Machine learning methods are proposed to analyze simulated ABC data.
4. For empirical likelihood, the author proposes using it for Bayesian computation when the likelihood is intractable and simulation is infeasible, as it provides correct confidence intervals unlike composite likelihoods.
Quantitative Propagation of Chaos for SGD in Wide Neural Networks (Valentin De Bortoli)
The document discusses quantitative analysis of stochastic gradient descent (SGD) for training wide neural networks. It presents two different regimes - a deterministic regime where the limiting dynamics is described by an ordinary differential equation, and a stochastic regime where the limiting dynamics is a stochastic differential equation. Experiments on MNIST classification show that the stochastic regime with larger step sizes exhibits better regularization properties. The analysis provides insights into the behavior of neural network training as the number of neurons becomes large.
The document presents an overview of multistrategy learning, which aims to develop learning systems that integrate multiple inferential and computational strategies, such as empirical induction, explanation-based learning, deduction, and genetic algorithms. It describes representative multistrategy learning systems and their applications in domains like knowledge acquisition, planning, scheduling, and decision making. The systems are able to learn from a combination of examples, background knowledge, and inferences to develop more comprehensive models than single strategy learning approaches.
CVPR2010: Advanced ITinCVPR in a Nutshell: part 4: additional slides (zukun)
This document discusses probability density function estimation using isocontours and its applications to image registration and filtering. It proposes estimating densities from image intensities using the areas enclosed by isocontours rather than histograms. This density estimation technique is applied to mutual information-based image registration and anisotropic neighborhood filtering.
This document is an introduction to statistical machine learning presented by Christfried Webers from NICTA and The Australian National University in 2011. It covers topics such as polynomial curve fitting, probability theory, probability densities, and expectations and covariances. It provides examples of fitting polynomial curves to data, comparing models of different orders, testing models on new data, using regularization to constrain model complexity, and concepts from probability theory like the sum and product rules. Figures and tables of coefficients are included to illustrate the concepts.
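The regularized curve-fitting example can be reproduced in a few lines; this sketch uses ridge-penalized normal equations on a noisy sine, in the spirit of the polynomial curve-fitting discussion (the document's exact data and regularizer are not reproduced here).

```python
import numpy as np

def ridge_polyfit(x, y, degree, lam):
    """Fit a polynomial by ridge regression: minimise
    ||V w - y||^2 + lam * ||w||^2, where V is the Vandermonde matrix.
    lam = 0 recovers ordinary least squares; a larger lam shrinks the
    coefficients and tames overfitting."""
    V = np.vander(x, degree + 1)
    return np.linalg.solve(V.T @ V + lam * np.eye(degree + 1), V.T @ y)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)

w_ols = ridge_polyfit(x, y, degree=9, lam=0.0)     # interpolates, overfits
w_reg = ridge_polyfit(x, y, degree=9, lam=1e-3)    # regularized fit
# Regularization shrinks the wildly large degree-9 coefficients.
print(np.abs(w_ols).max(), np.abs(w_reg).max())
```

Comparing the two coefficient vectors reproduces the document's point: the unregularized high-order fit passes through every data point at the cost of enormous coefficients, while the penalized fit stays smooth.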
This document provides an overview of the course "Statistical Learning Theory and Applications" being taught at MIT in the spring of 2003. The course will cover supervised learning theory and algorithms including regularization networks and support vector machines. It will explore applications of learning from examples in various domains including bioinformatics, computer vision, and text classification. The course will take a multidisciplinary approach, exploring learning from the perspectives of mathematics, algorithms, and neuroscience. Students will complete problem sets and a final project, and participation will be part of the grading.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
The document presents an overview of multistrategy learning, which aims to develop learning systems that integrate multiple inferential and computational strategies, such as empirical induction, explanation-based learning, deduction, and genetic algorithms. It describes representative multistrategy learning systems and their applications in domains like knowledge acquisition, planning, scheduling, and decision making. The systems are able to learn from a combination of examples, background knowledge, and inferences to develop more comprehensive models than single strategy learning approaches.
CVPR2010: Advanced ITinCVPR in a Nutshell: part 4: additional slideszukun
This document discusses probability density function estimation using isocontours and its applications to image registration and filtering. It proposes estimating densities from image intensities using the areas enclosed by isocontours rather than histograms. This density estimation technique is applied to mutual information-based image registration and anisotropic neighborhood filtering.
This document is an introduction to statistical machine learning presented by Christfried Webers from NICTA and The Australian National University in 2011. It covers topics such as polynomial curve fitting, probability theory, probability densities, and expectations and covariances. It provides examples of fitting polynomial curves to data, comparing models of different orders, testing models on new data, using regularization to constrain model complexity, and concepts from probability theory like the sum and product rules. Figures and tables of coefficients are included to illustrate the concepts.
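The regularized curve fitting described above has a short closed-form sketch. The helper name `ridge_polyfit` is chosen here for illustration and is not from the lecture; the normal-equations solution shown is one standard way to implement it.

```python
import numpy as np

def ridge_polyfit(x, y, degree, lam):
    """Fit a polynomial by regularized least squares (closed form).

    Minimizes ||A w - y||^2 + lam * ||w||^2, where A is the
    Vandermonde matrix of x (highest power first).
    """
    A = np.vander(x, degree + 1)
    lhs = A.T @ A + lam * np.eye(degree + 1)
    return np.linalg.solve(lhs, A.T @ y)
```

With `lam = 0` this reduces to ordinary least squares; increasing `lam` shrinks the coefficients and constrains model complexity, which is the role regularization plays in the polynomial curve-fitting discussion above.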
This document provides an overview of the course "Statistical Learning Theory and Applications" being taught at MIT in the spring of 2003. The course will cover supervised learning theory and algorithms including regularization networks and support vector machines. It will explore applications of learning from examples in various domains including bioinformatics, computer vision, and text classification. The course will take a multidisciplinary approach, exploring learning from the perspectives of mathematics, algorithms, and neuroscience. Students will complete problem sets and a final project, and participation will be part of the grading.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
The document summarizes the VI Summer School "Achievements and Applications of Contemporary Informatics, Mathematics and Physics" (AACIMP-2011) that took place in Kyiv, Ukraine from August 8-19, 2011. The summer school featured lectures and courses across four streams: Operations Research, Neuroscience, Computer Science, and Innovative Entrepreneurship & Science of Global Challenges. Notable tutors from various universities and research institutions across Europe and the US participated. Over the years the summer school has grown in international participation and now features courses taught entirely in English.
IDA is a full-service design firm with core competencies in wayfinding, environments and products, and has grown to include print and interactive. Our personalized approach and award-winning process are unique. We approach each project with the intention to understand your brand identity, and we work diligently to protect its integrity throughout the design process.
This document summarizes a lecture on cooperative game theory given at the 6th Summer School AACIMP in Kyiv, Ukraine from August 8-20, 2011. The lecture covers basic concepts in cooperative game theory including characteristic functions, imputations, the core, balanced games, the Shapley value, and the Weber set. Examples are provided to illustrate these concepts such as a glove game characteristic function and calculations of the core, Shapley value, and Weber set for this game.
Methods from Mathematical Data Mining (Supported by Optimization)SSA KPI
This document summarizes a presentation on cluster stability estimation and determining the optimal number of clusters in a dataset. The presentation proposes a method that draws random samples from the dataset and compares the partitions obtained from each sample to estimate cluster stability. It quantifies the consistency between partitions using minimal spanning trees and the Friedman-Rafsky test statistic. Experiments on synthetic and real-world datasets show that the method can accurately determine the true number of clusters by finding the partition that maximizes cluster stability.
Special Plenary Lecture at the International Conference on VIBRATION ENGINEERING AND TECHNOLOGY OF MACHINERY (VETOMAC), Lisbon, Portugal, September 10 - 13, 2018
http://www.conf.pt/index.php/v-speakers
Propagation of uncertainties in complex engineering dynamical systems is receiving increasing attention. When uncertainties are taken into account, the equations of motion of discretised dynamical systems can be expressed by coupled ordinary differential equations with stochastic coefficients. The computational cost for the solution of such a system mainly depends on the number of degrees of freedom and number of random variables. Among various numerical methods developed for such systems, the polynomial chaos based Galerkin projection approach shows significant promise because it is more accurate compared to the classical perturbation based methods and computationally more efficient compared to the Monte Carlo simulation based methods. However, the computational cost increases significantly with the number of random variables and the results tend to become less accurate for a longer length of time. In this talk novel approaches will be discussed to address these issues. Reduced-order Galerkin projection schemes in the frequency domain will be discussed to address the problem of a large number of random variables. Practical examples will be given to illustrate the application of the proposed Galerkin projection techniques.
Econometrics of panel data - a presentationdrbojutjub
This document discusses methods for accounting for heterogeneity in slope coefficients in panel data models. It begins by introducing the standard linear panel data model and noting its assumption of homogeneous slopes. It then presents four methods for allowing slopes to vary across individuals: seemingly unrelated regression (SUR), Swamy's random coefficient model, mean group estimation, and testing for heterogeneity. SUR estimates separate regressions simultaneously while accounting for cross-equation correlation. Swamy's model specifies slopes as the sum of population and individual effects. Mean group estimation averages individual OLS estimates. Tests can examine heterogeneity in all or some slopes.
The document discusses key concepts in probability theory and statistical decision making under uncertainty. It covers topics like data generation processes being modelled as random variables, Bayes' rule for calculating conditional probabilities, discriminant functions for classification, and utility theory for making rational decisions. Bayesian networks and influence diagrams are introduced as graphical models for representing conditional independence between variables and making decisions. Finally, the document notes that future chapters will focus on estimating probabilities from data using parametric, semiparametric, and nonparametric approaches.
The document summarizes a presentation on detecting adversary nodes in machine-to-machine communication networks using machine learning-based trust models. It introduces machine-to-machine communication and the need to identify adversary nodes that provide false information. The presentation evaluates several machine learning models—including extreme gradient boosting, random forest, and a proposed binary particle swarm optimization extreme gradient boosting model—and compares their performance on a simulated network with varying percentages of adversary nodes. The proposed model achieved promising results in accurately detecting adversary nodes based on features extracted from node transmission data.
Fuzzy Logic And Application Jntu Model Paper{Www.Studentyogi.Com}guest3f9c6b
This document contains an exam for a course on Fuzzy Logic and Applications. It includes 8 questions covering topics such as operations on crisp and fuzzy sets using Venn diagrams, fuzzy relations, membership functions, fuzzy logic connectives, defuzzification methods, and decision making under fuzzy conditions. Students are instructed to answer any 5 of the 8 questions.
The document discusses clustering, mixture models, and the EM algorithm. It provides an overview of k-means clustering and Gaussian mixture models (GMM). K-means aims to partition observations into K clusters while minimizing within-cluster variance. GMM represents data as a weighted sum of Gaussian distributions. The EM algorithm is introduced for training GMM through maximum likelihood. It iteratively performs E-steps to estimate posterior distribution of latent variables, and M-steps to update model parameters, converging to a local optimum.
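To illustrate the alternating structure the summary above describes, here is a minimal Lloyd's-algorithm sketch for k-means, the hard-assignment analogue of EM for a GMM. The function name `kmeans` and its optional `init` argument are names chosen here, not taken from the document.

```python
import numpy as np

def kmeans(X, k, n_iter=100, init=None, seed=0):
    """Lloyd's algorithm: alternate nearest-centroid assignment
    (E-like step) and centroid recomputation (M-like step)."""
    rng = np.random.default_rng(seed)
    centers = (np.asarray(init, dtype=float) if init is not None
               else X[rng.choice(len(X), k, replace=False)].astype(float))
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

Unlike EM for a GMM, each point is assigned wholly to one cluster; EM would instead carry soft posterior responsibilities through the E-step before updating the mixture parameters.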
Tutorial on Belief Propagation in Bayesian NetworksAnmol Dwivedi
The goal of this mini-project is to implement belief propagation algorithms for posterior probability inference and most probable explanation (MPE) inference for the Bayesian Network with binary values in which the Conditional Probability Table for each random-variable/node is given.
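For intuition, posterior inference on the smallest possible such network (a single binary arc A → B) reduces to one Bayes-rule update, which is exactly the single message belief propagation would pass. The sketch below is a hypothetical illustration under that two-node assumption, not code from the mini-project.

```python
def posterior_parent(p_a, p_b_given_a, b_obs):
    """P(A | B = b_obs) for a two-node binary network A -> B.

    p_a[a]            : prior P(A = a)
    p_b_given_a[a][b] : CPT entry P(B = b | A = a)
    """
    joint = [p_a[a] * p_b_given_a[a][b_obs] for a in (0, 1)]
    z = sum(joint)                 # evidence P(B = b_obs)
    return [j / z for j in joint]  # normalized posterior
```

For example, with a uniform prior and CPT rows [0.9, 0.1] and [0.2, 0.8], observing B = 1 yields the posterior [1/9, 8/9].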
1) The document discusses bias amplification that can occur when using instrumental variable calibration estimators with missing survey data. It presents models where a variable of interest (y) and instrumental variables (z) are related, and response propensity depends on the instrumental variables.
2) When an imperfect proxy for the instrumental variables (x) is used in calibration instead of the true variables, it can lead to bias amplification if the proxy is also related to response propensity. This violates the assumption that the proxy is independent of response given the instrumental variables.
3) A simulation study is presented to illustrate how using an imperfect proxy in calibration can amplify bias compared to the naive estimator that ignores nonresponse. The degree of bias
When Classifier Selection meets Information Theory: A Unifying ViewMohamed Farouk
Classifier selection aims to reduce the size of an ensemble of classifiers in order to improve its efficiency and classification accuracy. Recently an information-theoretic view was presented for feature selection. It derives a space of possible selection criteria and shows that several feature selection criteria in the literature are points within this continuous space. The contribution of this paper is to export this information-theoretic view to solve an open issue in ensemble learning, namely classifier selection. We investigated a couple of information-theoretic selection criteria that are used to rank classifiers.
Catalan Tau Collocation for Numerical Solution of 2-Dimentional Nonlinear Par...IJERA Editor
The tau method, an economized polynomial technique for solving ordinary and partial differential equations with smooth solutions, is modified in this paper for easier computation, accuracy and speed. The modification is based on the systematic use of Catalan polynomials in the collocation tau method and on linearizing the nonlinear part with Adomian polynomials to approximate the solution of 2-dimensional nonlinear partial differential equations. The method involves the direct use of Catalan polynomials in the solution of the linearized partial differential equation without first rewriting them in terms of other known functions, as is commonly practiced. The linearization was carried out using the Adomian polynomial technique. The results obtained are quite comparable with standard collocation tau methods for nonlinear partial differential equations.
Irreducible core and equal remaining obligations rule for mcst gamesvinnief
This document presents an algorithm for solving minimum cost spanning extension (MCSE) problems, which generalize minimum cost spanning tree problems. The algorithm finds a minimum cost extension of an existing network to connect all users to a source, and generates an associated set of cost allocations. It works by sequentially adding edges in order of non-decreasing cost, such that each added edge does not introduce new cycles. The algorithm's output is proved to be contained within the core of the associated MCSE game and is independent of the specific extension constructed. The paper also generalizes the definition of the "irreducible core" to MCSE problems and shows the algorithm's output coincides with this.
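The edge-by-edge construction described above (add cheapest edges that introduce no new cycle) is Kruskal-style. The sketch below shows that core step with a union-find structure, under the simplifying assumption of a plain minimum spanning tree rather than the full MCSE setting with a source and an existing network.

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree on nodes 0..n-1.

    edges: iterable of (cost, u, v); edges are scanned in
    non-decreasing cost order and kept only if they join two
    previously unconnected components (i.e. create no cycle).
    """
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    tree, total = [], 0
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # no cycle: merge the two components
            parent[ru] = rv
            tree.append((u, v, cost))
            total += cost
    return tree, total
```

In the MCSE game the interest is not only the tree but the cost allocation generated while edges are added; that bookkeeping is omitted here.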
Use of the correlation coefficient as a measure of effectiveness of a scoring...Wajih Alaiyan
The document discusses using the correlation coefficient to measure the effectiveness of machine scoring systems compared to human scoring. It provides three applications of using the correlation coefficient: 1) Assigning machine scores that match the expected value of the human score ranking, 2) Assigning machine scores that match the expected human score within clusters of essays, 3) Using instrumental variables to estimate machine scores in a way that maximizes the correlation with human scores. The analysis shows that maximizing the correlation coefficient provides a justified way to measure scoring system effectiveness.
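The effectiveness measure itself is just the Pearson correlation between machine and human scores; a dependency-free sketch follows (the helper name `pearson` is chosen here, not taken from the document).

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)
```

A machine scorer whose outputs are a monotone linear transform of the human scores attains the maximum value 1, which is why maximizing this coefficient is a defensible target for the scoring schemes the document compares.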
In this talk we will describe a methodology to handle causality when making inference on common-cause failure (CCF) in a situation of missing data. The data are collected in the form of a contingency table, but the only available information is the number of CCFs of each order and the number of failures due to a given cause. Therefore only the margins of the contingency table are observed; the frequencies in each cell are unknown. Assuming a Poisson model for the counts, we suggest a Bayesian approach and use the inverse Bayes formula (IBF) combined with a Metropolis-Hastings algorithm to make inference on the rate of occurrence for each (cause, order) combination. The performance of the resulting algorithm is evaluated through simulations. A comparison is made with results obtained from the _-composition approach to causality suggested by Zheng et al. (2013).
This document discusses generative and discriminative classifiers. Generative classifiers model the joint distribution of data and labels, while discriminative classifiers directly model the conditional probability of labels given data. Naive Bayes is an example of a generative classifier, while logistic regression is a discriminative classifier that directly models the probability of a label given input features. The document provides mathematical details on naive Bayes, logistic regression, and how logistic regression can be trained to maximize conditional likelihood through gradient descent.
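The gradient training of logistic regression mentioned above can be sketched directly from the conditional-likelihood gradient Σᵢ xᵢ(yᵢ − pᵢ); the function names and hyperparameters below are illustrative, not the document's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, n_iter=2000):
    """Maximize the conditional log-likelihood sum_i log P(y_i | x_i; w)
    by batch gradient ascent."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend a bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = sigmoid(Xb @ w)                     # model's P(y=1 | x)
        w += lr * Xb.T @ (y - p) / len(y)       # log-likelihood gradient
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return (sigmoid(Xb @ w) >= 0.5).astype(int)
```

Note that no model of P(x) appears anywhere; only P(y | x) is parameterized, which is exactly the generative/discriminative distinction the summary draws against naive Bayes.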
Contribution à l'étude du trafic routier sur réseaux à l'aide des équations d...Guillaume Costeseque
The document discusses traffic flow modeling on road networks. It begins by motivating the use of Hamilton-Jacobi equations to model traffic at a macroscopic scale on networks. It then provides an introduction to traffic modeling, including microscopic and macroscopic models. It focuses on the Lighthill-Whitham-Richards model and discusses higher-order models. It also discusses how microscopic models can be homogenized to derive macroscopic models using Hamilton-Jacobi equations. Finally, it discusses multi-anticipative traffic models and numerical schemes for solving the equations.
Visual Attention Convergence Index for Virtual Reality ExperiencesPawel Kobylinski
The paper introduces a novel quantitative method in the domain of eye tracking (ET) for virtual reality (VR). The method might be of interest to researchers on the human factor in VR, behavioral psychologists, and designers of VR experiences. Several mathematical formulas describing a novel index quantifying convergence of visual attention are introduced. The index is based on recently developed distance variance, a function of distances between observations in metric spaces. An aggregated version of the visual attention convergence index introduced in the paper allows to measure the effectiveness of any system of attentional cues employed by a designer to guide the attention of VR experience participants along an intended narration line. An individual version of the index allows to capture individual differences in the convergence of visual attention across participants. Possibilities for real-life and academic usage of the index are discussed and example results of application to real VR ET data are summarized.
Cite as:
Kobylinski P., Pochwatko G. (2020) Visual Attention Convergence Index for Virtual Reality Experiences. In: Ahram T., Taiar R., Colson S., Choplin A. (eds) Human Interaction and Emerging Technologies. IHIET 2019. Advances in Intelligent Systems and Computing, vol 1018. Springer, Cham
https://link.springer.com/chapter/10.1007/978-3-030-25629-6_48
Mathematicians use univariate and multivariate analyses to predict the future. For univariate analysis, they use Ornstein-Uhlenbeck and autoregressive models to analyze time series data. For multivariate analysis, they use linear regression to analyze correlations between multiple time series and predict values. Their analyses generate forecasts, confidence bands around predictions, and evaluations of prediction errors. The conclusion indicates the methods provide useful predictions and that inflation rates are correlated across measures.
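A minimal version of the univariate AR(1) step described above (estimate the autoregressive coefficient, then forecast) can be sketched as follows; both helper names are hypothetical, and a zero-mean process is assumed for simplicity.

```python
import numpy as np

def fit_ar1(x):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + eps_t
    (zero-mean series assumed)."""
    prev, curr = x[:-1], x[1:]
    return float(prev @ curr / (prev @ prev))

def forecast_ar1(phi, last, steps):
    """Point forecasts; they decay geometrically toward the zero mean,
    mirroring the mean reversion of an Ornstein-Uhlenbeck process."""
    return [phi ** k * last for k in range(1, steps + 1)]
```

The Ornstein-Uhlenbeck model mentioned alongside AR is its continuous-time counterpart: sampling an OU process at fixed intervals yields exactly an AR(1).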
Similar to How to Handle Interval Solutions for Cooperative Interval Games (20)
This document discusses student organizations and the university system in Germany. It provides an overview of the different types of higher education institutions in Germany, including universities, universities of applied sciences, and arts universities. It describes the degree system including bachelor's, master's, and Ph.D. programs. It also outlines the systems of student participation at universities, using the examples of Leipzig and Hanover. Student councils, departments, and faculty student organizations are discussed.
The document discusses grand challenges in energy and perspectives on moving towards more sustainable systems. It notes that while global energy demand and CO2 emissions rebounded in 2010 after the economic downturn, urgent changes are still needed. It explores perspectives on changing direction, including overcoming barriers like technologies, economies, management, and mindsets. The document advocates a systems approach and backcasting from desirable futures to identify pathways for transitioning between states.
Engineering can play an important role in sustainable development by focusing on meeting human needs over wants and prioritizing projects that serve the most vulnerable populations. Engineers should consider how their work impacts sustainability, affordability, and accessibility. A socially sustainable product is manufactured sustainably and also improves people's lives. Engineers are not neutral and should strive to serve societal needs rather than just generate profits. They can help redefine commerce and an engineering culture focused on meeting needs sustainably through services rather than creating unnecessary products and infrastructure.
Consensus and interaction on a long term strategy for sustainable developmentSSA KPI
The document discusses the need for a long-term vision for sustainable development to address major challenges like climate change, resource depletion, and inequity. A long-term perspective is required because these problems will take consistent action over many years to solve. However, short-term solutions may counteract long-term goals if not guided by an overall strategic vision. Developing a widely accepted long-term sustainable development vision requires input from many stakeholders to find balanced solutions and avoid dead ends. Strategic decisions with long-lasting technological and social consequences need a vision that can adapt to changing conditions over time.
Competences in sustainability in engineering educationSSA KPI
The document discusses competencies in sustainability for engineering education. It defines competencies and lists taxonomies that classify competencies into categories like knowledge, skills, attitudes, and ethics. Engineering graduates are expected to have competencies like critical thinking, systemic thinking, and interdisciplinarity. Analysis of competency frameworks from different universities found that competencies are introduced at varying levels, from basic knowledge to complex problem solving and valuing sustainability challenges. The document also outlines the University of Polytechnic Catalonia's framework for its generic sustainability competency.
The document discusses concepts related to sustainability including carrying capacity, ecological footprint, and the IPAT equation. It provides data on historical and projected world population growth. Examples are given showing the ecological footprint of different countries and how it is calculated based on factors like energy use, agriculture, transportation, housing, goods and services. The human development index is also introduced as a broader measure than GDP for assessing well-being. Graphs illustrate the relationship between increasing HDI, ecological footprint, and the goal of transitioning to sustainable development.
From Huygens odd sympathy to the energy Huygens' extraction from the sea wavesSSA KPI
Huygens observed that two pendulum clocks suspended near each other would synchronize their swings to be 180 degrees out of phase. He conducted experiments that showed the synchronization was caused by small movements transmitted through their common frame. While this discovery did not help solve the longitude problem as intended, it sparked further investigations into coupled oscillators and synchronization phenomena.
1) The document discusses whether dice rolls and other mechanical randomizers can truly produce random outcomes from a dynamics perspective.
2) It analyzes the equations of motion for different dice shapes and coin tossing, showing that outcomes are theoretically predictable if initial conditions can be reproduced precisely.
3) However, in reality small uncertainties in initial conditions mean mechanical randomizers can approximate random processes, even if they are deterministic based on their underlying dynamics.
This document discusses the concept of energy security costs. It defines energy security costs as externalities associated with short-term macroeconomic adjustments to changes in energy prices and long-term impacts of monopoly or monopsony power in energy markets. The document provides references on calculating health and environmental impacts of electricity generation and assessing costs and benefits of oil imports. It also outlines a proposed 4-hour course on basic concepts, examples, and a case study analyzing energy security costs for Ukraine based on impacts of increasing natural gas import prices.
Naturally Occurring Radioactivity (NOR) in natural and anthropic environmentsSSA KPI
This document provides an overview of naturally occurring radioactivity (NOR) and naturally occurring radioactive materials (NORM) with a focus on their relevance to the oil and gas industry. It discusses the main radionuclides of interest, including radium-226, radium-228, uranium, radon-222, and lead-210. It also summarizes the origins of NORM in the oil and gas industry and the types of radiation emitted by NORM.
Advanced energy technology for sustainable development. Part 5SSA KPI
All energy technologies involve risks that must be carefully evaluated and minimized to ensure sustainable development. No technology is perfectly safe, so ongoing analysis of benefits, risks and impacts is needed. Public understanding and acceptance of risks is also important.
Advanced energy technology for sustainable development. Part 4SSA KPI
The document discusses the impacts and benefits of energy technology research, using fusion research as a case study. It outlines four pathways through which energy research can impact economies and societies: 1) direct economic effects, 2) impacts on local communities, 3) impacts on industrial technology capabilities, and 4) long-term impacts on energy markets and technologies. It then analyzes the direct and indirect economic impacts of fusion research investments and the technical spin-offs that fusion research has produced. Finally, it evaluates the potential future role of fusion electricity in global energy markets under environmental constraints.
Advanced energy technology for sustainable development. Part 3SSA KPI
This document discusses using fusion energy for sustainable development through biomass conversion. It proposes a system where fusion energy is used to provide heat for gasifying biomass into synthetic fuels like methane and diesel. Experiments show biomass can be over 95% converted to hydrogen, carbon monoxide and methane gases using nickel catalysts at temperatures of 600-1000 degrees Celsius. A conceptual biomass reactor is presented that could process 6 million tons of biomass per year, consisting of 70% cellulose and 30% lignin, into synthetic fuels to serve as carbon-neutral transportation fuels. Fusion energy could provide the high heat needed for the gasification and synthesis processes.
Advanced energy technology for sustainable development. Part 2SSA KPI
The document summarizes fusion energy technology and its potential for sustainable development. Fusion occurs at extremely high temperatures and is the process that powers the Sun and stars. Researchers are working to develop fusion energy on Earth using hydrogen isotopes as fuel. Key challenges include confining the hot plasma long enough at high density for fusion reactions to produce net energy gain. Progress is being made towards achieving the conditions needed for a sustainable fusion reaction as defined by Lawson's criteria.
Advanced energy technology for sustainable development. Part 1SSA KPI
1. The document discusses the concept of sustainability and sustainable systems. It provides an example of a closed ecosystem with algae, water fleas, and fish, where energy and material balances must be maintained for long-term stability.
2. Key requirements for a sustainable system include energy balance between inputs and outputs, recycling of materials or wastes, and mechanisms to control population relationships and prevent overconsumption of resources.
3. Historically, the environment was seen as external and unchanging, but it is now recognized that the environment co-evolves interactively with the living creatures within it.
This document discusses the use of fluorescent proteins in current biological research. It begins with an overview of the development of optical microscopy and fluorescence techniques. It then focuses on the green fluorescent protein (GFP) and how it has been used as a molecular tag to study protein expression and interactions in living cells through techniques like gene delivery, transfection, viral infection, FRET, and optogenetics. The document concludes that fluorescent proteins have revolutionized cell biology by enabling the real-time visualization and control of molecular pathways and signaling processes in living systems.
Neurotransmitter systems of the brain and their functionsSSA KPI
1. Neurotransmitters are chemical substances released at synapses that transmit signals between neurons. The main neurotransmitters in the brain are acetylcholine, serotonin, dopamine, norepinephrine, glutamate, GABA, and endorphins.
2. Each neurotransmitter system is involved in regulating key brain functions and behaviors such as movement, mood, sleep, cognition, and pain perception.
3. Neurotransmitters act via membrane receptors on target neurons, including ionotropic receptors that are ligand-gated ion channels and metabotropic G-protein coupled receptors.
This presentation was provided by Racquel Jemison, Ph.D., Christina MacLaughlin, Ph.D., and Paulomi Majumder, Ph.D., all of the American Chemical Society, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
How Barcodes Can Be Leveraged Within Odoo 17Celine George
In this presentation, we will explore how barcodes can be leveraged within Odoo 17 to streamline our manufacturing processes. We will cover the configuration steps, how to utilize barcodes in different manufacturing scenarios, and the overall benefits of implementing this technology.
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
Beyond Degrees - Empowering the Workforce in the Context of Skills-First.pptxEduSkills OECD
Iván Bornacelly, Policy Analyst at the OECD Centre for Skills, OECD, presents at the webinar 'Tackling job market gaps with a skills-first approach' on 12 June 2024
How to Handle Interval Solutions for Cooperative Interval Games
6th Summer School AACIMP - Kyiv Polytechnic Institute (KPI) - National Technical University of Ukraine, 8-20 August 2011
Cooperative Game Theory. Operations Research Games. Applications to Interval Games
Lecture 7: How to Handle Interval Solutions for Cooperative Interval Games
Sırma Zeynep Alparslan Gök
Süleyman Demirel University
Faculty of Arts and Sciences
Department of Mathematics
Isparta, Turkey
email: zeynepalparslan@yahoo.com
August 13-16, 2011
Outline
Introduction
Allocation rules
The one-stage procedure
The multi-stage procedure
Final remarks
References
Introduction
This lecture is based on the paper
"How to handle interval solutions for cooperative interval games" by Branzei, Tijs and Alparslan Gök,
which was published in International Journal of Uncertainty, Fuzziness and Knowledge-based Systems.
Motivation
Uncertainty accompanies almost every situation in our lives
and it influences our decisions.
On many occasions uncertainty is so severe that we can only
predict some upper and lower bounds for the outcome of our
(collaborative) actions, i.e., payoffs lie in some intervals.
Cooperative interval games have proved useful for solving reward/cost sharing problems in situations with interval data in a cooperative environment (see Branzei et al. (2010) for a survey).
A natural way to incorporate the uncertainty of coalition
values into the solution of such reward/cost sharing problems
is by using interval solution concepts.
Related literature
Many papers have appeared on modeling economic and Operations Research situations with interval data by using game theory, in particular cooperative interval games, as a tool:
Branzei, Dimitrov and Tijs (2003), Alparslan Gök, Miquel and Tijs (2009), Alparslan Gök (2009), Branzei et al. (2010), Branzei, Mallozzi and Tijs (2010), Yanovskaya, Branzei and Tijs (2010), Kimms and Drechsel (2009).
Bauso and Timmer (2009) introduce dynamics into the theory of cooperative interval games, whereas Mallozzi, Scalzo and Tijs (2011) extend some results from the theory of cooperative interval games by considering coalition values given by means of fuzzy intervals.
Cooperative interval games
⟨N, w⟩, N = {1, 2, …, n}: set of players
w : 2^N → I(R): characteristic function, w(∅) = [0, 0]
w(S) = [w̲(S), w̄(S)]: worth (value) of coalition S
w̲(S): the lower bound, w̄(S): the upper bound of the interval w(S)
I(R): the set of all closed and bounded intervals in R
I(R)^N: the set of all n-dimensional vectors with components in I(R)
IG^N: the class of all interval games with player set N
Interval solution concepts
An interval solution concept on IG^N is a map assigning to each interval game w ∈ IG^N a set of n-dimensional vectors whose components belong to I(R).
The interval imputation set:
I(w) = {(I_1, …, I_n) ∈ I(R)^N | Σ_{i∈N} I_i = w(N), I_i ≽ w(i), ∀i ∈ N}.
The interval core:
C(w) = {(I_1, …, I_n) ∈ I(w) | Σ_{i∈S} I_i ≽ w(S), ∀S ∈ 2^N \ {∅}}.
The interval Shapley value Φ : SMIG^N → I(R)^N:
Φ(w) = (1/n!) Σ_{σ∈Π(N)} m^σ(w), for each w ∈ SMIG^N.
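The interval core condition above can be checked mechanically. The following sketch (our own, not from the lecture) tests membership in the interval core, assuming the weak-better order on intervals used for interval games: [a, b] ≽ [c, d] iff a ≥ c and b ≥ d; intervals are (lower, upper) pairs and `in_interval_core` is an illustrative name.

```python
from itertools import combinations

def weakly_better(I, J):
    # Interval order assumed here: both bounds of I dominate those of J.
    return I[0] >= J[0] and I[1] >= J[1]

def in_interval_core(N, w, allocation):
    """allocation: dict mapping player -> (lower, upper) interval payoff."""
    players = sorted(N)
    # Efficiency: the interval sum over N must equal w(N) exactly.
    total = (sum(allocation[i][0] for i in players),
             sum(allocation[i][1] for i in players))
    if total != w(frozenset(players)):
        return False
    # Stability: every nonempty coalition S must get at least w(S).
    for k in range(1, len(players) + 1):
        for c in combinations(players, k):
            s = (sum(allocation[i][0] for i in c),
                 sum(allocation[i][1] for i in c))
            if not weakly_better(s, w(frozenset(c))):
                return False
    return True

# The three-person game from the lecture's later example.
def w(S):
    if S == frozenset({1, 3}):
        return (20, 30)
    if S in (frozenset({2, 3}), frozenset({1, 2, 3})):
        return (50, 90)
    return (0, 0)

# ([0, 0], [30, 60], [20, 30]) meets every coalition constraint of this game:
alloc = {1: (0, 0), 2: (30, 60), 3: (20, 30)}
print(in_interval_core({1, 2, 3}, w, alloc))  # True
```

Note that the interval Shapley value of this game is not in its interval core (coalition {2, 3} gets less than [50, 90]), so core membership has to be verified, not assumed.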
Interval solution concepts
The payoff vectors x = (x_1, x_2, …, x_n) ∈ R^N from classical cooperative transferable utility (TU) game theory are replaced by n-dimensional vectors (J_1, …, J_n) ∈ I(R)^N, where J_i = [J̲_i, J̄_i], i ∈ N.
The players' agreement on a particular interval allocation (J_1, …, J_n) based on an interval solution concept merely says that the payoff x_i that player i will receive when the outcome of the grand coalition is known belongs to the interval J_i.
A procedure to transform an interval allocation J = (J_1, …, J_n) ∈ I(R)^N into a payoff vector x = (x_1, …, x_n) ∈ R^N is therefore a basic ingredient of contracts that people or businesses have to sign when they cannot estimate with certainty the attainable coalition payoff(s).
Allocation rules
Let N be a set of players that consider cooperation under interval uncertainty of coalition values, i.e. knowing what each group S of players (coalition) can obtain between two bounds, w̲(S) and w̄(S), via cooperation.
If the players use cooperative game theory as a tool, they can choose an interval solution concept, say the value-type solution Ψ, that associates with the related cooperative interval game ⟨N, w⟩ the interval allocation Ψ(w) = (J_1, …, J_n), which guarantees for each player i ∈ N a final payoff within the interval J_i = [J̲_i, J̄_i] when the value of the grand coalition is known.
Clearly, w̲(N) = Σ_{i∈N} J̲_i and w̄(N) = Σ_{i∈N} J̄_i. For each i ∈ N the interval [J̲_i, J̄_i] can be seen as the interval claim of i on the realization R of the payoff for the grand coalition N (w̲(N) ≤ R ≤ w̄(N)).
One should determine payoffs x_i ∈ [J̲_i, J̄_i], i ∈ N (the feasibility condition), such that Σ_{i∈N} x_i = R (the efficiency condition).
Notice that in the case R = w̲(N) the payoff vector x equals (J̲_1, …, J̲_n), and in the case R = w̄(N) we have x = (J̄_1, …, J̄_n), but in the case w̲(N) < R < w̄(N) there are infinitely many ways to determine allocations (x_1, …, x_n) satisfying both the efficiency and the feasibility conditions.
In the last case, we need suitable allocation rules to determine fair allocations (x_1, …, x_n) of R satisfying the above conditions.
As players prefer payoffs as large as possible and the amount R to be divided between them is smaller than Σ_{i∈N} J̄_i, the players are facing a bankruptcy-like situation, implying that bankruptcy rules are good candidates for transforming an interval allocation (J_1, …, J_n) into a payoff vector (x_1, …, x_n).
Bankruptcy rules
A bankruptcy situation with set of claimants N is a pair (E, d), where E ≥ 0 is the estate to be divided and d ∈ R^N_+ is the vector of claims such that Σ_{i∈N} d_i ≥ E. We denote by BR^N the set of bankruptcy situations with player set N.
A bankruptcy rule is a function f : BR^N → R^N which assigns to each bankruptcy situation (E, d) ∈ BR^N a payoff vector f(E, d) ∈ R^N such that 0 ≤ f(E, d) ≤ d (reasonability) and Σ_{i∈N} f_i(E, d) = E (efficiency).
We only use three bankruptcy rules: the proportional rule (PROP), the constrained equal awards (CEA) rule and the constrained equal losses (CEL) rule.
The rule PROP is defined by
PROP_i(E, d) = (d_i / Σ_{j∈N} d_j) · E
for each bankruptcy problem (E, d) and all i ∈ N.
The rule CEA is defined by
CEA_i(E, d) = min{d_i, α},
where α is determined by Σ_{i∈N} CEA_i(E, d) = E,
for each bankruptcy problem (E, d) and all i ∈ N.
The rule CEL is defined by
CEL_i(E, d) = max{d_i − β, 0},
where β is determined by Σ_{i∈N} CEL_i(E, d) = E,
for each bankruptcy problem (E, d) and all i ∈ N.
We introduce the notation F = {CEA, CEL, PROP} and let f ∈ F. The choice of one specific f ∈ F in a certain bankruptcy situation is based on the preference of the players involved in that situation; other bankruptcy rules could also be considered as elements of a larger F.
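The three rules can be sketched in exact rational arithmetic as follows (the function names are ours; CEL is computed via its standard duality with CEA, where the losses d_i − CEL_i(E, d) are the CEA awards for the total loss Σ d_i − E):

```python
from fractions import Fraction

def prop(E, d):
    """Proportional rule: shares proportional to the claims."""
    total = sum(d)
    return [Fraction(E) * di / total for di in d]

def cea(E, d):
    """Constrained equal awards: each claimant gets min(d_i, alpha),
    alpha set so the awards sum to E; process claims in ascending order."""
    n = len(d)
    awards = [Fraction(0)] * n
    remaining = Fraction(E)
    for k, i in enumerate(sorted(range(n), key=lambda j: d[j])):
        # Equal split among those still unserved, capped at the claim.
        awards[i] = min(Fraction(d[i]), remaining / (n - k))
        remaining -= awards[i]
    return awards

def cel(E, d):
    """Constrained equal losses: each claimant loses min(d_i, beta)."""
    losses = cea(sum(d) - E, d)
    return [Fraction(di) - li for di, li in zip(d, losses)]

# Claims from the lecture's later example: d = (1 2/3, 16 2/3, 21 2/3), E = 10.
d = [Fraction(5, 3), Fraction(50, 3), Fraction(65, 3)]
print(prop(10, d))  # 5/12, 25/6, 65/12  i.e. (5/12, 4 1/6, 5 5/12)
print(cea(10, d))   # 5/3, 25/6, 25/6    i.e. (1 2/3, 4 1/6, 4 1/6)
print(cel(10, d))   # 0, 5/2, 15/2       i.e. (0, 2 1/2, 7 1/2)
```

These outputs reproduce the p-row of the table in the one-stage example below.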
When the value of the grand coalition becomes known in multiple
stages, i.e., updated estimates of the outcome of cooperation
within the grand coalition are considered during an allocation
process, more general division problems than bankruptcy problems
may arise.
We present the rights-egalitarian rule f^RE, defined by
f_i^RE(E, d) = d_i + (1/n)(E − Σ_{i∈N} d_i), for each division problem (E, d) and all i ∈ N.
The rights-egalitarian rule divides equally among the agents the difference between the total claim D = Σ_{i∈N} d_i and the available amount E, being suitable for all circumstances of division problems; in particular, the amount to be divided can be either positive or negative, the vector of claims d = (d_1, …, d_n) may have negative components, and the amount to be divided may exceed or fall short of the total claim D.
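The formula for f^RE translates directly into code (a sketch; the function name is ours). Each agent receives their claim plus an equal share of the surplus or deficit E − Σ d_i, so no sign restrictions are needed:

```python
from fractions import Fraction

def rights_egalitarian(E, d):
    """f^RE: claim plus an equal share of the residual E - sum(d)."""
    n = len(d)
    residual = Fraction(E) - sum(d)
    return [Fraction(di) + residual / n for di in d]

# The estate may exceed the total claim: dividing E = 12 over claims (2, 4)
# gives each agent their claim plus 3.
print(rights_egalitarian(12, [2, 4]))   # 5, 7
# It may also fall short, producing a negative component here:
print(rights_egalitarian(1, [2, 4]))    # -1/2, 3/2
```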
The one-stage procedure
Let (J_1, …, J_n) be an interval allocation, with J_i = [J̲_i, J̄_i], i ∈ N, satisfying Σ_{i∈N} J̲_i = w̲(N) and Σ_{i∈N} J̄_i = w̄(N), and let R be the realization of w(N).
One can write R and J̄_i, i ∈ N, as:
R = w̲(N) + (R − w̲(N)),   (1)
J̄_i = J̲_i + (J̄_i − J̲_i),   (2)
implying that the problem (R − w̲(N), (J̄_i − J̲_i)_{i∈N}) is a bankruptcy problem. Since R is the realization of w(N), one can expect that
w̲(N) ≤ R ≤ w̄(N).   (3)
Next we describe and illustrate a simple (one-stage) procedure to transform an interval allocation (J_1, …, J_n) ∈ I(R)^N into a payoff vector x = (x_1, …, x_n) ∈ R^N which satisfies
J̲_i ≤ x_i ≤ J̄_i for each i ∈ N;   (4)
Σ_{i∈N} x_i = R.   (5)
The one-stage procedure (in the case when the value of the grand coalition becomes known at once) uses as input data an interval allocation (J_1, …, J_n), the realized value of the grand coalition, R, and function(s) specifying the division rule(s) for distributing the amount R over the players. It determines for each player i ∈ N a payoff x_i ∈ R such that J̲_i ≤ x_i ≤ J̄_i, and Σ_{i∈N} x_i = R.
Procedure One-Stage;
Input data: n, (J_i)_{i=1,…,n}, R;
function f;
begin
compute w̲(N) := Σ_{i∈N} J̲_i;
for i = 1 to n do
d_i := J̄_i − J̲_i
{endfor}
for i = 1 to n do
p_i := f_i(R − w̲(N), (d_j)_{j=1,…,n})
{endfor}
for i = 1 to n do
x_i := J̲_i + p_i
{endfor}
Output data: x = (x_1, …, x_n);
{end procedure}.
Example
Let ⟨N, w⟩ be the three-person interval game with w(S) = [0, 0] if 3 ∉ S, w(∅) = w(3) = [0, 0], w(1, 3) = [20, 30] and w(N) = w(2, 3) = [50, 90]. We assume that the realization of w(N) is R = 60 and consider that cooperation within the grand coalition was settled based on the use of the interval Shapley value. Then, Φ(w) = ([3 1/3, 5], [18 1/3, 35], [28 1/3, 50]).
We determine individual uncertainty-free shares by distributing the amount R − w̲(N) = 10 among the three agents. Note that we deal here with a classical bankruptcy problem (E, d) with E = 10, d = (1 2/3, 16 2/3, 21 2/3).
Example continued
Using the one-stage procedure three times with PROP, CEA and CEL in the role of f, respectively, we have

f   PROP(E, d)              CEA(E, d)               CEL(E, d)
p   (5/12, 4 1/6, 5 5/12)   (1 2/3, 4 1/6, 4 1/6)   (0, 2 1/2, 7 1/2)

Then, we obtain x as (3 1/3, 18 1/3, 28 1/3) + f(10, (1 2/3, 16 2/3, 21 2/3)), f ∈ F, shown in the next table.

f   PROP(E, d)                CEA(E, d)             CEL(E, d)
x   (3 3/4, 22 1/2, 33 3/4)   (5, 22 1/2, 32 1/2)   (3 1/3, 20 5/6, 35 5/6)
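The one-stage pseudocode, instantiated with PROP as f, can be sketched as follows (our own Fraction-based transcription); run on the interval Shapley value of this example, it reproduces the PROP column of the table above:

```python
from fractions import Fraction

def prop(E, d):
    # Proportional bankruptcy rule.
    total = sum(d)
    return [Fraction(E) * di / total for di in d]

def one_stage(J, R, f):
    """J: list of (lower, upper) interval payoffs; R: realized value of N."""
    lower_N = sum(lo for lo, hi in J)      # w_lower(N)
    d = [hi - lo for lo, hi in J]          # claims d_i = upper_i - lower_i
    p = f(R - lower_N, d)                  # divide the surplus R - w_lower(N)
    return [lo + pi for (lo, hi), pi in zip(J, p)]

# Interval Shapley value of the example:
# ([3 1/3, 5], [18 1/3, 35], [28 1/3, 50]).
Phi = [(Fraction(10, 3), Fraction(5)),
       (Fraction(55, 3), Fraction(35)),
       (Fraction(85, 3), Fraction(50))]
x = one_stage(Phi, 60, prop)
print(x)  # 15/4, 45/2, 135/4  i.e. (3 3/4, 22 1/2, 33 3/4)
```

Swapping in CEA or CEL for `f` yields the other two columns of the table.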
Remark
First, since R satisfies (3), one idea is to determine λ ∈ [0, 1] such that
R = λw̲(N) + (1 − λ)w̄(N),   (6)
and give to each i ∈ N the payoff
x_i = λJ̲_i + (1 − λ)J̄_i.   (7)
Note that J̲_i ≤ x_i ≤ J̄_i and
Σ_{i∈N} x_i = λ Σ_{i∈N} J̲_i + (1 − λ) Σ_{i∈N} J̄_i = λw̲(N) + (1 − λ)w̄(N) = R.
So, x satisfies conditions (4) and (5).
Remark continued
Now, we notice that we can also write x = J̲ + (1 − λ)(J̄ − J̲). So, the payoff for player i ∈ N can be obtained in the following manner: first, each player i ∈ N is allocated the amount J̲_i; second, the amount R − Σ_{i∈N} J̲_i is distributed over the players proportionally to J̄_i − J̲_i, i ∈ N, which is equivalent to using the bankruptcy rule PROP for a bankruptcy problem (E, d), where the estate E equals R − Σ_{i∈N} J̲_i and the claims d_i are equal to J̄_i − J̲_i for each i ∈ N.
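This equivalence is easy to confirm numerically. A quick check (ours, using the intervals of the example above): the λ-interpolation payoff (7) coincides with the PROP-based one-stage allocation.

```python
from fractions import Fraction

# Interval Shapley value of the example and the realized value R = 60.
J = [(Fraction(10, 3), Fraction(5)),
     (Fraction(55, 3), Fraction(35)),
     (Fraction(85, 3), Fraction(50))]
R = Fraction(60)

lower_N = sum(lo for lo, hi in J)             # 50
upper_N = sum(hi for lo, hi in J)             # 90
lam = (upper_N - R) / (upper_N - lower_N)     # solves (6): here lam = 3/4

# Payoff (7): interpolate each interval with the same lambda.
x_interp = [lam * lo + (1 - lam) * hi for lo, hi in J]

# One-stage with PROP: surplus R - w_lower(N) split proportionally to d_i.
d = [hi - lo for lo, hi in J]
x_prop = [lo + (R - lower_N) * di / sum(d) for (lo, hi), di in zip(J, d)]

print(x_interp == x_prop)  # True
```

The identity holds in general because p_i = (R − w̲(N)) · d_i / Σ d_j = (1 − λ) d_i once (6) is substituted.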
The multi-stage procedure
The multi-stage procedure (in the case when the value of the grand coalition becomes known in multiple stages, say T) uses as input data an interval allocation (J_1, …, J_n), a related sequence of observed outcomes for the grand coalition, R^(1), …, R^(T), and function(s) specifying the division rule(s) for distributing the amount R^(t) − R^(t−1) over the players at stage t, t = 1, …, T. It determines for each player i ∈ N a payoff x_i ∈ R such that J̲_i ≤ x_i ≤ J̄_i, and Σ_{i∈N} x_i = R^(T).
In this section we introduce some dynamics in allocation processes for procedures to transform an interval allocation (J_1, …, J_n) ∈ I(R)^N into a payoff vector x ∈ R^N satisfying conditions (4) and (5).
We assume that a finite sequence of updated estimates of the outcome of the grand coalition, R^(t) with t ∈ {1, 2, …, T}, is available because the value of the grand coalition is known in multiple stages, where
w̲(N) ≤ R^(1) ≤ R^(2) ≤ … ≤ R^(T) ≤ w̄(N).   (8)
At any stage t ∈ {1, 2, …, T} a budget of fixed size, R^(t) − R^(t−1), where R^(0) = w̲(N), is distributed among the players.
The decision as to which portion of the budget each player will receive at that stage depends on the historical allocation and is specified by a predetermined allocation rule. As allocation rules at each stage we consider either a bankruptcy rule f (in the case when a bankruptcy problem arises) or a general division rule (for example f^RE) otherwise.
Procedure Multi-Stage;
Input data: n, (J_i)_{i=1,…,n}, T, (R^(t))_{t=1,…,T};
function f, g;
begin
compute w̲(N) := Σ_{i∈N} J̲_i;
R^(0) := w̲(N);
for i = 1 to n do
d_i := J̄_i − J̲_i; sp_i := 0
{endfor}
for t = 1 to T do
begin
D := 0;
for i = 1 to n do
D := D + d_i
{endfor}
if D > R^(t) − R^(t−1)
then for i = 1 to n do p_i := f_i(R^(t) − R^(t−1), (d_j)_{j=1,…,n}) {endfor}
else for i = 1 to n do p_i := g_i(R^(t) − R^(t−1), (d_j)_{j=1,…,n}) {endfor}
{endif}
for i = 1 to n do
d_i := d_i − p_i;
sp_i := sp_i + p_i
{endfor}
{end}
{endfor}
for i = 1 to n do
x_i := J̲_i + sp_i
{endfor}
Output data: x = (x_1, …, x_n);
{end procedure}.
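The procedure above can be sketched in Python as follows (our own transcription, with PROP as the bankruptcy rule f and the rights-egalitarian rule as the general rule g); it is checked against the lecture's three-stage example on the next slides:

```python
from fractions import Fraction

def prop(E, d):
    return [Fraction(E) * di / sum(d) for di in d]

def rights_egalitarian(E, d):
    res = Fraction(E) - sum(d)
    return [di + res / len(d) for di in d]

def multi_stage(J, R_seq, f=prop, g=rights_egalitarian):
    """J: list of (lower, upper) intervals; R_seq: estimates R^(1..T)."""
    lower = [lo for lo, hi in J]
    d = [hi - lo for lo, hi in J]              # initial claims
    sp = [Fraction(0)] * len(J)                # accumulated portions
    prev = sum(lower)                          # R^(0) = w_lower(N)
    for R in R_seq:
        budget = R - prev
        # Bankruptcy rule if claims exceed the stage budget, else rule g.
        rule = f if sum(d) > budget else g
        p = rule(budget, d)
        d = [di - pi for di, pi in zip(d, p)]  # updated claims
        sp = [si + pi for si, pi in zip(sp, p)]
        prev = R
    return [lo + si for lo, si in zip(lower, sp)]

# Interval Shapley value of the running example, with estimates 60, 65, 80.
Phi = [(Fraction(10, 3), Fraction(5)),
       (Fraction(55, 3), Fraction(35)),
       (Fraction(85, 3), Fraction(50))]
x = multi_stage(Phi, [60, 65, 80])
print(x)  # 55/12, 185/6, 535/12  i.e. (4 7/12, 30 5/6, 44 7/12)
```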
Remarks
We notice that the One-Stage procedure appears as a special case of the Multi-Stage procedure where T = 1. At each stage t ∈ {1, …, T} of the allocation process the fixed amount R^(t) − R^(t−1), where R^(t) is the estimate of the payoff for the grand coalition at stage t, with R^(0) = w̲(N), is distributed among the players by taking into account the players' claims as updated at the previous stage, d_i, i ∈ N, to determine the payoff portions p_i, i ∈ N.
Remarks continued
The calculation of the individual payoff portions is done using the specified bankruptcy rule f when we deal with a bankruptcy problem, i.e. when the total claim D is greater than R^(t) − R^(t−1) (and all the individual claims are nonnegative).
These payoff portions are used further to update both the aggregate portions sp_i and the individual claims d_i, i ∈ N.
Notice that under assumption (8) our procedure assures that all the individual claims remain nonnegative as long as we apply a bankruptcy rule f. However, the condition D > R^(t) − R^(t−1) may not be satisfied, requiring the use of a general division rule g such as f^RE.
Example
Consider the interval game and the interval Shapley value as in the previous example, but suppose there are 3 updated estimates of the realization of the payoff for the grand coalition:
R^(1) = 60; R^(2) = 65 and R^(3) = 80.
We have R^(0) = 50; d = (1 2/3, 16 2/3, 21 2/3); sp = (0, 0, 0).
Stage 1. The amount R^(1) − R^(0) = 10 is distributed over the agents in N according to the claims d = (1 2/3, 16 2/3, 21 2/3). Note that D = 40 > 10, so the bankruptcy rule PROP can be applied at this stage, yielding p = (5/12, 4 1/6, 5 5/12). Clearly, sp = (5/12, 4 1/6, 5 5/12). The vector of claims becomes d = (1 1/4, 12 1/2, 16 1/4).
Example continued
Stage 2. The amount R^(2) − R^(1) = 5 is distributed over the agents in N according to d = (1 1/4, 12 1/2, 16 1/4). Note that D = 30 > 5, so the bankruptcy rule PROP can be applied, yielding p = (5/24, 2 1/12, 2 17/24). Then the adjusted vector of claims is d = (1 1/24, 10 5/12, 13 13/24) and sp now equals (5/8, 6 1/4, 8 1/8).
Stage 3. The amount R^(3) − R^(2) = 15 is distributed over the agents in N according to d = (1 1/24, 10 5/12, 13 13/24). Since D = 25 > 15, we can apply the bankruptcy rule PROP, obtaining p = (5/8, 6 1/4, 8 1/8). Then we obtain sp = (1 1/4, 12 1/2, 16 1/4) (no claims are further needed because T = 3).
Finally, x = (3 1/3 + 1 1/4, 18 1/3 + 12 1/2, 28 1/3 + 16 1/4) = (4 7/12, 30 5/6, 44 7/12).
Final remarks
In collaborative situations with interval data, to settle cooperation within the grand coalition using cooperative game theory as a tool, the players should jointly choose:
(i) An interval solution concept, for example a value-type interval solution Ψ, that captures the interval uncertainty with regard to the coalition values under the form of an interval allocation, say J = (J_1, …, J_n), where J_i = Ψ_i(w) for all i ∈ N;
(ii) A procedure, specifying the allocation process and the allocation rule(s) to be used during the allocation process, in order to transform the interval allocation (J_1, …, J_n) into a payoff vector (x_1, …, x_n) ∈ R^N such that J̲_i ≤ x_i ≤ J̄_i for each i ∈ N and Σ_{i∈N} x_i = R, where R is the revenue for the grand coalition at the end of cooperation.
The two procedures presented transform an interval allocation into a payoff vector, under the assumption that only the uncertainty with regard to the value of the grand coalition has been resolved.
In both procedures the vector of computed payoff shares belongs to the core¹ C(v) of a selection² ⟨N, v⟩ of the interval game ⟨N, w⟩.
¹ The core of a cooperative transferable utility game was introduced by Gillies (1959).
² Let ⟨N, w⟩ be an interval game; then v : 2^N → R is called a selection of w if v(S) ∈ w(S) for each S ∈ 2^N (Alparslan Gök, Miquel and Tijs (2009)).
In the sequel, we discuss two cases where besides the realization of
w (N) also the realizations of w (S) for some or all S ⊂ N are
known.
First, suppose that the uncertainty on all outcomes is resolved,
implying that a selection of the initial interval game is available.
Then, we can use for this selection a suitable classical solution (for
example the classical solution corresponding to the interval solution
Ψ) to determine a posteriori uncertainty-free individual shares.
Secondly, suppose that only the uncertainty on some coalition values (including the payoff for the grand coalition) was resolved. In such situations, we propose to adjust the initial interval allocation (J_1, …, J_n) using the same interval solution concept Ψ which generated it, but for the interval game ⟨N, w′⟩ where w′(S) = [R_S, R_S] for all S ⊂ N whose worth realizations R_S are known, w′(N) = [R, R], w′(∅) = [0, 0], and w′(S) = w(S) otherwise. Then, the obtained interval allocation for the game ⟨N, w′⟩ will be transformed into an allocation x = (x_1, …, x_n) ∈ R^N of R using our procedures.
Finally, an alternative approach for designing one-stage procedures is to use taxation rules instead of bankruptcy rules: first hand out J̄_i, and then take away, with the aid of a taxation rule, the deficit T = Σ_{i∈N} J̄_i − R based on d_i = J̄_i − J̲_i for each i ∈ N.
References
[1] Alparslan Gök S.Z., "Cooperative Interval Games: Theory and Applications", Lambert Academic Publishing (LAP), Germany (2010), ISBN: 978-3-8383-3430-1.
[2] Alparslan Gök S.Z., "Cooperative interval games", PhD Dissertation Thesis, Institute of Applied Mathematics, Middle East Technical University (2009).
[3] Alparslan Gök S.Z., Miquel S. and Tijs S., "Cooperation under interval uncertainty", Mathematical Methods of Operations Research, Vol. 69, no. 1 (2009) 99-109.
[4] Bauso D. and Timmer J.B., "Robust Dynamic Cooperative Games", International Journal of Game Theory, Vol. 38, no. 1 (2009) 23-36.
[5] Branzei R., Branzei O., Alparslan Gök S.Z. and Tijs S., "Cooperative interval games: a survey", Central European Journal of Operations Research (CEJOR), Vol. 18, no. 3 (2010) 397-411.
[6] Branzei R., Tijs S. and Alparslan Gök S.Z., "How to handle interval solutions for cooperative interval games", International Journal of Uncertainty, Fuzziness and Knowledge-based Systems, Vol. 18, Issue 2 (2010) 123-132.
[7] Branzei R., Dimitrov D. and Tijs S., "Shapley-like values for interval bankruptcy games", Economics Bulletin, Vol. 3 (2003) 1-8.
[8] Branzei R., Mallozzi L. and Tijs S., "Peer group situations and games with interval uncertainty", International Journal of Mathematics, Game Theory, and Algebra, Vol. 19, issues 5-6 (2010).
[9] Gillies D.B., "Solutions to general non-zero-sum games", in: Tucker A.W. and Luce R.D. (Eds.), Contributions to the Theory of Games IV, Annals of Mathematics Studies, Vol. 40, Princeton University Press, Princeton (1959) 47-85.
[10] Kimms A. and Drechsel J., "Cost sharing under uncertainty: an algorithmic approach to cooperative interval-valued games", BuR - Business Research, Vol. 2 (urn:nbn:de:0009-20-21721) (2009).
[11] Mallozzi L., Scalzo V. and Tijs S., "Fuzzy interval cooperative games", Fuzzy Sets and Systems, Vol. 165 (2011) 98-105.
[12] Yanovskaya E., Branzei R. and Tijs S., "Monotonicity Properties of Interval Solutions and the Dutta-Ray Solution for Convex Interval Games", Chapter 16 in "Collective Decision Making: Views from Social Choice and Game Theory", Theory and Decision Library C, Springer-Verlag, Berlin/Heidelberg (2010).