This document discusses parallelism in artificial intelligence and evolutionary computation. It explains that comparison-based optimization algorithms, a class that includes many evolutionary algorithms, can be parallelized naturally by speculatively exploring multiple branches in parallel with a branching factor of 3 or more. This allows the theoretical logarithmic speedup to be achieved in practice through simple parallelization tricks.
The document discusses derivative-free optimization and evolutionary algorithms. It begins with an introduction to derivative-free optimization, explaining why it is useful when derivatives are unavailable or functions are noisy. Evolutionary algorithms are then discussed, including their fundamental elements like populations, selection, and variation operators. Specific evolutionary algorithms are presented, such as the estimation of distribution algorithm (EDA) and the (1+1)-ES algorithm with 1/5th success rule adaptation. The slides note that evolutionary algorithms are robust to noise and difficult optimization problems but are generally slower than derivative-based methods.
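The (1+1)-ES with 1/5th success rule mentioned above is simple enough to sketch in a few lines. This is a generic illustration, not code from the slides: mutate the parent with Gaussian noise, keep the child only if it improves, and adapt the step size so that about one mutation in five succeeds.

```python
import math
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=0):
    """Minimize f with a (1+1)-ES and 1/5th success rule step adaptation:
    grow sigma on success, shrink it on failure, with factors chosen so the
    equilibrium success rate is 1/5."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    up, down = 1.5, 1.5 ** -0.25        # up * down**4 = 1  ->  1/5 success rate
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:                     # success: accept the child
            x, fx = y, fy
            sigma *= up
        else:                           # failure: keep the parent
            sigma *= down
    return x, fx
```

On the sphere function, a few hundred evaluations are enough to approach the optimum, illustrating why such methods are robust but slower than derivative-based ones.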
The document discusses problem solving agents and search algorithms. It describes problem solving as having four steps: goal formulation, problem formulation, search, and execution. It then discusses different types of problems agents may face, such as single state problems and problems with partial information. The document introduces tree search algorithms and strategies for searching a state space, such as breadth-first search. It analyzes the performance of breadth-first search and notes its exponential time and memory complexity for large problems.
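Breadth-first search, the strategy analyzed above, can be sketched as follows (a generic illustration with a `neighbors` callback standing in for the problem's successor function):

```python
from collections import deque

def breadth_first_search(start, goal, neighbors):
    """Return a shortest path from start to goal as a list of states,
    or None if the goal is unreachable. `neighbors(s)` yields successors.
    The `parent` dict doubles as the explored set, whose size is what
    drives the exponential memory complexity noted above."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:                   # reconstruct the path back to start
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in neighbors(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None
```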
The document discusses various techniques for fitting models to data, including linear least squares fitting, nonlinear least squares fitting, and robust estimation. Linear least squares fitting provides an exact solution by minimizing the residuals between the data and linear model. Nonlinear least squares fitting iteratively searches for the parameter values that minimize the residuals between the data and nonlinear model. Robust estimation techniques are less sensitive to outliers than least squares by down-weighting points with large deviations from the model.
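The linear least squares case admits the exact solution mentioned above. A minimal sketch with hypothetical data drawn near the line y = 2x + 1:

```python
import numpy as np

# Hypothetical noisy samples of y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Linear least squares: find p = (slope, intercept) minimizing ||A p - y||^2.
A = np.column_stack([x, np.ones_like(x)])
p, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = p
```

A robust variant would iterate this solve while down-weighting points with large residuals (e.g. iteratively reweighted least squares with Huber weights), which is what makes robust estimation less sensitive to outliers.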
NYAI - A Path To Unsupervised Learning Through Adversarial Networks by Soumit... - Rizwan Habib
A Path To Unsupervised Learning Through Adversarial Networks - (Soumith Chintala, Researcher at Facebook AI Research)
Soumith Chintala is a Researcher at Facebook AI Research, where he works on deep learning, reinforcement learning, generative image models, agents for video games and large-scale high-performance deep learning. He holds a Masters in CS from NYU, and spent time in Yann LeCun's NYU lab building deep learning models for pedestrian detection, natural image OCR, depth-images among others.
Soumith will go over generative adversarial networks, a particular way of training neural networks to build high quality generative models. The talk will take you through an easy to follow timeline of the research and improvements in adversarial networks, followed by some future directions, as well as applications.
Generative Adversarial Networks (GANs) are a type of generative model that uses two neural networks - a generator and discriminator - competing against each other. The generator takes noise as input and generates synthetic samples, while the discriminator evaluates samples as real or generated. They are trained together until the generator fools the discriminator. GANs can generate realistic images, do image-to-image translation, and have applications in reinforcement learning. However, training GANs is challenging due to issues like non-convergence and mode collapse.
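The two-network competition described above is the minimax game of the original GAN formulation (Goodfellow et al., 2014), in which the discriminator D maximizes and the generator G minimizes:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```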
Adversarial learning for neural dialogue generation - Keon Kim
This document summarizes an adversarial learning approach for neural dialogue generation. The model uses a generator and discriminator, where the generator produces responses and the discriminator determines if they are human-like. The generator is trained to maximize rewards from the discriminator using policy gradients. Two methods are introduced to assign rewards at each generation step to address issues with the baseline approach. Teacher forcing is also used to directly expose the generator to human responses during training. The results showed this adversarial training approach generates higher quality responses than previous baselines.
The document discusses different types of generative models including auto-regressive models, variational auto-encoders, and generative adversarial networks. It provides examples of each type of model and highlights some of their features and issues during training. Specific models discussed in more detail include PixelRNNs, DCGANs, WGANs, BEGANs, Pix2Pix, and CycleGANs. The document aims to introduce deep generative models and their applications.
Speaker: 이활석 (Naver Clova)
Date: November 2017.
(Current) NAVER Clova Vision
(Current) TFKR organizer
Overview:
Recently, the center of gravity of deep learning research has been shifting rapidly from supervised learning to unsupervised learning.
In computer vision in particular, the research trend is moving from recognition, the supervised task of extracting the information present in an image,
to generation, the unsupervised task of synthesizing an image that carries given information.
This seminar briefly reviews the working principles of the two pillars of generative modeling, the VAE (variational autoencoder) and the GAN (generative adversarial network), and shares results from the main related papers.
The lecture is organized so that, even without prior knowledge of deep learning, the audience can understand the concepts behind VAE and GAN, the two methodologies for training generative models,
and gauge the current state of the technology.
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm - Xin-She Yang
This document provides an overview of nature-inspired metaheuristic algorithms for optimization. It discusses the main components of metaheuristic algorithms, including intensification and diversification. It then reviews the history and development of several important metaheuristic algorithms from the 1960s to the 1990s, including genetic algorithms, evolutionary strategies, simulated annealing, ant colony optimization, particle swarm optimization, and differential evolution. The document aims to analyze why these algorithms work and provide a unified view of metaheuristics.
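Among the algorithms reviewed, simulated annealing illustrates the intensification/diversification trade-off most directly: worse moves are accepted with a probability that decays with the temperature. A minimal sketch for a one-dimensional objective (a generic illustration, not code from the slides):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Minimize f: always accept improving moves, accept worsening moves
    with probability exp(-delta/T), and cool T geometrically so the search
    shifts from diversification to intensification."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = x + step * rng.uniform(-1.0, 1.0)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                    # geometric cooling schedule
    return best, fbest
```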
Introduction to behavior based recommendation system - Kimikazu Kato
Material presented at Tokyo Web Mining Meetup, March 26, 2016.
The source code is here:
https://github.com/hamukazu/tokyo.webmining.2016-03-26
Slides presented at Tokyo Web Mining (March 27, 2016). All in English.
Speaker: 이활석 (NAVER)
Date: November 2017.
Recently, the center of gravity of deep learning research has been shifting rapidly from supervised learning to unsupervised learning. This course takes a close look at everything about autoencoders, the most representative unsupervised method. From the dimensionality-reduction perspective we study the widely used Autoencoder (AE) and its variants, the Denoising AE and the Contractive AE; from the data-generation perspective we study the recently prominent Variational AE (VAE) and its variants, the Conditional VAE and the Adversarial AE. We also survey a variety of autoencoder applications to find points of contact with practice.
1. Revisit Deep Neural Networks
2. Manifold Learning
3. Autoencoders
4. Variational Autoencoders
5. Applications
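The variational autoencoder of item 4 is trained by maximizing the evidence lower bound (ELBO): a reconstruction term minus a KL regularizer that keeps the encoder's posterior close to the prior:

```latex
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```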
An Introduction To Applied Evolutionary Meta Heuristics - biofractal
This presentation introduces some of the main themes in modern evolutionary algorithm research while emphasising their application to problems that exhibit real-world complexity.
Speaker: Taesung Park (Ph.D. student, UC Berkeley)
Date: June 2017.
Taesung Park is a Ph.D. student at UC Berkeley in AI and computer vision, advised by Prof. Alexei Efros.
His research interest lies between computer vision and computational photography, such as generating realistic images or enhancing photo qualities. He received B.S. in mathematics and M.S. in computer science from Stanford University.
Overview:
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
However, for many tasks, paired training data will not be available.
We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples.
Our goal is to learn a mapping G: X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss.
Because this mapping is highly under-constrained, we couple it with an inverse mapping F: Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc.
Quantitative comparisons against several prior methods demonstrate the superiority of our approach.
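In the notation of the abstract, the full CycleGAN training objective combines the two adversarial losses with the cycle consistency term, weighted by λ:

```latex
\mathcal{L}(G, F, D_X, D_Y) =
    \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y)
  + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X)
  + \lambda \, \mathcal{L}_{\mathrm{cyc}}(G, F),
\qquad
\mathcal{L}_{\mathrm{cyc}}(G, F) =
    \mathbb{E}_{x}\big[\|F(G(x)) - x\|_1\big]
  + \mathbb{E}_{y}\big[\|G(F(y)) - y\|_1\big]
```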
This document provides an overview of generative adversarial networks (GANs). It begins by explaining the basic GAN framework of having a generator and discriminator. It then discusses several GAN variations including DCGAN, EBGAN, WGAN, and BEGAN. Applications of GANs mentioned include image synthesis, image-to-image translation, and domain adaptation. The document notes that evaluating GANs is difficult as log-likelihood does not correlate with visual quality and qualitative assessments can be misleading. It reviews metrics like Inception Score and discusses challenges training GANs such as instability. In conclusion, the document covers the key concepts of GANs, related methods, applications, evaluation challenges, and training techniques.
This document provides an overview of distributed decision making in partially observable dynamic games and multiobjective policy optimization. It discusses applying these techniques to optimization problems in games like chess and Go, as well as industrial applications like managing groups of power plants involving renewable energy, nuclear power, coal, hydroelectric power, and interactions with electricity consumers and networks. The goal is to optimize strategies using parallel computing and test these approaches on games and energy systems.
Choosing between several options in uncertain environments - Olivier Teytaud
The document discusses bandit problems with strategic choices and small budgets. It defines bandit problems, strategic bandit problems, and compares the two. It presents algorithms for exploring options and making recommendations in both one-player and two-player settings. Experimental results on a Go positioning problem and an online card game show that TEXP3 outperforms other algorithms in two-player settings. The document concludes with discussions on extensions to structured bandits and using strategic bandits to model investment choices.
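TEXP3, mentioned above, is a truncated variant of EXP3, the standard exponential-weights algorithm for adversarial bandits. A minimal sketch of plain EXP3 (the truncation step that gives TEXP3 its "T" is not shown):

```python
import math
import random

def exp3(reward, n_arms, horizon, gamma=0.1, seed=0):
    """EXP3: exponential weights mixed with uniform exploration; rewards
    are importance-weighted so the update stays unbiased even though only
    the pulled arm's reward is observed. `reward(arm, t)` must lie in [0, 1]."""
    rng = random.Random(seed)
    w = [1.0] * n_arms
    pulls = [0] * n_arms
    probs = [1.0 / n_arms] * n_arms
    for t in range(horizon):
        total = sum(w)
        probs = [(1 - gamma) * wi / total + gamma / n_arms for wi in w]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        r = reward(arm, t)
        w[arm] *= math.exp(gamma * r / (n_arms * probs[arm]))
        scale = max(w)                  # renormalize to avoid overflow
        w = [wi / scale for wi in w]
        pulls[arm] += 1
    return pulls, probs
```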
Hydroelectricity uses water to produce electricity and has advantages for electricity storage. It provides daily, yearly, and negative electricity production by pumping water to higher reservoirs. However, expanding hydroelectricity is challenging due to its large infrastructure requirements and local environmental impacts. New technologies may improve energy storage capabilities and grid stability in the future, but developing large-scale annual storage remains difficult given constraints. Hydroelectricity will continue playing an important role in energy systems alongside other renewable technologies and efficiency strategies.
Tools for Discrete Time Control; Application to Power Systems - Olivier Teytaud
3 main algorithms from the state of the art:
- Model Predictive Control
- Stochastic Dynamic Programming
- Direct Policy Search
==> and our proposal, a modified Direct Policy Search
termed Direct Value Search
This document discusses blind Go, a variant of the game where players do not look at the board and must memorize positions. It explores strategies for blind Go, such as playing unusual moves that are harder for the opponent to remember. Experiments found that providing an empty board as a visual aid helped players. When playing against professionals in blind 9x9 Go, the computer won 2 of 3 games. In a 19x19 game against a top human player, the computer won through an unexpected, unusual move where the human made a rare mistake due to not seeing the board. Further research is needed, but playing unconventional moves seems beneficial in blind Go.
Theory of games, with a short reminder of computational complexity and an independent appendix on human complexity and the game of Go
@article{david:hal-00710073,
hal_id = {hal-00710073},
url = {http://hal.inria.fr/hal-00710073},
title = {{The Frontier of Decidability in Partially Observable Recursive Games}},
author = {David, Auger and Teytaud, Olivier},
abstract = {{The classical decision problem associated with a game is whether a given player has a winning strategy, i.e. some strategy that leads almost surely to a victory, regardless of the other players' strategies. While this problem is relevant for deterministic fully observable games, for a partially observable game the requirement of winning with probability 1 is too strong. In fact, as shown in this paper, a game might be decidable for the simple criterion of almost sure victory, whereas optimal play (even in an approximate sense) is not computable. We therefore propose another criterion, the decidability of which is equivalent to the computability of approximately optimal play. Then, we show that (i) this criterion is undecidable in the general case, even with deterministic games (no random part in the game), (ii) that it is in the jump 0', and that, even in the stochastic case, (iii) it becomes decidable if we add the requirement that the game halts almost surely whatever maybe the strategies of the players.}},
language = {Anglais},
affiliation = {Laboratoire de Recherche en Informatique - LRI , TAO - INRIA Saclay - Ile de France},
booktitle = {{Special Issue on "Frontier between Decidability and Undecidability"}},
publisher = {World Scientific},
journal = {International Journal on Foundations of Computer Science (IJFCS)},
volume = {Accepted},
note = {revised 2011, accepted 2011, in press },
audience = {internationale },
year = {2012},
}
A simple tutorial on Monte-Carlo Tree Search
Contains a description of dynamic programming and alpha-beta search, then MCTS. Special cases for simultaneous actions are discussed.
I should add comments so that it can be used without preliminary knowledge of MCTS; if there is at least one request for doing so, I'll do it.
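The selection step at the heart of MCTS (in its UCT form) descends into the child maximizing the UCB1 score: average reward plus an exploration bonus. A minimal sketch, independent of the tutorial's own notation:

```python
import math

def ucb1(total_reward, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score: exploitation (mean reward) plus an exploration bonus
    that shrinks as the child accumulates visits."""
    if visits == 0:
        return float('inf')             # unvisited children are tried first
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    """children: list of (total_reward, visits) pairs; returns the index
    of the child an MCTS selection step would descend into."""
    parent_visits = sum(v for _, v in children)
    scores = [ucb1(r, v, parent_visits) for r, v in children]
    return max(range(len(children)), key=scores.__getitem__)
```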
@article{gelly:hal-00695370,
hal_id = {hal-00695370},
url = {http://hal.inria.fr/hal-00695370},
title = {{The Grand Challenge of Computer Go: Monte Carlo Tree Search and Extensions}},
author = {Gelly, Sylvain and Kocsis, Levente and Schoenauer, Marc and Sebag, Mich{\`e}le and Silver, David and Szepesvari, Csaba and Teytaud, Olivier},
abstract = {{The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.}},
language = {Anglais},
affiliation = {TAO - INRIA Saclay - Ile de France , Laboratoire de Recherche en Informatique - LRI , LPDS , Microsoft Research - Inria Joint Centre - MSR - INRIA , University of Alberta, Canada , Department of Computing Science},
publisher = {ACM},
pages = {106-113},
journal = {Communication of the ACM},
volume = {55},
number = {3 },
audience = {internationale },
year = {2012},
pdf = {http://hal.inria.fr/hal-00695370/PDF/CACM-MCTS.pdf},
}
Don't believe what is written in these slides.
These are just provocative statements, most of them found on the internet, included here for discussion and brainstorming.
Tools for artificial intelligence: EXP3, Zermelo algorithm, Alpha-Beta, and s... - Olivier Teytaud
Here are a few suggestions on how to improve the Zermelo algorithm when it is too slow:
1. Add a depth limit. Stop recursion when a maximum search depth is reached. Return a heuristic evaluation instead of continuing search.
2. Use alpha-beta pruning. Track the best value found (alpha) and prune branches that cannot improve on it.
3. Iterative deepening. Run successive searches with increasing depth limits to get progressively better approximations.
4. Move ordering. Evaluate better moves earlier in the search tree. This prunes bad moves earlier.
5. Transposition tables. Store previously computed move evaluations to avoid re-expanding the same position.
6. Parallelize the search, e.g. by exploring independent subtrees on separate workers.
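Suggestions 1 and 2 combine naturally into one routine. A minimal sketch of depth-limited minimax with alpha-beta pruning, where the `children` and `evaluate` callbacks are placeholders for a real game interface:

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Depth-limited minimax with alpha-beta pruning: `children(state)`
    lists successors, `evaluate(state)` scores frontier nodes from the
    maximizing player's point of view."""
    succ = children(state)
    if depth == 0 or not succ:
        return evaluate(state)          # depth limit or terminal position
    if maximizing:
        value = float('-inf')
        for s in succ:
            value = max(value, alphabeta(s, depth - 1, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                   # beta cut-off: opponent avoids this line
        return value
    value = float('inf')
    for s in succ:
        value = min(value, alphabeta(s, depth - 1, alpha, beta, True,
                                     children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break                       # alpha cut-off
    return value
```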
Ilab Metis: we optimize power systems and we are not afraid of direct policy ... - Olivier Teytaud
Ilab METIS is a collaboration between TAO, a machine learning and optimization team within INRIA, and Artelys, an SME focused on optimization. They work on optimizing energy policies through simulations of power systems while taking into account uncertainties and stochastic variables. Their methodologies use a hybrid of reinforcement learning, mathematical programming, and direct policy search to optimize investments and operational decisions for power grids over multiple timescales while handling constraints. They have applied their approaches to problems involving interconnection planning, demand balancing, and renewable integration on scales from cities to entire continents.
Noisy Optimization combining Bandits and Evolutionary Algorithms - Olivier Teytaud
@inproceedings{rolet:inria-00437140,
hal_id = {inria-00437140},
url = {http://hal.inria.fr/inria-00437140},
title = {{Bandit-based Estimation of Distribution Algorithms for Noisy Optimization: Rigorous Runtime Analysis}},
author = {Rolet, Philippe and Teytaud, Olivier},
abstract = {{We show complexity bounds for noisy optimization, in frameworks in which noise is stronger than in previously published papers [19]. We also propose an algorithm based on bandits (variants of [16]) that reaches the bound within logarithmic factors. We emphasize the differences with empirically derived published algorithms.}},
keywords = {noisy optimization evolutionary algorithms bandits},
language = {Anglais},
affiliation = {Laboratoire de Recherche en Informatique - LRI , TAO - INRIA Futurs , TAO - INRIA Saclay - Ile de France},
booktitle = {{Lion4}},
address = {Venice, Italie},
audience = {internationale },
year = {2010},
pdf = {http://hal.inria.fr/inria-00437140/PDF/lion4long.pdf},
}
@inproceedings{coulom:hal-00517157,
hal_id = {hal-00517157},
url = {http://hal.archives-ouvertes.fr/hal-00517157},
title = {{Handling Expensive Optimization with Large Noise}},
author = {Coulom, R{\'e}mi and Rolet, Philippe and Sokolovska, Nataliya and Teytaud, Olivier},
abstract = {{This paper exhibits lower and upper bounds on runtimes for expensive noisy optimization problems. Runtimes are expressed in terms of number of fitness evaluations. Fitnesses considered are monotonic transformations of the {\em sphere} function. The analysis focuses on the common case of fitness functions quadratic in the distance to the optimum in the neighborhood of this optimum---it is nonetheless also valid for any monotonic polynomial of degree p>2. Upper bounds are derived via a bandit-based estimation of distribution algorithm that relies on Bernstein races called R-EDA. It is known that the algorithm is consistent even in non-differentiable cases. Here we show that: (i) if the variance of the noise decreases to 0 around the optimum, it can perform optimally for quadratic transformations of the norm to the optimum, (ii) otherwise, it provides a slower convergence rate than the one exhibited empirically by an algorithm called Quadratic Logistic Regression based on surrogate models---although QLR requires a probabilistic prior on the fitness class.}},
keywords = {Noisy optimization, Bernstein races},
language = {Anglais},
affiliation = {SEQUEL - INRIA Lille - Nord Europe , TAO - INRIA Saclay - Ile de France , Laboratoire de Recherche en Informatique - LRI},
booktitle = {{Foundations of Genetic Algorithms (FOGA 2011)}},
pages = {TBA},
address = {Autriche},
editor = {ACM },
audience = {internationale },
year = {2011},
month = Jan,
pdf = {http://hal.archives-ouvertes.fr/hal-00517157/PDF/foga10noise.pdf},
}
- The document discusses games with simultaneous actions and hidden information. It presents games as directed graphs with actions, players, observations, rewards, and loops.
- Games with simultaneous actions and short-term hidden information can be represented as games with hidden information by removing intermediate turns.
- Questions about the existence of a sure-win strategy for one player (the "UD" question) are only relevant for games with full observability, not matrix games.
This document discusses how to save money by using open source software instead of proprietary software like Microsoft Office. It recommends downloading and using OpenOffice or LibreOffice instead, as they are free alternatives that work very well. It also recommends installing a free open source operating system like Linux, as this can save a lot of money on software costs over time. Open source is discussed as an economic model where the marginal cost of sharing and distributing code is very low, enabling new business models to earn money through services, support or customization rather than just software licenses. A variety of important open source software projects are listed across different domains like operating systems, office suites, web servers and more.
발표자: 이활석 (Naver Clova)
발표일: 2017.11.
(현) NAVER Clova Vision
(현) TFKR 운영진
개요:
최근 딥러닝 연구는 지도학습에서 비지도학습으로 급격히 무게 중심이 옮겨지고 있습니다.
특히 컴퓨터 비전 기술 분야에서는 지도학습에 해당하는 이미지 내에 존재하는 정보를 찾는 인식 기술에서,
비지도학습에 해당하는 특정 정보를 담는 이미지를 생성하는 기술인 생성 기술로 연구 동향이 바뀌어 가고 있습니다.
본 세미나에서는 생성 기술의 두 축을 담당하고 있는 VAE(variational autoencoder)와 GAN(generative adversarial network) 동작 원리에 대해서 간략히 살펴 보고, 관련된 주요 논문들의 결과를 공유하고자 합니다.
딥러닝에 대한 지식이 없더라도 생성 모델을 학습할 수 있는 두 방법론인 VAE와 GAN의 개념에 대해 이해하고
그 기술 수준을 파악할 수 있도록 강의 내용을 구성하였습니다.
Review of Metaheuristics and Generalized Evolutionary Walk AlgorithmXin-She Yang
This document provides an overview of nature-inspired metaheuristic algorithms for optimization. It discusses the main components of metaheuristic algorithms, including intensification and diversification. It then reviews the history and development of several important metaheuristic algorithms from the 1960s to the 1990s, including genetic algorithms, evolutionary strategies, simulated annealing, ant colony optimization, particle swarm optimization, and differential evolution. The document aims to analyze why these algorithms work and provide a unified view of metaheuristics.
Introduction to behavior based recommendation systemKimikazu Kato
Material presented at Tokyo Web Mining Meetup, March 26, 2016.
The source code is here:
https://github.com/hamukazu/tokyo.webmining.2016-03-26
東京ウェブマイニング(2016年3月27)の発表資料です。すべて英語です。
발표자: 이활석(NAVER)
발표일: 2017.11.
최근 딥러닝 연구는 지도학습에서 비지도학습으로 급격히 무게 중심이 옮겨 지고 있습니다. 본 과정에서는 비지도학습의 가장 대표적인 방법인 오토인코더의 모든 것에 대해서 살펴보고자 합니다. 차원 축소관점에서 가장 많이 사용되는Autoencoder와 (AE) 그 변형 들인 Denoising AE, Contractive AE에 대해서 공부할 것이며, 데이터 생성 관점에서 최근 각광 받는 Variational AE와 (VAE) 그 변형 들인 Conditional VAE, Adversarial AE에 대해서 공부할 것입니다. 또한, 오토인코더의 다양한 활용 예시를 살펴봄으로써 현업과의 접점을 찾아보도록 노력할 것입니다.
1. Revisit Deep Neural Networks
2. Manifold Learning
3. Autoencoders
4. Variational Autoencoders
5. Applications
An Introduction To Applied Evolutionary Meta Heuristicsbiofractal
This presentation introduces some of the main themes in modern evolutionary algorithm research while emphasising their application to problems that exhibit real-world complexity.
발표자: 박태성 (UC Berkeley 박사과정)
발표일: 2017.6.
Taesung Park is a Ph.D. student at UC Berkeley in AI and computer vision, advised by Prof. Alexei Efros.
His research interest lies between computer vision and computational photography, such as generating realistic images or enhancing photo qualities. He received B.S. in mathematics and M.S. in computer science from Stanford University.
개요:
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
However, for many tasks, paired training data will not be available.
We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples.
Our goal is to learn a mapping G: X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss.
Because this mapping is highly under-constrained, we couple it with an inverse mapping F: Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc.
Quantitative comparisons against several prior methods demonstrate the superiority of our approach.
This document provides an overview of generative adversarial networks (GANs). It begins by explaining the basic GAN framework of having a generator and discriminator. It then discusses several GAN variations including DCGAN, EBGAN, WGAN, and BEGAN. Applications of GANs mentioned include image synthesis, image-to-image translation, and domain adaptation. The document notes that evaluating GANs is difficult as log-likelihood does not correlate with visual quality and qualitative assessments can be misleading. It reviews metrics like Inception Score and discusses challenges training GANs such as instability. In conclusion, the document covers the key concepts of GANs, related methods, applications, evaluation challenges, and training techniques.
This document provides an overview of distributed decision making in partially observable dynamic games and multiobjective policy optimization. It discusses applying these techniques to optimization problems in games like chess and Go, as well as industrial applications like managing groups of power plants involving renewable energy, nuclear power, coal, hydroelectric power, and interactions with electricity consumers and networks. The goal is to optimize strategies using parallel computing and test these approaches on games and energy systems.
Choosing between several options in uncertain environmentsOlivier Teytaud
The document discusses bandit problems with strategic choices and small budgets. It defines bandit problems, strategic bandit problems, and compares the two. It presents algorithms for exploring options and making recommendations in both one-player and two-player settings. Experimental results on a Go positioning problem and an online card game show that TEXP3 outperforms other algorithms in two-player settings. The document concludes with discussions on extensions to structured bandits and using strategic bandits to model investment choices.
Hydroelectricity uses water to produce electricity and has advantages for electricity storage. It provides daily, yearly, and negative electricity production by pumping water to higher reservoirs. However, expanding hydroelectricity is challenging due to its large infrastructure requirements and local environmental impacts. New technologies may improve energy storage capabilities and grid stability in the future, but developing large-scale annual storage remains difficult given constraints. Hydroelectricity will continue playing an important role in energy systems alongside other renewable technologies and efficiency strategies.
Tools for Discrete Time Control; Application to Power Systems (Olivier Teytaud)
3 main algorithms from the state of the art:
- Model Predictive Control
- Stochastic Dynamic Programming
- Direct Policy Search
==> and our proposal, a modified Direct Policy Search
termed Direct Value Search
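A minimal sketch may help fix intuitions about Direct Policy Search: a fixed-form policy is parameterized, and the parameters are optimized directly on simulations of the system. Everything below (the toy storage system, the threshold policy, plain random search as the optimizer) is an illustrative assumption, not the Direct Value Search algorithm proposed in the slides.

```python
import random

def simulate(theta, horizon=50, seed=0):
    """Total cost of running a threshold policy theta on a toy storage system.

    theta = (refill_level, refill_threshold); both the system and the
    cost coefficients are made up for illustration.
    """
    rng = random.Random(seed)
    stock, cost = 5.0, 0.0
    for _ in range(horizon):
        demand = rng.uniform(0.0, 2.0)
        # Policy: refill the stock up to theta[0] whenever it drops below theta[1].
        if stock < theta[1]:
            bought = max(0.0, theta[0] - stock)
            stock += bought
            cost += 1.0 * bought           # purchase cost
        served = min(stock, demand)
        stock -= served
        cost += 10.0 * (demand - served)   # penalty for unserved demand
    return cost

def direct_policy_search(n_iters=200, seed=1):
    """Random-search DPS: keep the parameter vector with the best simulated cost."""
    rng = random.Random(seed)
    best_theta, best_cost = None, float("inf")
    for _ in range(n_iters):
        theta = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        # Average over a few simulation seeds to smooth the noise.
        cost = sum(simulate(theta, seed=s) for s in range(5)) / 5.0
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta, best_cost

theta, cost = direct_policy_search()
print(theta, cost)
```

The point of the sketch is only the structure: simulation inside the loop, policy parameters outside; any black-box optimizer could replace the random search.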
This document discusses blind Go, a variant of the game where players do not look at the board and must memorize positions. It explores strategies for blind Go, such as playing unusual moves that are harder for the opponent to remember. Experiments found that providing an empty board as a visual aid helped players. When playing against professionals in blind 9x9 Go, the computer won 2 of 3 games. In a 19x19 game against a top human player, the computer won through an unexpected, unusual move where the human made a rare mistake due to not seeing the board. Further research is needed, but playing unconventional moves seems beneficial in blind Go.
Theory of games, with a short reminder of computational complexity and an independent appendix on human complexity and the game of Go
@article{david:hal-00710073,
hal_id = {hal-00710073},
url = {http://hal.inria.fr/hal-00710073},
title = {{The Frontier of Decidability in Partially Observable Recursive Games}},
author = {David, Auger and Teytaud, Olivier},
abstract = {{The classical decision problem associated with a game is whether a given player has a winning strategy, i.e. some strategy that leads almost surely to a victory, regardless of the other players' strategies. While this problem is relevant for deterministic fully observable games, for a partially observable game the requirement of winning with probability 1 is too strong. In fact, as shown in this paper, a game might be decidable for the simple criterion of almost sure victory, whereas optimal play (even in an approximate sense) is not computable. We therefore propose another criterion, the decidability of which is equivalent to the computability of approximately optimal play. Then, we show that (i) this criterion is undecidable in the general case, even with deterministic games (no random part in the game), (ii) that it is in the jump 0', and that, even in the stochastic case, (iii) it becomes decidable if we add the requirement that the game halts almost surely whatever maybe the strategies of the players.}},
language = {English},
affiliation = {Laboratoire de Recherche en Informatique - LRI , TAO - INRIA Saclay - Ile de France},
booktitle = {{Special Issue on "Frontier between Decidability and Undecidability"}},
publisher = {World Scientific},
journal = {International Journal on Foundations of Computer Science (IJFCS)},
volume = {Accepted},
note = {revised 2011, accepted 2011, in press },
audience = {internationale },
year = {2012},
}
A simple tutorial on Monte-Carlo Tree Search
Contains a description of dynamic programming and alpha-beta search, then MCTS. Special cases for simultaneous actions are discussed.
I should add comments so that it can be used without preliminary knowledge of MCTS; if there is at least one request for doing so, I'll do it.
@article{gelly:hal-00695370,
hal_id = {hal-00695370},
url = {http://hal.inria.fr/hal-00695370},
title = {{The Grand Challenge of Computer Go: Monte Carlo Tree Search and Extensions}},
author = {Gelly, Sylvain and Kocsis, Levente and Schoenauer, Marc and Sebag, Mich{\`e}le and Silver, David and Szepesvari, Csaba and Teytaud, Olivier},
abstract = {{The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.}},
language = {English},
affiliation = {TAO - INRIA Saclay - Ile de France , Laboratoire de Recherche en Informatique - LRI , LPDS , Microsoft Research - Inria Joint Centre - MSR - INRIA , University of Alberta, Canada , Department of Computing Science},
publisher = {ACM},
pages = {106-113},
journal = {Communications of the ACM},
volume = {55},
number = {3 },
audience = {internationale },
year = {2012},
pdf = {http://hal.inria.fr/hal-00695370/PDF/CACM-MCTS.pdf},
}
Don't believe what is written in these slides.
These are just provocative statements, most of them found on the internet, presented here for discussion and brainstorming.
Tools for artificial intelligence: EXP3, Zermelo algorithm, Alpha-Beta, and s... (Olivier Teytaud)
Here are a few suggestions on how to improve the Zermelo algorithm when it is too slow:
1. Add a depth limit. Stop recursion when a maximum search depth is reached. Return a heuristic evaluation instead of continuing search.
2. Use alpha-beta pruning. Track the best value found (alpha) and prune branches that cannot improve on it.
3. Iterative deepening. Run successive searches with increasing depth limits to get progressively better approximations.
4. Move ordering. Evaluate better moves earlier in the search tree. This prunes bad moves earlier.
5. Transposition tables. Store previously computed move evaluations to avoid re-expanding the same position.
6. Parallelize the search. Independent subtrees can be evaluated on separate workers.
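Several of these suggestions (depth limit, alpha-beta pruning, move ordering, and a transposition table) can be combined in a short sketch. The toy take-away game and the neutral depth-limit heuristic below are illustrative assumptions, not the Zermelo algorithm itself.

```python
def alphabeta(stones, to_move, depth, alpha=-1.0, beta=1.0, table=None):
    """Value in [-1, 1] from player +1's point of view, in a toy take-away
    game: players alternately remove 1-3 stones; taking the last stone wins.
    """
    if table is None:
        table = {}
    if stones == 0:
        return -to_move          # the previous player took the last stone and won
    if depth == 0:
        return 0.0               # depth limit hit: heuristic value (here: neutral)
    key = (stones, to_move, depth)
    if key in table:             # transposition table: reuse earlier evaluations
        return table[key]
    legal = [m for m in (1, 2, 3) if m <= stones]
    legal.sort(key=lambda m: m != stones)   # move ordering: immediate wins first
    if to_move == 1:
        value = -1.0
        for m in legal:
            value = max(value, alphabeta(stones - m, -1, depth - 1, alpha, beta, table))
            alpha = max(alpha, value)
            if alpha >= beta:    # alpha-beta cutoff
                break
    else:
        value = 1.0
        for m in legal:
            value = min(value, alphabeta(stones - m, 1, depth - 1, alpha, beta, table))
            beta = min(beta, value)
            if alpha >= beta:
                break
    table[key] = value
    return value

# With enough depth the exact value appears: positions with stones % 4 == 0
# are losses for the player to move.
print(alphabeta(12, 1, depth=20))   # -1.0
print(alphabeta(13, 1, depth=20))   # 1.0
print(alphabeta(12, 1, depth=2))    # 0.0: depth limit returns the heuristic
```

Iterative deepening (suggestion 3) would simply call this function with depth = 1, 2, 3, ... and keep the latest result.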
Ilab Metis: we optimize power systems and we are not afraid of direct policy ... (Olivier Teytaud)
Ilab METIS is a collaboration between TAO, a machine learning and optimization team within INRIA, and Artelys, an SME focused on optimization. They work on optimizing energy policies through simulations of power systems while taking into account uncertainties and stochastic variables. Their methodologies use a hybrid of reinforcement learning, mathematical programming, and direct policy search to optimize investments and operational decisions for power grids over multiple timescales while handling constraints. They have applied their approaches to problems involving interconnection planning, demand balancing, and renewable integration on scales from cities to entire continents.
Noisy Optimization combining Bandits and Evolutionary Algorithms (Olivier Teytaud)
@inproceedings{rolet:inria-00437140,
hal_id = {inria-00437140},
url = {http://hal.inria.fr/inria-00437140},
title = {{Bandit-based Estimation of Distribution Algorithms for Noisy Optimization: Rigorous Runtime Analysis}},
author = {Rolet, Philippe and Teytaud, Olivier},
abstract = {{We show complexity bounds for noisy optimization, in frameworks in which noise is stronger than in previously published papers [19]. We also propose an algorithm based on bandits (variants of [16]) that reaches the bound within logarithmic factors. We emphasize the differences with empirically derived published algorithms.}},
keywords = {noisy optimization evolutionary algorithms bandits},
language = {English},
affiliation = {Laboratoire de Recherche en Informatique - LRI , TAO - INRIA Futurs , TAO - INRIA Saclay - Ile de France},
booktitle = {{Lion4}},
address = {Venice, Italy},
audience = {internationale },
year = {2010},
pdf = {http://hal.inria.fr/inria-00437140/PDF/lion4long.pdf},
}
@inproceedings{coulom:hal-00517157,
hal_id = {hal-00517157},
url = {http://hal.archives-ouvertes.fr/hal-00517157},
title = {{Handling Expensive Optimization with Large Noise}},
author = {Coulom, R{\'e}mi and Rolet, Philippe and Sokolovska, Nataliya and Teytaud, Olivier},
abstract = {{This paper exhibits lower and upper bounds on runtimes for expensive noisy optimization problems. Runtimes are expressed in terms of number of fitness evaluations. Fitnesses considered are monotonic transformations of the {\em sphere} function. The analysis focuses on the common case of fitness functions quadratic in the distance to the optimum in the neighborhood of this optimum---it is nonetheless also valid for any monotonic polynomial of degree p>2. Upper bounds are derived via a bandit-based estimation of distribution algorithm that relies on Bernstein races called R-EDA. It is known that the algorithm is consistent even in non-differentiable cases. Here we show that: (i) if the variance of the noise decreases to 0 around the optimum, it can perform optimally for quadratic transformations of the norm to the optimum, (ii) otherwise, it provides a slower convergence rate than the one exhibited empirically by an algorithm called Quadratic Logistic Regression based on surrogate models---although QLR requires a probabilistic prior on the fitness class.}},
keywords = {Noisy optimization, Bernstein races},
language = {English},
affiliation = {SEQUEL - INRIA Lille - Nord Europe , TAO - INRIA Saclay - Ile de France , Laboratoire de Recherche en Informatique - LRI},
booktitle = {{Foundations of Genetic Algorithms (FOGA 2011)}},
pages = {TBA},
address = {Austria},
editor = {ACM },
audience = {internationale },
year = {2011},
month = Jan,
pdf = {http://hal.archives-ouvertes.fr/hal-00517157/PDF/foga10noise.pdf},
}
- The document discusses games with simultaneous actions and hidden information. It presents games as directed graphs with actions, players, observations, rewards, and loops.
- Games with simultaneous actions and short-term hidden information can be represented as games with hidden information by removing intermediate turns.
- Questions about the existence of a sure-win strategy for one player (the "UD" question) are only relevant for games with full observability, not matrix games.
This document discusses how to save money by using open source software instead of proprietary software like Microsoft Office. It recommends downloading and using OpenOffice or LibreOffice instead, as they are free alternatives that work very well. It also recommends installing a free open source operating system like Linux, as this can save a lot of money on software costs over time. Open source is discussed as an economic model where the marginal cost of sharing and distributing code is very low, enabling new business models to earn money through services, support or customization rather than just software licenses. A variety of important open source software projects are listed across different domains like operating systems, office suites, web servers and more.
- The document discusses energy management in France and potential areas of research collaboration between France and Taiwan.
- Key areas discussed include optimizing long-term investment policies for electricity generation using tools like reinforcement learning and stochastic programming to account for uncertainties.
- Specific questions mentioned are around optimal connections between Europe and Africa, impacts of subsidizing solar power or switching off nuclear plants, and benefits of demand reduction contracts.
- The researcher proposes combining methods like direct policy search and Monte Carlo tree search to better optimize long-term planning while accounting for short-term effects. Plans are discussed to test new ideas, share data and codes, and potentially organize joint work between the two regions.
The document discusses the computational complexity of partially observable games. Some key points:
1. Two-player unobservable games are EXPSPACE-complete, as strategies are just sequences of actions with no observability.
2. Encoding a Turing machine as a game shows the hardness of the unobservable case. The tape configurations can be represented in a game state of size logarithmic in the tape size.
3. Two-player partially observable games or one-player partially observable games against randomness are 2EXPTIME-complete, even more complex than the unobservable case.
We provide an overview of the tools that enable deep learning in R, including packages such as tensorflow, keras, and tfestimators. Demos are included to show the API. We also discuss the latest features.
1. The document discusses various methods for continuous optimization, including rates of convergence for noise-free and noisy settings.
2. In noise-free settings, methods like Newton's method and BFGS have quadratic or superlinear convergence rates, while evolutionary strategies (ES) have linear convergence rates.
3. Lower bounds on optimization complexity are also discussed, showing minimum comparisons or evaluations needed depending on problem properties like domain size and precision required.
The document provides an overview of machine learning, including definitions of machine learning, the differences between programming and machine learning, examples of machine learning applications, and descriptions of various machine learning algorithms and techniques. It discusses supervised learning methods like classification and regression. Unsupervised learning methods like clustering are also covered. The document outlines the machine learning process and provides cautions about machine learning.
The document provides an overview of machine learning and discusses various concepts related to applying machine learning to real-world problems. It covers topics such as feature extraction, encoding input data, classification vs regression, evaluating model performance, and challenges like overfitting and underfitting models to data. Examples are given for different types of learning problems, including text classification, sentiment analysis, and predicting stock prices.
GDC2019 - SEED - Towards Deep Generative Models in Game Development (Electronic Arts / DICE)
Deep learning is becoming ubiquitous in Machine Learning (ML) research, and it's also finding its place in industry-related applications. Specifically, deep generative models have proven incredibly useful at generating and remixing realistic content from scratch, making themselves a very appealing technology in the field of AI-enhanced content authoring. As part of this year's Machine Learning Tutorial at the Game Developers Conference 2019 (GDC), Jorge Del Val from SEED will cover in an accessible manner the fundamentals of deep generative modeling, including some common algorithms and architectures. He will also discuss applications to game development and explore some recent advances in the field.
The attendee will gain basic understanding of the fundamentals of generative models and how to implement them. Also, attendees will grasp potential applications in the field of game development to inspire their work and companies. This talk does not require a mathematical or machine learning background, although previous knowledge on either of those is beneficial.
Using Topological Data Analysis on your BigData (AnalyticsWeek)
Synopsis:
Topological Data Analysis (TDA) is a framework for data analysis and machine learning and represents a breakthrough in how to effectively use geometric and topological information to solve 'Big Data' problems. TDA provides meaningful summaries (in a technical sense to be described) and insights into complex data problems. In this talk, Anthony will begin with an overview of TDA and describe the core algorithm that is utilized. This talk will include both the theory and real world problems that have been solved using TDA. After this talk, attendees will understand how the underlying TDA algorithm works and how it improves on existing “classical” data analysis techniques as well as how it provides a framework for many machine learning algorithms and tasks.
Speaker:
Anthony Bak, Senior Data Scientist, Ayasdi
Prior to coming to Ayasdi, Anthony was at Stanford University where he did a postdoc with Ayasdi co-founder Gunnar Carlsson, working on new methods and applications of Topological Data Analysis. He completed his Ph.D. work in algebraic geometry with applications to string theory at the University of Pennsylvania and, along the way, he worked at the Max Planck Institute in Germany, Mount Holyoke College in Massachusetts, and the American Institute of Mathematics in California.
This talk was based on my Master's thesis, which I had completed earlier that year. It gives an overview of how certain dynamic programming recurrences can be computed in parallel efficiently, and of what we want that to mean here.
The plots in "Performance Examples" show speedup S on the left and efficiency E on the right, both against input size.
Read more over here: http://reitzig.github.io/publications/Reitzig2012
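For reference, the quantities plotted are presumably the standard ones: speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p for p processors. The timing numbers below are made up for illustration.

```python
# Standard parallel-performance measures behind "Performance Examples" plots:
# speedup S(p) = T(1) / T(p), efficiency E(p) = S(p) / p,
# where T(p) is the wall-clock time on p processors.

def speedup(t1, tp):
    return t1 / tp

def efficiency(t1, tp, p):
    return speedup(t1, tp) / p

t1 = 120.0                        # seconds on 1 processor (hypothetical)
timings = {2: 65.0, 4: 36.0, 8: 22.0}
for p, tp in sorted(timings.items()):
    print(p, round(speedup(t1, tp), 2), round(efficiency(t1, tp, p), 2))
```

Efficiency below 1 reflects the usual overheads: communication, synchronization, and the non-parallelizable fraction of the work.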
Diversity mechanisms for evolutionary populations in Search-Based Software En... (Annibale Panichella)
This document discusses mechanisms for maintaining diversity in evolutionary algorithms. It begins by explaining the importance of balancing exploration and exploitation. Several techniques for preserving diversity are then presented, including modifying genetic operators, changing the objective function, and applying statistical methods. Empirical evaluations demonstrate how diversity mechanisms can improve performance in search-based software engineering problems like test data generation and test suite optimization, which often suffer from premature convergence and getting stuck in local optima due to loss of diversity. Parameter tuning techniques like adjusting the mutation rate and niching methods like fitness sharing are also described as ways to explicitly promote diversity.
This document provides an overview of deep learning. It discusses the motivation and history of machine learning, including pattern recognition, machine learning algorithms based on linear models, and neural networks. It then introduces deep learning, noting that deep neural networks combined with GPUs and large datasets have led to significant performance gains compared to other machine learning techniques.
Innovations in technology has revolutionized financial services to an extent that large financial institutions like Goldman Sachs are claiming to be technology companies! It is no secret that technological innovations like Data science and AI are changing fundamentally how financial products are created, tested and delivered. While it is exciting to learn about technologies themselves, there is very little guidance available to companies and financial professionals should retool and gear themselves towards the upcoming revolution.
In this master class, we will discuss key innovations in Data Science and AI and connect applications of these novel fields in forecasting and optimization. Through case studies and examples, we will demonstrate why now is the time you should invest to learn about the topics that will reshape the financial services industry of the future!
Topic
- Frontier topics in Optimization
The document discusses various concepts in machine learning and deep learning including:
1. The semantic gap between what computers can see/read from raw inputs versus higher-level semantics. Deep learning aims to close this gap through hierarchical representations.
2. Traditional computer vision techniques versus deep learning approaches for tasks like face recognition.
3. The differences between rule-based AI, machine learning, and deep learning.
4. Key components of supervised machine learning models including data, models, loss functions, and optimizers.
5. Different problem types in machine learning like regression, classification, and their associated model architectures, activation functions, and loss functions.
6. Frameworks for machine learning like Keras and
MS CS - Selecting Machine Learning Algorithm (Kaniska Mandal)
ML Algorithms usually solve an optimization problem such that we need to find parameters for a given model that minimizes
— Loss function (prediction error)
— Model simplicity (regularization)
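A concrete instance of these two terms is ridge regression, which minimizes squared prediction error plus alpha times the squared weight norm. The tiny one-dimensional closed-form version below uses made-up data.

```python
def ridge_1d(xs, ys, alpha):
    """argmin_w  sum (w*x - y)^2 + alpha * w^2  (closed form in 1D)."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / (sxx + alpha)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                  # exactly y = 2x
print(ridge_1d(xs, ys, alpha=0.0))    # 2.0: pure loss recovers the slope
print(ridge_1d(xs, ys, alpha=14.0))   # 1.0: regularization shrinks w toward 0
```

The alpha knob trades prediction error against model simplicity, which is exactly the balance described above.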
This document summarizes various algorithms topics including pattern matching, matrix multiplication, graph algorithms, algebraic problems, and NP-hard and NP-complete problems. It provides details on pattern matching techniques in computer science including exact string matching and applications. It also describes how to find the most efficient way to multiply a sequence of matrices by considering different orders of operations. Graph algorithms are introduced including directed and undirected graphs. Popular design approaches for algebraic problems such as divide-and-conquer, greedy techniques, and dynamic programming are outlined. Finally, the key differences between NP, NP-hard, and NP-complete problems are defined.
This document summarizes key concepts from the CS 221 lecture on machine learning. It discusses supervised learning techniques like Naive Bayes classification, linear regression, perceptrons, and SVMs. It also covers unsupervised learning through k-nearest neighbors and discusses challenges like overfitting, generalization, and the curse of dimensionality.
Evolutionary Optimization Algorithms & Large-Scale Machine Learning (University of Maribor)
The document discusses a workshop organized by the DAPHNE project on evolutionary optimization algorithms and large-scale machine learning. It includes an agenda for a use case workshop on September 26th, 2023 in Graz, Austria. The document also provides background on differential evolution methods, including descriptions of the algorithm, control parameters, applications, and related work on improvements.
This document provides an introduction to machine learning and inductive inference. It discusses what machine learning is, common learning tasks like concept learning and function learning, different data representations, and example applications such as knowledge discovery and building adaptive systems. The course will cover generalizing from specific examples to broader concepts through inductive inference and different learning approaches.
This document discusses object detection using Adaboost and various techniques. It begins with an overview of the Adaboost algorithm and provides a toy example to illustrate how it works. Next, it describes how Viola and Jones used Adaboost with Haar-like features and an integral image representation for rapid face detection in images. It achieved high detection rates with very low false positives. The document also discusses how Schneiderman and Kanade used a parts-based representation with localized wavelet coefficients as features for object detection and used statistical independence of parts to obtain likelihoods for classification.
Artificial Intelligence and Optimization with Parallelism
1. HABILITATION
Artificial intelligence
with Parallelism
Acknowledgments:
All the TAO team. People in Liège, Taiwan, LRI, Artelys, Mash, Iomca, ...
Thanks a lot to the committee.
Thanks + good recovery to Jonathan Shapiro.
Thanks to Grid5000.
Olivier Teytaud olivier.teytaud@inria.fr
2. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
3. AI = using computers where they
are weak / weaker than humans.
(thanks Michèle S.)
Difficult optimization (complex structure,
noisy objective functions)
Games (difficult ones)
Key difference with many operational research works:
AI = choosing a model as close as possible to reality and
(very) approximately solve it
OR = choosing the best model that you can solve almost exactly
7. Many works are about numbers.
Providing standard deviations, rates, etc.
Other goal (more ambitious ?):
switching from something which does not work
to something which works.
E.g. vision; a computer can distinguish:
9. And it's a disaster for categorizing
- children,
- women,
- pandas,
- babies,
- men,
- bears,
- trucks,
- cars.
11. A 3-year-old child can do it.
12. ==> AI = focus on things which do not
work and (hopefully) make them work.
13. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
14. Evolutionary optimization is a part of A.I.
Often considered as bad, because many
EO tools are not that hard,
mathematically speaking.
I've met people using
- randomized mutations
- cross-overs
but who did not call this evolutionary or
genetic, because it would be bad.
Gives a lot of freedom:
- choose your operators (depending on the problem)
- choose your population size λ (depending on your computer/grid)
- choose μ (carefully), e.g. μ = min(dimension, λ/4)
==> Can work on strange domains
19. Voronoi representation:
- a family of points
- their labels
==> cross-over makes sense
==> you can optimize a shape
Great substitute for
averaging.
“on the benefit of sex”
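A rough sketch of how such a Voronoi encoding and its crossover might look. The 2D setting, the vertical-cut crossover, and the mutation operator are all illustrative assumptions; the point is only that a shape encoded as labeled sites supports a geometrically meaningful crossover.

```python
import random

def shape_label(sites, x, y):
    """A shape is encoded as sites (x, y, label); a query point gets
    the label of its nearest site (i.e. of its Voronoi cell)."""
    sx, sy, lab = min(sites, key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)
    return lab

def crossover(a, b, rng):
    """Keep parent A's sites left of a random vertical cut and parent B's
    sites right of it: the child inherits a coherent piece of each shape."""
    cut = rng.uniform(0.0, 1.0)
    child = [s for s in a if s[0] < cut] + [s for s in b if s[0] >= cut]
    return child if child else list(a)   # guard against an empty child

def mutate(sites, rng, sigma=0.05):
    """Jitter one site; occasionally flip its label."""
    out = list(sites)
    i = rng.randrange(len(out))
    x, y, lab = out[i]
    if rng.random() < 0.2:
        lab = 1 - lab
    out[i] = (min(1.0, max(0.0, x + rng.gauss(0, sigma))),
              min(1.0, max(0.0, y + rng.gauss(0, sigma))), lab)
    return out

rng = random.Random(0)
parent_a = [(rng.random(), rng.random(), rng.randint(0, 1)) for _ in range(8)]
parent_b = [(rng.random(), rng.random(), rng.randint(0, 1)) for _ in range(8)]
child = mutate(crossover(parent_a, parent_b, rng), rng)
print(len(child), shape_label(child, 0.5, 0.5))
```

In a real shape-optimization loop, the fitness of an individual would be computed by rasterizing or simulating the shape that its sites induce.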
23. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
25. Parallelism.
Thank you G5K
Multi-core machines
Clusters
Grids
Sometimes parallelization completely changes
the picture.
Sometimes not.
We want to know when.
26. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases (robustness, slow rates)
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
31. Derivative-free optimization of f
Why derivative-free optimization ?
Ok, it's slower.
But sometimes you have no derivative.
It's simpler (by far) ==> fewer bugs.
32. Derivative-free optimization of f
Why derivative-free optimization ?
Ok, it's slower.
But sometimes you have no derivative.
It's simpler (by far) ==> fewer bugs.
It's more robust (to noise, to strange functions...).
Optimization algorithms:
==> Newton method
==> Quasi-Newton (BFGS)
==> Gradient descent
==> Derivative-free optimization (no gradient needed)
==> Comparison-based optimization (just needing comparisons), including evolutionary algorithms
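One of the simplest comparison-based, derivative-free optimizers is the (1+1)-ES with the 1/5th success rule. A minimal sketch on the sphere function follows; the step-size constants are the usual textbook choices, and the objective is illustrative.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, n_iters=500, seed=0):
    """(1+1)-ES with the 1/5th success rule: only f-values are used,
    no gradients, and in fact only the comparison fy <= fx matters."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(n_iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:                 # success: accept and enlarge the step
            x, fx = y, fy
            sigma *= 1.5
        else:                        # failure: shrink the step
            sigma *= 1.5 ** (-0.25)  # stationary when ~1/5 of steps succeed
    return x, fx

sphere = lambda x: sum(xi * xi for xi in x)
x, fx = one_plus_one_es(sphere, [5.0, -3.0, 2.0])
print(fx)
```

Because only comparisons are used, replacing `sphere` by any increasing transformation of it leaves the run unchanged, which is exactly the robustness property discussed next.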
39. Comparison-based algorithms are robust
Consider f: X --> R.
We look for x* such that
for all x, f(x*) ≤ f(x).
==> what if we see g o f (g increasing) ?
==> x* is the same, but x_n might change
parallel evolution 39
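This invariance can be checked directly: a comparison-based optimizer run on f and on g o f (g increasing) visits exactly the same points, since it only ever sees the outcome of comparisons. The toy (1+1)-style search and the particular g below are illustrative choices.

```python
import random, math

def compare_search(less, x0, n_iters=100, seed=42):
    """Toy comparison-based (1+1) search: 'less(a, b)' is the ONLY
    access to the objective; raw values are never observed."""
    rng = random.Random(seed)
    x = list(x0)
    trace = [tuple(x)]
    for _ in range(n_iters):
        y = [xi + rng.gauss(0, 0.3) for xi in x]
        if less(y, x):
            x = y
        trace.append(tuple(x))
    return trace

f = lambda x: sum(xi * xi for xi in x)
g = lambda z: math.exp(z) + 3.0          # an increasing transformation

trace_f  = compare_search(lambda a, b: f(a) <= f(b), [2.0, 2.0])
trace_gf = compare_search(lambda a, b: g(f(a)) <= g(f(b)), [2.0, 2.0])
print(trace_f == trace_gf)
```

With the same random seed, the proposals are identical, and since g is increasing every comparison gives the same answer, so the two traces coincide point by point.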
40. Robustness of comparison-based algorithms: formal statement
The sequence of iterates does not depend on g for a comparison-based algorithm;
hence a comparison-based algorithm is optimal for the worst case over increasing transformations g of f.
41. Complexity bounds (N = dimension)
Complexity = number of fitness evaluations needed for reaching precision ε
with probability at least 1/2, for all f.
Exp( - Convergence ratio ) = Convergence rate
Convergence ratio ~ 1 / computational cost
==> more convenient than convergence rate for speed-ups
42. Complexity bounds: basic technique
We want to know how many iterations we need for reaching precision ε
in an evolutionary algorithm.
Key observation: (most) evolutionary algorithms are comparison-based.
Let's consider (for simplicity) a deterministic selection-based non-elitist algorithm.
First idea: how many different branches do we have in a run ?
We select μ points among λ.
Therefore, at most K = λ! / ( μ! (λ - μ)! ) different branches per iteration.
Second idea: how many different answers should we be able to give ?
Use packing numbers: at least N(ε) different possible answers.
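The counting argument gives K^n ≥ N(ε), hence n ≥ log N(ε) / log K. A minimal sketch of the resulting iteration lower bound; the packing-number formula for the unit box is an illustrative assumption:

```python
from math import comb, log, ceil

def iteration_lower_bound(lam, mu, packing):
    """Lower bound on iterations n from the counting argument K**n >= N(eps):
    K = C(lam, mu) possible selections (branches) per iteration."""
    K = comb(lam, mu)               # K = lam! / (mu! (lam - mu)!)
    return ceil(log(packing) / log(K))

# illustrative packing number for the box [0,1]^5 at precision eps = 0.01:
# roughly (1 / (2 * eps)) ** N distinguishable eps-balls
n = iteration_lower_bound(lam=10, mu=3, packing=(1 / (2 * 0.01)) ** 5)
```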
49. Complexity bounds: basic technique
Conclusion: the number n of iterations should verify
K^n ≥ N(ε),
i.e. n ≥ log N(ε) / log K.
50. Complexity bounds on the convergence ratio
FR: full ranking (selected points are ranked)
SB: selection-based (selected points are not ranked)
51. Complexity bounds on the convergence ratio
This is why I love
cross-over.
FR: full ranking (selected points are ranked)
SB: selection-based (selected points are not ranked)
52. Complexity bounds on the convergence ratio
Fournier, T., 2009;
using VC-dim.
FR: full ranking (selected points are ranked)
SB: selection-based (selected points are not ranked)
53. Complexity bounds on the convergence ratio
Quadratic functions easier
than sphere functions ?
But not for translation invariant
quadratic functions...
FR: full ranking (selected points are ranked)
SB: selection-based (selected points are not ranked)
54. Complexity bounds on the convergence ratio
Quadratic functions easier than sphere functions ?
But not for translation-invariant quadratic functions...
Covers existing results.
Compliant with discrete domains.
FR: full ranking (selected points are ranked)
SB: selection-based (selected points are not ranked)
55. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
1) Mathematical proof that all comparison-based algorithms can be parallelized (log speed-up)
2) Practical hint: simple tricks for some well-known algorithms
59. Speculative parallelization with branching factor 3
Parallel version for D = 2:
population = union of all populations over 2 iterations.
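The trick can be sketched as follows: when each iteration of a comparison-based algorithm has a small number of possible outcomes (branching factor 3 here), expand all outcome sequences for D iterations speculatively, evaluate everything in one parallel batch, then keep the branch matching the real outcomes. The toy `step`/`select` functions below are placeholders, not an actual ES update:

```python
from itertools import product

def speculative_run(step, select, state, depth=2, branching=3):
    """Expand all branching**depth outcome sequences speculatively;
    in a real system their fitness evaluations run in parallel,
    turning depth sequential iterations into one parallel batch."""
    candidates = {}
    for path in product(range(branching), repeat=depth):
        s = state
        for outcome in path:
            s = step(s, outcome)      # advance assuming this outcome
        candidates[path] = s
    # afterwards the true comparison outcomes are revealed sequentially
    path, s = [], state
    for _ in range(depth):
        o = select(s)                 # the real outcome at this state
        path.append(o)
        s = step(s, o)
    return candidates[tuple(path)]    # equals the sequential result

# toy dynamics (placeholders): state is an int, outcome in {0, 1, 2}
result = speculative_run(lambda s, o: 3 * s + o, lambda s: s % 3, state=1)
```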
61. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
1) Mathematical proof that all comparison-based algorithms can be parallelized (log speed-up)
2) Practical hint: simple tricks for some well-known algorithms
62. Define σ* (the normalized step-size).
Necessary condition for a log(λ) speed-up:
- E log( σ* ) ~ log(λ)
But for many algorithms,
- E log( σ* ) = O(1)
==> asymptotically constant speed-up
63. These algorithms do not reach the log(λ) speed-up:
(1+1)-ES with 1/5th rule
Standard CSA
Standard EMNA
Standard SA
Teytaud, T, PPSN 2010
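For reference, a minimal sketch of the (1+1)-ES with the 1/5th success rule just mentioned; the step-size factors e^0.8 / e^-0.2 are one common choice, neutral exactly at a 1/5 success rate, and the sphere test function is illustrative:

```python
import math, random

def one_plus_one_es(f, x, sigma=1.0, iters=300):
    """(1+1)-ES with the 1/5th success rule: widen sigma on success,
    shrink it on failure, so the success rate hovers around 1/5."""
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * random.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:                 # success: accept, widen the search
            x, fx = y, fy
            sigma *= math.exp(0.8)   # neutral at 1/5: 0.2*0.8 = 0.8*0.2
        else:                        # failure: shrink the step-size
            sigma *= math.exp(-0.2)
    return x, fx

random.seed(0)
best, val = one_plus_one_es(lambda v: sum(t * t for t in v), [3.0, -2.0])
```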
64. Example 1: Estimation of Multivariate Normal Algorithm
While ( I have time )
{
  Generate λ points (x1,...,xλ) distributed as N(x, σ)
  Evaluate the fitness at x1,...,xλ
  x = mean of the μ best points
  σ = standard deviation of the μ best points
  σ /= log( λ / 7 )^(1/d)
}
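A runnable sketch of this EMNA loop (isotropic Gaussian, minimization; the sphere function and all parameter values are illustrative assumptions):

```python
import math, random, statistics

def emna(f, x0, lam=40, mu=10, iters=60):
    """EMNA sketch: sample lam points, keep the mu best, refit mean and
    standard deviation, and apply the log(lam/7)**(1/d) correction."""
    x, sigma, d = list(x0), 1.0, len(x0)
    for _ in range(iters):
        pts = [[xi + sigma * random.gauss(0, 1) for xi in x]
               for _ in range(lam)]
        pts.sort(key=f)                                  # evaluate and rank
        best = pts[:mu]
        x = [statistics.fmean(c) for c in zip(*best)]    # mean of best points
        sigma = statistics.fmean(statistics.pstdev(c) for c in zip(*best))
        sigma /= math.log(lam / 7) ** (1 / d)            # parallel correction
    return x

random.seed(1)
sol = emna(lambda v: sum(t * t for t in v), [4.0, 4.0, 4.0])
```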
65. Ex 2: Log(λ) correction for mutative self-adaptation
μ = min( λ/4, d )
While ( I have time )
{
  Generate λ step-sizes (σ1,...,σλ) as σ x exp(- k.N)
  Generate λ points (x1,...,xλ), with xi distributed as N(x, σi)
  Select the μ best points
  Update x (= mean), update σ (= log. mean)
}
66. Log(λ) corrections (SA, dim 3)
● In the discrete case (XPs): automatic parallelization surprisingly efficient.
● Simple trick in the continuous case:
- E log( σ* ) should be linear in log(λ)
(this provides corrections which work for SA and CSA)
68. SUMMARY of the EA part up to now:
- evolutionary algorithms are robust (with a precise statement of this robustness)
- evolutionary algorithms are somehow slow (precisely quantified...)
- evolutionary algorithms are parallel (at least “until” the dimension, for the convergence rate)
69. Now, noisy optimization.
70. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
71. Many works focus on fitness functions with “small” noise:
f(x) = ||x||² x (1 + Gaussian noise)
This is because the more realistic case
f(x) = ||x||² + Gaussian noise (variance > 0 at the optimum)
is too hard for publishing nice curves.
72. Many works focus on fitness functions with “small” noise:
f(x) = ||x||² x (1 + Gaussian noise)
This is because the more realistic case
f(x) = ||x||² + Gaussian noise
is too hard for publishing nice curves.
==> see however Arnold & Beyer 2006.
==> a tool: races (Heidrich-Meisner et al, ICML 2009)
- reevaluating until statistically significant differences
- ... but we must (sometimes) limit the number of reevaluations
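The racing idea can be sketched as follows: reevaluate two candidates until their empirical means differ significantly, with a hard cap on reevaluations. The z-test, thresholds, and noisy sphere are illustrative assumptions, not the cited method's exact procedure:

```python
import random, statistics

def race(f, x, y, z=2.58, min_evals=5, max_evals=200):
    """Race two candidates on a noisy fitness: reevaluate both until the
    difference of empirical means is significant (simple z-test sketch)
    or the reevaluation budget is exhausted."""
    fx = [f(x) for _ in range(min_evals)]
    fy = [f(y) for _ in range(min_evals)]
    while len(fx) < max_evals:
        mx, my = statistics.fmean(fx), statistics.fmean(fy)
        se = (statistics.pvariance(fx) / len(fx)
              + statistics.pvariance(fy) / len(fy)) ** 0.5
        if se == 0 or abs(mx - my) > z * se:
            break                                  # significant: stop early
        fx.append(f(x))                            # otherwise keep sampling
        fy.append(f(y))
    winner = x if statistics.fmean(fx) <= statistics.fmean(fy) else y
    return winner, len(fx)

random.seed(2)
noisy_sphere = lambda v: sum(t * t for t in v) + random.gauss(0, 0.5)
winner, evals = race(noisy_sphere, [0.1, 0.1], [1.0, 1.0])
```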
73. Another difficult case: Bernoulli functions.
fitness(x) = B( f(x) )
f(0) not necessarily = 0.
74. Another difficult case: Bernoulli functions.
fitness(x) = B( f(x) )
f(0) not necessarily = 0.
Approach: EDA + races, based on MaxUncertainty (Coulom).
I like this case with p = 2.
We prove good results here.
79. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
80. The game of Go is a part of AI.
Computers are ridiculous in front of children.
Easy situation.
Termed “semeai”.
Requires a little bit
of abstraction.
81. The game of Go is a part of AI.
Computers are ridiculous in front of children.
800 cores, 4.7 GHz, top-level program.
Plays a stupid move.
82. The game of Go is a part of AI.
Computers are ridiculous in front of children.
8 years old;
little training;
finds the good move
83. Introduction
What is AI ?
Why evolutionary optimization is a part of AI
Why parallelism ?
Evolutionary computation
Comparison-based optimization
Parallelization
Noisy cases
Sequential decision making
Fundamental facts
Monte-Carlo Tree Search
Conclusion
84. Monte-Carlo Tree Search
1. Games (a bit of formalism)
2. Decidability / complexity
Games with simultaneous actions 84 Paris 1st of February
85. A game is a directed graph
(figure: a directed graph of game states)
86. A game is a directed graph with actions
(figure: the same graph, edges labeled with actions 1, 2, 3)
87. A game is a directed graph with actions and players
(figure: nodes assigned to the White and Black players)
88. A game is a directed graph with actions and players and observations
(figure: observations such as Bob, Bear, Bee attached to nodes)
89. A game is a directed graph with actions and players and observations and rewards
Rewards on leafs only! (figure: leaf rewards +1 and 0)
90. A game is a directed graph + actions + players + observations + rewards + loops
(figure: the same graph with a loop added)
91. Monte-Carlo Tree Search
1. Games (a bit of formalism)
2. Decidability / complexity
92. Complexity (2P, no random), by horizon (unbounded / exponential / polynomial):
Full Observability: EXP / EXP / PSPACE
No obs (X=100%): EXPSPACE / NEXP (Hasslum et al, 2000)
Partially Observable (X=100%): 2EXP (Rintanen, 97) / EXPSPACE
Simult. Actions: ? EXPSPACE ? / <= EXP / <= EXP
No obs / PO: undecidable
93. Complexity question ? (UD)
Instance = position.
Question = Is there a strategy which wins whatever the decisions of the opponent are ?
= natural question if full observability.
Answering this question then allows perfect play.
94. Hummm ?
Do you know a PO game in which you can
ensure a win with probability 1 ?
95. Complexity question for matrix games ?
1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 1
Good for the column-player !
==> but no sure win.
==> the “UD” question is not relevant here!
96. Complexity question for phantom-games ? (Joint work with F. Teytaud)
This is phantom-go.
Good for black: wins with proba 1 - 1/(8!).
Here, there's no move which ensures a win.
But some moves are much better than others!
99. Madani et al.
1 player + random = undecidable.
We extend this to two players with no random.
Problem: rewrite random nodes, thanks to an additional player.
102. A random node to be rewritten
Rewritten as follows:
Player 1 chooses a in [[0, N-1]]
Player 2 chooses b in [[0, N-1]]
c = (a + b) modulo N
Go to t_c
Each player can force the game to be equivalent to the initial one (by playing uniformly)
==> the proba of winning for player 1 (in case of perfect play) is the same as for the initial game
==> undecidability!
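A quick numerical check of the claim (function and strategy names hypothetical): if player 1 plays a uniformly, then c = (a + b) mod N is uniform whatever player 2 does, so either player can force the rewritten node to behave like the original random node:

```python
import random
from collections import Counter

def rewritten_node(N, strategy_b, trials=60000):
    """Player 1 plays uniformly; player 2 follows an arbitrary strategy.
    Returns the empirical distribution of c = (a + b) mod N."""
    counts = Counter()
    for _ in range(trials):
        a = random.randrange(N)          # uniform play by player 1
        b = strategy_b()                 # adversarial or arbitrary play
        counts[(a + b) % N] += 1
    return counts

random.seed(3)
counts = rewritten_node(6, strategy_b=lambda: 4)   # player 2 always plays 4
freqs = [counts[c] / 60000 for c in range(6)]      # all close to 1/6
```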
103. Important remark
Existence of a strategy for winning with proba > 0.5
==> also undecidable for the restriction to games in which the proba is > 0.6 or < 0.4
==> not just a subtle precision trouble.
116. ... or exploration ?
SCORE = 0/2 + k.sqrt( log(10) / 2 )
Binary win/loss games: no exploration! (Berthier, D., T., 2010)
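The score on the slide is the classical UCB formula for an arm with 0 wins in 2 visits, out of 10 parent simulations; a minimal sketch, with the constant k as a tuning parameter:

```python
import math

def ucb_score(wins, visits, parent_visits, k=1.0):
    """UCB: empirical mean plus an exploration bonus that grows with the
    parent's simulation count and shrinks with this arm's visit count."""
    return wins / visits + k * math.sqrt(math.log(parent_visits) / visits)

score = ucb_score(wins=0, visits=2, parent_visits=10)  # 0/2 + sqrt(log(10)/2)
```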
117. Games vs pros
in the game of Go
First win in 9x9
First win over 5 games in 9x9 blind Go
First win with H2.5 in 13x13 Go
First win with H6 in 19x19 Go
First win with H7 in 19x19 Go vs top pro
118. ... or exploration ?
SCORE = 0/2 + k.sqrt( log(10) / 2 )
Simultaneous actions: replace it with EXP3 / INF
119. MCTS for simultaneous actions
(figure: a tree alternating “Player 1 plays”, “Player 2 plays”, and “Both players play” levels)
120. MCTS for simultaneous actions
(figure: the same tree; “Player 1 plays” = maxUCB node, “Player 2 plays” = minUCB node, “Both players play” = EXP3 node)
121. MCTS for hidden information
(figure: for each player, one EXP3 node per observation set)
122. MCTS for hidden information
(figure: same structure; incrementally + application to phantom-tic-tac-toe: see D. Auger 2010)
123. EXP3 in one slide
Grigoriadis et al, Auer et al, Audibert & Bubeck Colt 2009
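A minimal EXP3 sketch in the spirit of those references; the bandit instance and parameter values are illustrative assumptions:

```python
import math, random

def exp3(n_arms, reward, T, gamma=0.1):
    """EXP3: exponential weights with gamma-uniform exploration; observed
    rewards in [0, 1] are importance-weighted by the sampling probability."""
    w = [1.0] * n_arms
    def probs():
        s = sum(w)
        return [(1 - gamma) * wi / s + gamma / n_arms for wi in w]
    for _ in range(T):
        p = probs()
        arm = random.choices(range(n_arms), weights=p)[0]
        r = reward(arm)                                  # reward in [0, 1]
        w[arm] *= math.exp(gamma * r / (p[arm] * n_arms))
    return probs()

random.seed(4)
# hypothetical bandit: arm 2 pays off with proba 0.9, the others with 0.1
p = exp3(3, lambda a: 1.0 if random.random() < (0.9 if a == 2 else 0.1) else 0.0,
         T=2000)
```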
124. Monte-Carlo Tree Search
Application to Urban Rivals (simultaneous actions)
125. Let's have fun with Urban Rivals (4 cards)
Each player has
- four cards (each one can be used once)
- 12 pilz (each one can be used once)
- 12 life points
Each card has:
- one attack level
- one damage
- special effects (forget that...)
Four turns:
P1 attacks P2, P2 attacks P1,
P1 attacks P2, P2 attacks P1.
126. Let's have fun with Urban Rivals
First, attacker plays:
- chooses a card
- chooses ( PRIVATELY ) a number of pilz
Attack level = attack(card) x (1+nb of pilz)
Then, defender plays:
- chooses a card
- chooses a number of pilz
Defense level = attack(card) x (1+nb of pilz)
Result:
If attack > defense
  Defender loses Power(attacker's card)
Else
  Attacker loses Power(defender's card)
127. Let's have fun with Urban Rivals
==> The MCTS-based AI is now at the best human level.
Experimental (only) remarks on EXP3:
- discard strategies with a small number of sims = better approx of the Nash
- also an improvement by taking into account the other bandit
- virtual simulations (inspired by Kummer)
128. When is MCTS relevant ?
Robust in front of:
High dimension;
Non-convexity of Bellman values;
Complex models
Delayed reward
Simultaneous actions, partial information
More difficult for
High values of H;
Model-free
Highly unobservable cases (Monte-Carlo, but not Monte-Carlo Tree
Search, see Cazenave et al.)
Lack of reasonable baseline for the MC
129. When is MCTS relevant ?
(T., Dagstuhl 2010; D. Auger, EvoStar 2011; unpublished results on undecidability; some endgames)
Robust in front of:
High dimension;
Non-convexity of Bellman values;
Complex models
Delayed reward
Simultaneous actions
More difficult for
High values of H;
Model-free
Highly unobservable cases (Monte-Carlo, but not Monte-Carlo Tree Search, see Cazenave et al.)
Lack of reasonable baseline for the MC
130. Conclusion
Evo. Opt: robustness, tight bounds, simple
algorithmic modifs for better speed-up (SA, 1/5th,
(CSA))
MCTS just great (but requires a model); UCB
not necessary; extension to hidden info (rmk:
undecidability); PO endgames; but no abstraction
power.
Noisy optimization: Consider high noise. Use
QR and Learning (in all EA in fact).
Not mentioned here: multimodal, multiobj, GP, bandits.
131. Future ?
- Solving semeais ? Would involve great AI progress I think...
- Noisy optimization; there are still things to be done.
==> Promoting high noise fitness functions even if it is less
publication-efficient.
- ``Inheritance'' of belief state in partially observable games.
Big progress to be done. Crucial for applications.
- Sparse bandits / mixed stochastic/adversarial cases.
Thanks for your attention.
Thanks to all collaborators for all I've learnt with them.
133. MCTS with hidden information: incremental version
While (there is time for thinking)
{
  s = initial state
  os(1) = ()   os(2) = ()            // observation sequences of both players
  while (s not terminal)
  {
    p = player(s)
    b = Exp3Bandit(os(p))            // one bandit per observation sequence
    d = b.makeDecision
    (s, o) = transition(s, d)
    os(p) = os(p) + (o)              // record the new observation
  }
  send reward to all bandits in the simulation
}
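A runnable sketch of this loop on a hypothetical one-shot game (simultaneous hidden moves; player 1 wins iff both pick the same arm). The game, parameters, and class names are illustrative, not the deck's actual implementation:

```python
import math, random
from collections import defaultdict

class Exp3Bandit:
    """One EXP3 bandit per observation sequence, as in the loop above."""
    def __init__(self, n_arms=2, gamma=0.2):
        self.w, self.gamma, self.n = [1.0] * n_arms, gamma, n_arms
    def probs(self):
        s = sum(self.w)
        return [(1 - self.gamma) * wi / s + self.gamma / self.n
                for wi in self.w]
    def make_decision(self):
        p = self.probs()
        self.last = random.choices(range(self.n), weights=p)[0]
        self.last_p = p[self.last]
        return self.last
    def send_reward(self, r):        # importance-weighted update
        self.w[self.last] *= math.exp(self.gamma * r / (self.last_p * self.n))
        m = max(self.w)              # renormalize to avoid overflow
        self.w = [wi / m for wi in self.w]

def simulate(n_sims=2000):
    bandits = {1: defaultdict(Exp3Bandit), 2: defaultdict(Exp3Bandit)}
    for _ in range(n_sims):
        os = {1: (), 2: ()}          # observation sequences (empty: one-shot)
        used = []
        for p in (1, 2):             # each player moves once, blindly
            b = bandits[p][os[p]]
            b.make_decision()
            used.append((p, b))
        r1 = 1.0 if used[0][1].last == used[1][1].last else 0.0
        for p, b in used:            # send reward to all bandits used
            b.send_reward(r1 if p == 1 else 1.0 - r1)
    return bandits[1][()].probs()

random.seed(5)
p1 = simulate()                      # mixed strategy of player 1 at the root
```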
140. MCTS with hidden information: incremental version
Same loop as on slide 133; possibly refine the family of bandits.