This document provides an overview of chess algorithms and techniques used in computer chess programs. It discusses the complexity of chess, the history of computer chess programs, common search algorithms like minimax and negamax, and pruning techniques like alpha-beta pruning. It also covers challenges like node explosion, transposition tables, endgame tablebases, and evaluation functions. The goal is to explain how modern chess programs are able to search deeply despite the enormous game tree size.
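The negamax formulation mentioned above can be sketched briefly. This is a minimal illustration, not the document's own code: it exploits the zero-sum symmetry max(a, b) = -min(-a, -b) so one routine serves both players. The tree and leaf scores are hypothetical.

```python
# Negamax sketch: a single recursive routine replaces separate
# MAX and MIN cases. Leaves are ints scored from the root player's
# point of view; internal nodes are lists of children.

def negamax(node, color):
    """`color` is +1 when the root player is to move, -1 otherwise."""
    if isinstance(node, int):          # leaf: static evaluation
        return color * node
    return max(-negamax(child, -color) for child in node)

# Hypothetical depth-2 tree: root is to move, children are opponent nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(negamax(tree, 1))  # 3
```

The appeal of negamax over plain minimax is purely structural: one code path instead of two, which also simplifies adding alpha-beta windows later.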
Game playing has been an important area of AI research as games allow experiments with adversarial situations in a constrained environment. Minimax search and alpha-beta pruning are commonly used techniques for two-player zero-sum games, evaluating the game tree to varying depths depending on the time available and using a static board evaluator to estimate leaf node values. While early AI programs solved simple games like tic-tac-toe perfectly, scaling search to capture the complexity of chess required innovations like progressive deepening, quiescence search, and evaluation function learning from self-play. Modern AI programs now surpass top human players in many games including checkers, Scrabble, Othello, and backgammon.
Game playing was one of the earliest areas of AI research. Games allow researchers to experiment with adversarial situations in a constrained environment. Minimax search and alpha-beta pruning are commonly used techniques to search game trees. Evaluating board positions gets more difficult farther from the leaf nodes, so most game programs use heuristic evaluators with limited lookahead. Modern AI programs have surpassed humans in many games like checkers, Othello, and chess through increasingly powerful search and evaluation methods.
1. Game playing is an important domain for artificial intelligence research as games provide formal reasoning problems that allow direct comparison between computer programs and humans.
2. Alpha-beta pruning can speed up minimax search in game trees by pruning branches that cannot alter the outcome. It works by maintaining lower and upper bounds on the score.
3. Evaluating leaf nodes is challenging. For chess, linear evaluation functions combining weighted features like material and position are commonly used, and reinforcement learning can help tune the weights.
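A linear evaluation function of the kind described in point 3 can be sketched as follows. The piece weights here are conventional centipawn values used for illustration, not values taken from the document; a real engine would add positional terms and tune the weights.

```python
# A minimal linear material evaluator (hypothetical weights).
# Uppercase letters are White pieces, lowercase are Black.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def evaluate(board):
    """Sum weighted material from White's point of view.

    `board` is assumed to be an iterable of piece letters.
    """
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# Two white pawns and a queen versus a black knight.
print(evaluate(["P", "P", "n", "Q"]))  # 780
```

Because the function is linear in its feature weights, tuning it by reinforcement learning (as the text notes) reduces to adjusting the entries of `PIECE_VALUES` and any added positional weights.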
Game playing problems can be modeled as game trees to apply search techniques like minimax and alpha-beta pruning. Minimax searches a game tree to determine the move that maximizes the minimum payoff against an optimal opponent. It recursively evaluates nodes based on the minimax rule. Alpha-beta pruning improves performance by avoiding evaluating subtrees that cannot impact the result. Chance nodes in games like Backgammon use expected values to model uncertainty from dice rolls.
Game playing can be studied to understand problems involving adversarial agents. Games define large search spaces that require minimal initial structure to study. Typical perfect games have two players who alternate moves in a zero-sum game with complete information and no chance elements. To play such a game, one considers all legal moves and evaluates the resulting positions to determine the best move. Minimax search evaluates positions and uses the minimax rule to select the move that maximizes the guaranteed payoff. Alpha-beta pruning improves minimax search by avoiding evaluating unpromising moves.
The document discusses game playing and artificial intelligence. It notes that games provide well-defined problems that require intelligence to play well and introduce uncertainty since opponents' moves cannot be predicted. It describes how search spaces can be very large for games like chess. The document then discusses how humans seem to rely on pattern recognition rather than extensive search. It also notes that games provide a good test domain for search methods and pruning techniques.
This document discusses adversarial search techniques for game playing. It introduces minimax search and alpha-beta pruning to find optimal moves. Minimax search finds the best possible outcome against an optimal opponent by choosing the move with the highest minimax value. Alpha-beta pruning improves search efficiency by pruning branches that cannot affect the result. While minimax search is optimal, its complexity makes it infeasible for games like chess; heuristics and resource limits must be used to approximate search.
This document provides an introduction to using artificial intelligence for games. It discusses how AI can be used to create challenging opponents or helpful allies that act autonomously based on their programming. It notes that while human-level general intelligence is difficult to achieve, AI can perform well in narrow contexts like chess. For games, the AI must be intentionally flawed to ensure a fun challenge, yet cannot have obvious weaknesses. It must also be able to perform calculations and make decisions in real-time to interact with the game. The document then outlines some common AI techniques used in games, including MinMax search trees and finite state machines to control agent behavior. It provides pseudocode examples of the MinMax algorithm and discusses enhancements like Alpha-Beta pruning.
This document provides an introduction to using artificial intelligence for games. It discusses how AI can be used to create challenging opponents or helpful allies that act autonomously based on their perceptions and the game rules. The key techniques covered include MinMax and alpha-beta pruning for decision making, finite state machines to define agent behavior, and evaluating game states. It also outlines some of the additional considerations for AI in games compared to other applications, such as ensuring agents are intentionally flawed to provide a fun challenge and can perform actions in real-time to match the pace of gameplay.
The document provides an introduction to using artificial intelligence for games. It discusses how AI can be used to create challenging opponents or helpful allies that act autonomously based on game rules and circumstances. The key techniques covered include MinMax for decision making, finite state machines to model agent behavior, and evaluating board positions in games like chess. It also notes important considerations for AI in games like responding in real-time, allowing players to defeat the AI in fun ways, and adjusting the difficulty through the "amount and type of AI."
Minimax is an algorithm that is commonly used for game playing. It works by constructing a game tree that represents all possible future game states up to a certain depth, and uses an evaluation function to assign a value to each state. The minimax algorithm traverses the tree and "backs up" values from the leaves to the root by maximizing values at MAX nodes and minimizing values at MIN nodes. Alpha-beta pruning improves upon minimax by pruning branches that cannot affect the final output value.
This document provides a brief history of chess and an overview of chess engine programming fundamentals. It discusses:
- Key developments in computer chess from the 18th century to modern champions like AlphaZero and Stockfish.
- Core concepts like bitboards, evaluation functions, minimax algorithm, move generation, alpha-beta pruning, and transposition tables.
- Additional techniques such as opening books, iterative deepening, and tools/protocols like UCI and FEN.
- Examples of how concepts are implemented, including code snippets for bitboards, evaluation tables, and minimax searches.
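Since the document's own bitboard snippets are not reproduced here, a minimal sketch of the idea may help: a bitboard packs the 64 squares into one integer, and piece-attack sets become shift-and-mask expressions. The square mapping (bit 0 = a1, bit 63 = h8) and the mask constants are one common convention, assumed here rather than taken from the document.

```python
# Bitboards: 64 squares as bits of one integer (bit 0 = a1, ..., bit 63 = h8).

def square_index(file, rank):        # file, rank in 0..7
    return rank * 8 + file

def knight_attacks(sq):
    """Knight moves from `sq`, computed by shifting and masking off
    moves that would wrap around the board edge."""
    bb = 1 << sq
    not_a  = 0xFEFEFEFEFEFEFEFE     # every square except the a-file
    not_ab = 0xFCFCFCFCFCFCFCFC     # except the a- and b-files
    not_h  = 0x7F7F7F7F7F7F7F7F     # except the h-file
    not_gh = 0x3F3F3F3F3F3F3F3F     # except the g- and h-files
    attacks = ((bb << 17) & not_a)  | ((bb << 15) & not_h)  \
            | ((bb << 10) & not_ab) | ((bb << 6)  & not_gh) \
            | ((bb >> 17) & not_h)  | ((bb >> 15) & not_a)  \
            | ((bb >> 10) & not_gh) | ((bb >> 6)  & not_ab)
    return attacks & 0xFFFFFFFFFFFFFFFF

# A knight on a1 attacks exactly b3 and c2.
print(bin(knight_attacks(square_index(0, 0))).count("1"))  # 2
```

The payoff is that move generation and evaluation terms reduce to bitwise AND/OR/shift operations, which is what lets engines evaluate positions so quickly.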
This document provides a brief history of chess and chess engine programming fundamentals. It discusses:
- Key developments in computer chess from the 18th century to modern champions like AlphaZero and Stockfish.
- Core concepts like bitboards, evaluation functions, minimax algorithm, move generation, alpha-beta pruning, and transposition tables.
- How modern engines use these techniques like iterative deepening, move ordering, and quiescence search to achieve superhuman performance within limited processing power.
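Of the techniques listed above, the transposition table is easy to sketch in miniature. This toy version memoizes a fixed-depth search on a trivial number game (state is an integer, each "move" adds 1 or 2, the evaluation is the state itself) — all of which are assumptions for illustration. Real engines key the table with Zobrist hashes and also store search depth and bound type.

```python
# Transposition table sketch: positions reached via different move
# orders are searched only once; repeat visits hit the cache.
transposition_table = {}

def search(state, depth, evaluate, moves, apply_move, maximizing=True):
    key = (state, depth, maximizing)
    if key in transposition_table:
        return transposition_table[key]          # cache hit
    if depth == 0 or not moves(state):
        value = evaluate(state)
    else:
        children = [search(apply_move(state, m), depth - 1, evaluate,
                           moves, apply_move, not maximizing)
                    for m in moves(state)]
        value = max(children) if maximizing else min(children)
    transposition_table[key] = value
    return value

# Toy game: state 0, each player adds 1 or 2, score is the final state.
value = search(0, 4, evaluate=lambda s: s,
               moves=lambda s: [1, 2], apply_move=lambda s, m: s + m)
print(value)  # 6
```

Even in this toy, states like 2 are reachable as 0+1+1 and 0+2, and the table collapses those duplicate subtrees into a single lookup — the same effect, at much larger scale, that makes transposition tables essential in chess.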
This document discusses algorithms and techniques for game playing by computers, including minimax, alpha-beta pruning, and heuristic evaluation functions. It provides examples of computers that have achieved world-champion level performance at various games, such as Deep Blue in chess, Chinook in checkers, and AlphaGo in Go. While early programs relied mainly on brute force search, modern approaches also incorporate huge databases, pattern recognition, and machine learning. Games have become an important domain for advancing artificial intelligence research.
This document discusses techniques for game playing in artificial intelligence. It covers topics like game trees, minimax algorithm, alpha-beta pruning, static evaluation functions, and games that AI has been applied to like checkers, chess, and Go. The document provides examples of how minimax and alpha-beta pruning work and discusses challenges in searching large game trees.
This document discusses game theory concepts like game trees, minimax search, and alpha-beta pruning. It explains that game trees model all possible moves in a game but quickly become intractable due to their exponential size. Minimax search evaluates game states using heuristics and searches the tree to find the optimal move, but is inefficient. Alpha-beta pruning improves on minimax by pruning branches that cannot affect the choice of move, allowing deeper search and exponential time savings. Tic-Tac-Toe is used as a running example to illustrate these concepts.
The document discusses artificial intelligence and adversarial search in games. It describes how minimax search works for deterministic two-player zero-sum games like tic-tac-toe by searching the game tree and using alpha-beta pruning to improve search efficiency. Evaluation functions are used to estimate leaf node values. For games with chance elements like Backgammon, expectimax search averages the child node values rather than taking the maximum or minimum. Learning evaluation functions from self-play using temporal difference methods like TD-Gammon has achieved championship-level performance in Backgammon.
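The expectimax rule described above — averaging at chance nodes instead of maximizing or minimizing — can be sketched on a tiny hand-built tree. The node encoding and leaf values are illustrative assumptions, and the chance nodes here assume equally likely outcomes.

```python
# Expectimax sketch: chance nodes take the expected value of their
# children; MAX and MIN nodes behave as in ordinary minimax.
# Nodes are tagged tuples: ("leaf", value) or (kind, [children]).

def expectimax(node):
    kind = node[0]
    if kind == "leaf":
        return node[1]
    children = [expectimax(c) for c in node[1]]
    if kind == "max":
        return max(children)
    if kind == "min":
        return min(children)
    if kind == "chance":               # equally likely outcomes
        return sum(children) / len(children)
    raise ValueError(f"unknown node kind: {kind}")

# MAX chooses between two dice-like chance nodes.
tree = ("max", [
    ("chance", [("leaf", 2), ("leaf", 8)]),   # expected value 5.0
    ("chance", [("leaf", 4), ("leaf", 4)]),   # expected value 4.0
])
print(expectimax(tree))  # 5.0
```

Note that, unlike pure minimax, expectimax is sensitive to the scale of the evaluation function: averaging means that exaggerating one leaf value can flip the decision, which is part of why learned evaluations (as in TD-Gammon) matter for games of chance.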
This document discusses game playing as an area of artificial intelligence research. It begins with case studies of computers playing chess at grandmaster level, with IBM's Deep Blue being the first computer to defeat a reigning world champion in 1997. Game playing is described as a good problem for AI due to games being well-defined, repeatable problems that allow direct comparison of human and computer performance. Game playing is framed as a search problem over the game tree defined by legal moves. Optimal strategies can be found through minimax search with alpha-beta pruning to reduce the search space. Static board evaluators are used to estimate non-terminal states.
Adversarial search is an algorithm used in game playing to plan ahead when other agents are planning against you. The minimax algorithm determines the optimal strategy by assuming the opponent will make the best counter-move. It searches the game tree to find the move with the highest minimum payoff. α-β pruning improves on minimax by pruning branches that cannot affect the choice of move. State-of-the-art game programs use techniques like precomputed databases, deep search trees, and pattern knowledge bases to defeat human champions at games like checkers, chess, and Othello.
1) Adversarial search involves searching game trees to find the best move in games with competitive players like tic-tac-toe and chess. Important concepts include pruning unwanted search tree portions and heuristic evaluation functions.
2) A game tree represents all possible game states, moves, and outcomes. Elements of game playing search include the initial state, players, legal moves, game results, end states, and payoffs for winning, losing, or drawing.
3) The minimax algorithm searches game trees to determine the best guaranteed outcome against an optimal opponent. It recursively evaluates nodes using the minimax rule to maximize the minimum payoff for the maximizing player.
The document summarizes the evolution of computer chess from its early beginnings in the 1950s to more advanced programs in the late 1990s and early 2000s. It describes several notable computer chess programs from each era like Deep Thought, Deep Blue, Chessterfield vi3, Star Wars Chess, and Chessmaster 8000. It also discusses the improvements in algorithms, search techniques, and hardware that led to stronger computer chess players over time.
Here are the steps to solve an 8-puzzle problem using BFS or A*:
1. Represent the start and goal states as 3x3 matrices with the numbers 1-8 and a blank space.
2. For BFS:
- Create a queue and add the start state to it
- Repeatedly dequeue the first state and enqueue its successors
- A successor is obtained by swapping the blank space with an adjacent number
- Continue until the goal state is found or the queue is empty
3. For A*:
- Create a priority queue ordered by f(n) = g(n) + h(n)
- Where g(n) is the cost to reach state n and h(n) is a heuristic estimate of the remaining cost to the goal (e.g. the number of misplaced tiles or the Manhattan distance)
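The BFS recipe above can be sketched directly. This is a minimal illustration under the stated representation (a 3x3 board flattened into a tuple, 0 standing for the blank); the A* variant would replace the plain queue with a priority queue ordered by f(n) = g(n) + h(n).

```python
from collections import deque

def successors(state):
    """Yield states reachable by swapping the blank with an adjacent tile."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def bfs(start, goal):
    """Breadth-first search; returns the number of moves, or None."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        state, depth = queue.popleft()
        if state == goal:
            return depth
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None                       # goal unreachable from start

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 7, 0, 8)   # one blank-swap away from the goal
print(bfs(start, goal))  # 1
```

The `seen` set is what keeps BFS from re-enqueueing states reached earlier; without it the queue grows without bound on this puzzle.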
This document discusses adversarial search techniques used in artificial intelligence to model games as search problems. It introduces the minimax algorithm and alpha-beta pruning to determine optimal strategies by looking ahead in the game tree. These techniques allow computers to search deeper and play games like chess and Go at a world-champion level by evaluating board positions and pruning unfavorable branches in the search.
Two player games involve two players alternating turns trying to maximize or minimize the outcome of a game. The MiniMax algorithm is commonly used to choose the best move for a player in two player games by recursively evaluating all possible future moves. Alpha-beta pruning improves upon MiniMax by pruning branches that cannot influence the final outcome, allowing deeper search of the game tree within the same time limit.
AlphaGo Zero is an AI agent created by DeepMind to master the game of Go without human data or expertise. It uses reinforcement learning through self-play with the following key aspects:
1. It uses a single deep neural network that predicts both the next move and the winner of the game from the current board position. This dual network is trained solely through self-play reinforcement learning.
2. The neural network improves the Monte Carlo tree search used to select moves. The search uses the network predictions to guide selection and backup of information during search.
3. Training involves repeated self-play games to generate data, then using this data to update the neural network parameters through gradient descent. The updated network then plays further self-play games, and the cycle repeats.
Deep Blue was an IBM computer that defeated world chess champion Garry Kasparov in 1997. It used specialized chess chips that could evaluate millions of positions per second to search the game tree deeply. Each chip contained a move generator, evaluation function with fast and slow components, and search control logic. After initial losses to Kasparov in 1996, improvements to Deep Blue's evaluation function and search algorithms allowed it to win the rematch in 1997, demonstrating that a computer could defeat the top human player at chess under tournament time controls.
Game-Playing & Adversarial Search was covered in two lectures. Minimax search finds the optimal strategy but is impractical for large games. Minimax with alpha-beta pruning improves search efficiency by pruning subtrees that cannot affect the result. Iterative deepening allows more search within time limits by incrementally increasing search depth. Heuristics help guide search and handle limited lookahead.
The document provides information about game playing and constraint satisfaction problems (CSP). It discusses adversarial search techniques like minimax algorithm and alpha-beta pruning that are used for game playing. The minimax algorithm uses recursion to search through the game tree and find the optimal move. Alpha-beta pruning improves on minimax by pruning parts of the tree that are guaranteed not to affect the outcome. The document also mentions other topics like Monte Carlo tree search, stochastic games with elements of chance, and formalization of game state, actions, results, and utilities.
More Related Content
Similar to chess-algorithms-theory-and-practice_ver2017.pdf
This document provides an introduction to using artificial intelligence for games. It discusses how AI can be used to create challenging opponents or helpful allies that act autonomously based on their perceptions and the game rules. The key techniques covered include MinMax and alpha-beta pruning for decision making, finite state machines to define agent behavior, and evaluating game states. It also outlines some of the additional considerations for AI in games compared to other applications, such as ensuring agents are intentionally flawed to provide a fun challenge and can perform actions in real-time to match the pace of gameplay.
The document provides an introduction to using artificial intelligence for games. It discusses how AI can be used to create challenging opponents or helpful allies that act autonomously based on game rules and circumstances. The key techniques covered include MinMax for decision making, finite state machines to model agent behavior, and evaluating board positions in games like chess. It also notes important considerations for AI in games like responding in real-time, allowing players to defeat the AI in fun ways, and adjusting the difficulty through the "amount and type of AI."
Minimax is an algorithm that is commonly used for game playing. It works by constructing a game tree that represents all possible future game states up to a certain depth, and uses an evaluation function to assign a value to each state. The minimax algorithm traverses the tree and "backs up" values from the leaves to the root by maximizing values at MAX nodes and minimizing values at MIN nodes. Alpha-beta pruning improves upon minimax by pruning branches that cannot affect the final output value.
This document provides a brief history of chess and an overview of chess engine programming fundamentals. It discusses:
- Key developments in computer chess from the 18th century to modern champions like AlphaZero and StockFish.
- Core concepts like bitboards, evaluation functions, minimax algorithm, move generation, alpha-beta pruning, and transposition tables.
- Additional techniques such as opening books, iterative deepening, and tools/protocols like UCI and FEN.
- Examples of how concepts are implemented, including code snippets for bitboards, evaluation tables, and minimax searches.
This document provides a brief history of chess and chess engine programming fundamentals. It discusses:
- Key developments in computer chess from the 18th century to modern champions like AlphaZero and StockFish.
- Core concepts like bitboards, evaluation functions, minimax algorithm, move generation, alpha-beta pruning, and transposition tables.
- How modern engines use these techniques like iterative deepening, move ordering, and quiescence search to achieve superhuman performance within limited processing power.
This document discusses algorithms and techniques for game playing by computers, including minimax, alpha-beta pruning, and heuristic evaluation functions. It provides examples of computers that have achieved world-champion level performance at various games, such as Deep Blue in chess, Chinook in checkers, and AlphaGo in Go. While early programs relied mainly on brute force search, modern approaches also incorporate huge databases, pattern recognition, and machine learning. Games have become an important domain for advancing artificial intelligence research.
This document discusses techniques for game playing in artificial intelligence. It covers topics like game trees, minimax algorithm, alpha-beta pruning, static evaluation functions, and games that AI has been applied to like checkers, chess, and Go. The document provides examples of how minimax and alpha-beta pruning work and discusses challenges in searching large game trees.
This document discusses game theory concepts like game trees, minimax search, and alpha-beta pruning. It explains that game trees model all possible moves in a game but quickly become intractable due to their exponential size. Minimax search evaluates game states using heuristics and searches the tree to find the optimal move, but is inefficient. Alpha-beta pruning improves on minimax by pruning branches that cannot affect the choice of move, allowing deeper search and exponential time savings. Tic-Tac-Toe is used as a running example to illustrate these concepts.
The document discusses artificial intelligence and adversarial search in games. It describes how minimax search works for deterministic two-player zero-sum games like tic-tac-toe by searching the game tree and using alpha-beta pruning to improve search efficiency. Evaluation functions are used to estimate leaf node values. For games with chance elements like Backgammon, expectimax search averages the child node values rather than taking the maximum or minimum. Learning evaluation functions from self-play using temporal difference methods like TD-Gammon has achieved championship-level performance in Backgammon.
This document discusses game playing as an area of artificial intelligence research. It begins with case studies of computers playing chess at grandmaster level, with IBM's Deep Blue being the first computer to defeat a reigning world champion in 1997. Game playing is described as a good problem for AI due to games being well-defined, repeatable problems that allow direct comparison of human and computer performance. Game playing is framed as a search problem over the game tree defined by legal moves. Optimal strategies can be found through minimax search with alpha-beta pruning to reduce the search space. Static board evaluators are used to estimate non-terminal states.
Adversarial search is an algorithm used in game playing to plan ahead when other agents are planning against you. The minimax algorithm determines the optimal strategy by assuming the opponent will make the best counter-move. It searches the game tree to find the move with the highest minimum payoff. α-β pruning improves on minimax by pruning branches that cannot affect the choice of move. State-of-the-art game programs use techniques like precomputed databases, deep search trees, and pattern knowledge bases to defeat human champions at games like checkers, chess, and Othello.
1) Adversarial search involves searching game trees to find the best move in games with competitive players like tic-tac-toe and chess. Important concepts include pruning unwanted search tree portions and heuristic evaluation functions.
2) A game tree represents all possible game states, moves, and outcomes. Elements of game playing search include the initial state, players, legal moves, game results, end states, and payoffs for winning, losing, or drawing.
3) The minimax algorithm searches game trees to determine the best guaranteed outcome against an optimal opponent. It recursively evaluates nodes using the minimax rule to maximize the minimum payoff for the maximizing player.
The document summarizes the evolution of computer chess from its early beginnings in the 1950s to more advanced programs in the late 1990s and early 2000s. It describes several notable computer chess programs from each era like Deep Thought, Deep Blue, Chessterfield vi3, Star Wars Chess, and Chessmaster 8,000. It also discusses the improvements in algorithms, search techniques, and hardware that led to stronger computer chess players over time.
Here are the steps to solve an 8-puzzle problem using BFS or A*:
1. Represent the start and goal states as 3x3 matrices with the numbers 1-8 and a blank space.
2. For BFS:
- Create a queue and add the start state to it
- Repeatedly dequeue the first state and enqueue its successors
- A successor is obtained by swapping the blank space with an adjacent number
- Continue until the goal state is found or the queue is empty
3. For A*:
- Create a priority queue ordered by f(n) = g(n) + h(n)
- Where g(n) is the cost to reach state n
1. Chess Algorithms
Theory and Practice
Rune Djurhuus
Chess Grandmaster
runed@ifi.uio.no / runedj@microsoft.com
September 20, 2017
2. Content
• Complexity of a chess game
• Solving chess, is it a myth?
• History of computer chess
• Chess compared to Go
• Search trees and position evaluation
• Minimax: The basic search algorithm
• Negamax: «Simplified» minimax
• Node explosion
• Pruning techniques:
– Alpha-Beta pruning
– Analyze the best move first
– Killer-move heuristics
– Zero-move heuristics
• Iterative deepening depth-first search (IDDFS)
• Search tree extensions
• Transposition tables (position cache)
• Other challenges
• Endgame tablebases
• Demo
3. Complexity of a Chess Game
• 20 possible start moves, 20 possible replies, etc.
• 400 possible positions after 2 ply (half moves)
• 197 281 positions after 4 ply
• Roughly 10^13 positions after 10 ply (5 White moves and 5 Black moves)
• Exponential explosion!
• Approximately 40 legal moves in a typical position
• There exist about 10^120 possible chess games
4. Solving Chess, is it a myth?
Chess complexity space:
• The estimated number of possible chess games is 10^120 (Claude E. Shannon) – 1 followed by 120 zeroes!
• The estimated number of reachable chess positions is 10^47 (Shirish Chinchalkar, 1996)
• Modern GPUs perform 10^13 flops
• If we assume one million GPUs and 10 flops per position, we can calculate 10^18 positions per second
• It would still take us 1 600 000 000 000 000 000 000 (1.6 × 10^21) years to solve chess
Assuming Moore's law works in the future:
• Today's top supercomputers deliver 10^16 flops
• Assuming 100 operations per position yields 10^14 positions per second
• Doing retrograde analysis on supercomputers for 4 months, we can calculate 10^21 positions
• When will Moore's law allow us to reach 10^47 positions?
• Answer: in 128 years, around the year 2142!
http://chessgpgpu.blogspot.no/2013/06/solving-chess-facts-and-fiction.html
5. History of Computer Chess
• Chess was a good fit for computers:
– Clearly defined rules
– Game of complete information
– Easy to evaluate (judge) positions
– Search tree is neither too small nor too big
• 1950: Programming a Computer for Playing Chess (Claude Shannon)
• 1951: First chess playing program (on paper) (Alan Turing)
• 1958: First computer program that can play a complete chess game
• 1981: Cray Blitz wins a tournament in Mississippi and achieves master rating
• 1989: Deep Thought loses 0-2 against World Champion Garry Kasparov
• 1996: Deep Blue wins a game against Kasparov, but loses the match 2-4
• 1997: Upgraded Deep Blue wins 3.5-2.5 against Kasparov
• 2005: Hydra destroys GM Michael Adams 5.5-0.5
• 2006: World Champion Vladimir Kramnik loses 2-4 against Deep Fritz (PC chess engine)
• 2014: Magnus Carlsen launches the "Play Magnus" app on iOS, where anyone can play against a chess engine that emulates the World Champion's play at 21 different ages (5 to 25 years)
6. Chess Compared to Go
• Go is played on a 19x19 board where a new stone is placed on a free intersection each move (and never moved around)
• Go has a much higher branching factor (starting with 361 and slowly descending) and much more complicated leaf node evaluation
• For many years the best Go programs had amateur rating only
• In 2016 AlphaGo surprisingly beat Lee Sedol (9-dan professional) 4-1 using a combination of machine learning (deep neural networks) and the Monte Carlo tree search algorithm
• AlphaGo beat Ke Jie (ranked no. 1 in the world) 3-0 in 2017 and retired afterwards
7. Search Trees and Position Evaluation
• Search trees (nodes are positions, edges are legal chess moves)
• Leaf nodes are end positions which need to be evaluated (judged)
• A simple evaluator: checkmate? If not, count material
• Nodes are marked with a numeric evaluation value
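The "count material" evaluator mentioned above can be sketched in a few lines (an illustrative sketch; the board encoding and piece values are assumptions, not from the slides):

```python
# Minimal material evaluator: positive scores favor White.
# The board is a plain list of piece letters; uppercase = White, lowercase = Black.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Sum material from White's point of view."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# Example: White has an extra rook.
print(evaluate(["K", "R", "P", "k", "p"]))  # -> 5
```

Real engines add positional terms (king safety, mobility, pawn structure) on top of this material count.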
8. Minimax: The Basic Search Algorithm
• Minimax: Assume that both White and Black play the best moves. We maximize White's score
• Perform a depth-first search and evaluate the leaf nodes
• Choose the child node with the highest value if it is White to move
• Choose the child node with the lowest value if it is Black to move
• Branching factor is 40 in a typical chess position
(Diagram: game tree with alternating White/Black levels, ply 0 to 4)
9. NegaMax – “Simplified” Minimax
Minimax

int maxi( int depth ) {
    if ( depth == 0 ) return evaluate();
    int max = -∞;
    for ( all moves ) {
        score = mini( depth - 1 );
        if ( score > max ) max = score;
    }
    return max;
}

int mini( int depth ) {
    if ( depth == 0 ) return -evaluate();
    int min = +∞;
    for ( all moves ) {
        score = maxi( depth - 1 );
        if ( score < min ) min = score;
    }
    return min;
}

NegaMax

int negaMax( int depth ) {
    if ( depth == 0 ) return evaluate();
    int max = -∞;
    for ( all moves ) {
        score = -negaMax( depth - 1 );
        if ( score > max ) max = score;
    }
    return max;
}

NegaMax works because max(a, b) == -min(-a, -b).
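The pseudocode above can be made concrete on a hand-built game tree (a sketch; the toy tree and its leaf values are invented for illustration, standing in for real move generation):

```python
# NegaMax on an explicit toy tree: inner nodes are lists of child subtrees,
# leaves are static evaluations from the root player's point of view
# (with an even search depth, the root player is again the side to move at the leaves).
def nega_max(node, depth):
    if depth == 0 or not isinstance(node, list):
        return node  # leaf: static evaluation
    best = float("-inf")
    for child in node:
        score = -nega_max(child, depth - 1)
        best = max(best, score)
    return best

# Two root moves, each answered by two replies. The opponent picks the
# reply that is worst for us, so the root value is max(min(3,5), min(2,9)) = 3.
tree = [[3, 5], [2, 9]]
print(nega_max(tree, 2))  # -> 3
```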
10. Node explosion
➢ 10 M nodes per second (nps) is realistic for modern chess engines
➢ Modern engines routinely reach depths of 25-35 ply at tournament play
➢ But they only have a few minutes per move, so they should only be able to go 5-6 ply deep
➢ How do they then get to depth 25 so easily?

A typical middle-game position has 40 legal moves:

Depth  Node count         Time at 10M nodes/sec
1      40                 0.000004 s
2      1 600              0.00016 s
3      64 000             0.0064 s
4      2 560 000          0.256 s
5      102 400 000        10.24 s
6      4 096 000 000      6 min 49.6 s
7      163 840 000 000    4 h 33 min 4 s
8      6 553 600 000 000  7 d 14 h 2 min 40 s
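The table follows directly from b^d nodes at a fixed node rate (a quick self-check; b = 40 and 10^7 nodes per second are the slide's own numbers):

```python
# Reproduce the node-explosion table: nodes = 40**depth, time = nodes / 10M nps.
BRANCHING = 40
NPS = 10_000_000

def search_time_seconds(depth):
    """Seconds needed to visit every node of a full tree of the given depth."""
    return BRANCHING ** depth / NPS

print(search_time_seconds(5))  # -> 10.24
print(search_time_seconds(6))  # -> 409.6  (about 6 min 49.6 s)
```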
11. Pruning Techniques
• The complexity of searching d ply ahead is O(b*b*…*b) = O(b^d)
• With a branching factor (b) of 40 it is crucial to be able to prune the search tree
12. Alpha-Beta Pruning
“The position is so good for White (or Black) that the opponent with best play will not enter the variation that gives the position.”
• Use previously known max and min values to limit the search tree
• Alpha value: White is guaranteed this score or better (start value: -∞)
• Beta value: Black is guaranteed this score or less (start value: +∞)
• If Alpha becomes higher than Beta, then the position will never occur assuming best play
• If the search tree below is evaluated left to right, then we can skip the greyed-out sub trees
• Regardless of what values we get for the grey nodes, they will not influence the root node score
(Diagram: game tree with alternating White/Black levels, ply 0 to 4, pruned subtrees greyed out)
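In the negamax formulation the alpha and beta bounds are negated and swapped at each level, and a cutoff skips the remaining siblings (a sketch on the same kind of toy tree as before; not an engine-grade implementation):

```python
def alpha_beta(node, depth, alpha, beta):
    """NegaMax with alpha-beta pruning on an explicit toy tree."""
    if depth == 0 or not isinstance(node, list):
        return node  # leaf: static evaluation
    for child in node:
        score = -alpha_beta(child, depth - 1, -beta, -alpha)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent will never allow this line
    return alpha

INF = float("inf")
# Same tree and same value as plain negamax, but the "9" leaf is never visited:
# once the first move guarantees 3, the reply "2" refutes the second move.
print(alpha_beta([[3, 5], [2, 9]], 2, -INF, INF))  # -> 3
```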
13. Analyze the Best Move First
• Even with alpha-beta pruning, if we always start with the worst move, we still get O(b*b*…*b) = O(b^d)
• If we always start with the best move (also recursively), it can be shown that the complexity is O(b*1*b*1*b*1*…) = O(b^(d/2)) = O((√b)^d)
• We can double the search depth without using more resources
• Conclusion: it is very important to try to start with the strongest moves first
14. Killer-Move Heuristics
• Killer-move heuristics are based on the assumption that a strong move which gave a large pruning of a sub tree might also be a strong move in other nodes of the search tree
• Therefore we start with the killer moves in order to maximize search tree pruning
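A common realization (a sketch, not from the slides; the two-slot table and the move names are assumptions) keeps a couple of killer moves per ply and tries them first when ordering moves at that ply:

```python
# Keep up to two killer moves per ply: a quiet move that caused a beta
# cutoff is remembered and tried early at the same ply elsewhere in the tree.
MAX_PLY = 64
killers = [[None, None] for _ in range(MAX_PLY)]

def store_killer(ply, move):
    if killers[ply][0] != move:          # avoid storing duplicates
        killers[ply][1] = killers[ply][0]
        killers[ply][0] = move

def order_moves(moves, ply):
    """Put this ply's killer moves first, keep the rest in original order."""
    k = [m for m in killers[ply] if m in moves]
    return k + [m for m in moves if m not in k]

store_killer(3, "Nf5")  # "Nf5" refuted a sibling line at ply 3
print(order_moves(["a3", "h3", "Nf5", "Qd2"], 3))  # -> ['Nf5', 'a3', 'h3', 'Qd2']
```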
15. Zero-Move Heuristics
• Alpha-beta cutoff: “The position is so good for White (or Black) that the opponent with best play will avoid the variation resulting in that position”
• Zero-move heuristics are based on the fact that in most positions it is an advantage to be the first player to move
• Let the player (e.g. White) who has just made a move play another move (two moves in a row), and perform a shallower (2-3 ply less) and therefore cheaper search from that position
• If the shallower search gives a cutoff value (e.g. a bad score for White), it most likely means that the search tree can be pruned at this position without performing a deeper search, since even two moves in a row did not help
• Very effective pruning technique!
• Caveats: checks and endgames (where a player can be in zugzwang – “trekktvang” in Norwegian – meaning every move worsens the position)
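Inside an alpha-beta search the test looks roughly like this (a hedged sketch; the `search` callback, the zero-window probe and the reduction R = 2 are illustrative assumptions, and the real thing would first play a null move on the board):

```python
# Null-move (zero-move) pruning sketch: give the opponent the move "for free"
# and run a cheaper, reduced-depth search. If even that fails high, prune.
R = 2  # depth reduction, commonly 2-3 ply

def null_move_prune(search, depth, beta):
    """Return True if a reduced 'pass a move' search already refutes this node.

    `search(depth, alpha, beta)` stands in for the engine's real negamax
    search, called here from the opponent's point of view.
    """
    if depth <= R:                 # too shallow to reduce further
        return False
    score = -search(depth - 1 - R, -beta, -beta + 1)  # zero-window probe
    return score >= beta           # fail-high: two moves in a row didn't help

# Toy stand-in search that always scores the side to move 150 behind.
fake_search = lambda depth, alpha, beta: -150
print(null_move_prune(fake_search, 6, 100))  # -> True (150 >= 100)
```

As the slide notes, the probe must be switched off in check and in zugzwang-prone endgames, where passing would actually help.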
16. Iterative Deepening Depth-First Search (IDDFS)
• Since it is so important to evaluate the best move first, it might be worthwhile to execute a shallower search first and then use the resulting alpha/beta cutoff values as start values for a deeper search
• Since the majority of search nodes are on the lowest level in a balanced search tree, it is relatively cheap to do an extra shallower search
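The driving loop is simple (a sketch; the `search` callback and the time budget are stand-ins, and real engines also carry the previous iteration's best move into the next one for ordering):

```python
import time

def iterative_deepening(search, max_depth, time_limit_s):
    """Search depth 1, 2, 3, ... until the maximum depth or the time runs out.

    `search(depth)` stands in for a full alpha-beta search returning
    (best_move, score); earlier iterations seed move ordering for later ones.
    """
    deadline = time.monotonic() + time_limit_s
    best = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break                  # keep the last fully completed iteration
        best = search(depth)
    return best

# Toy search: pretend deeper searches refine the score.
result = iterative_deepening(lambda d: ("e2e4", 10 * d), 5, 1.0)
print(result)  # -> ('e2e4', 50)
```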
17. Search Tree Extensions
• PC programs today can compute 25-35 ply ahead (Deep Blue computed 12 ply against Kasparov in 1997; Hydra (64 nodes with FPGAs) computed at least 18 ply)
• It is important to extend the search in leaf nodes that are “unstable”
• Good search extensions include all moves that give check or capture a piece
• The longest search extensions are typically double the average length of the search tree!
18. Transposition Table
• The same position will commonly occur from different move orders
• All chess engines therefore have a transposition table (position cache)
• Implemented using a hash table with the chess position as key
• The engine doesn't have to evaluate large sub trees over and over again
• Chess engines typically use half of the available memory for the hash table – which proves how important it is
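The standard way to get a hashable position key is Zobrist hashing (a minimal sketch; real engines also hash side to move, castling rights and en passant, and pack depth, bounds and best move into each table entry):

```python
import random

# Zobrist hashing: one random 64-bit number per (piece, square) pair;
# a position's key is the XOR of the numbers for its occupied squares.
random.seed(42)                       # deterministic keys for the example
PIECES = "PNBRQKpnbrqk"
ZOBRIST = {(p, sq): random.getrandbits(64) for p in PIECES for sq in range(64)}

def position_key(position):
    """position: iterable of (piece, square). XOR makes the key order-independent."""
    key = 0
    for piece, square in position:
        key ^= ZOBRIST[(piece, square)]
    return key

table = {}  # key -> cached search result

# The same position reached via different move orders hashes identically,
# so a sub tree evaluated once can be reused.
a = position_key([("K", 4), ("k", 60), ("R", 0)])
b = position_key([("R", 0), ("k", 60), ("K", 4)])
print(a == b)  # -> True
```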
19. Other challenges
• Move generator (hardware / software)
– Hydra (64-node Xeon cluster, FPGA chips) computed 200 million positions per second, approximately the same as Deep Blue (on older ASIC chip sets)
– Hydra computed 18+ ply ahead while Deep Blue only managed 12 (Hydra prunes the search tree better)
– The Komodo 10 chess engine calculates 3-4 million moves/second on my Surface Book (Intel i7 @ 2.6 GHz with 3 cores) and computes 20+ ply in less than 5 seconds and 25+ ply in less than 30 seconds
• Efficient data structure for a chess board (0x88, bitboards)
• Opening library suited for a chess computer
• Position evaluation:
– Traditionally chess computers have done deep searches with a simple evaluation function
– But one of the best PC chess engines today, Rybka, sacrifices search depth for a complex position evaluation and better search heuristics
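The 0x88 board representation mentioned above uses a 128-entry array (16 files x 8 ranks, only files 0-7 real) so that off-board detection is a single bitwise test (a sketch of the well-known trick; the helper names are ours):

```python
# 0x88 board: a square index is rank*16 + file. Any index that steps off
# the real board has at least one of the bits in mask 0x88 set.
def square(rank, file):
    return rank * 16 + file

def on_board(sq):
    return (sq & 0x88) == 0

e4 = square(3, 4)                  # rank 3, file 4 (0-based)
print(on_board(e4))                # -> True
print(on_board(e4 + 1))            # -> True  (f4)
print(on_board(square(3, 7) + 1))  # -> False (stepping off the h-file)
```

This makes sliding-piece move generation cheap: keep adding a direction offset until `on_board` fails.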
20. Endgame Tablebases
• Chess engines play endgames with 3-7 pieces left on the board perfectly by looking up the best move in huge tables
• These endgame databases are called tablebases
• Retrograde analysis: tablebases are generated by starting with final positions (checkmate, stalemate or insufficient mating material (e.g. king vs. king)) and then computing backwards until all nodes in the search tree are marked as win, draw or loss
• Complex compression algorithms are used (Nalimov, Syzygy)
• The newer Syzygy compression format uses less than 200 GB for all endgames with up to 6 pieces (compared to over 1 TB for the Nalimov tablebases)
21. Lomonosov Tablebases
• All 7-piece endgames (except 6 pieces vs. a lone king) were calculated for the first time in 2013 on the Lomonosov supercomputer at Moscow State University
• Took 6 months to generate
• Needed 140 TB of storage
• Longest forced mate: White to mate in 545 moves!
• See http://chessok.com/?page_id=27966, http://tb7.chessok.com/
22. Demo
• Demo: ChessBase with the chess engines Komodo 10 and Stockfish 7
• Best open source UCI chess engine (and maybe best overall):
– Stockfish (stockfishchess.org)