CAPGEMINI
CHALLENGE NAME: SAILPOINT HACKATHON
PRESENTATION TOPIC: ARTIFICIAL INTELLIGENCE
By
B.SRUJANA
Artificial Intelligence
Adversarial Search
Games
• Multi-agent environments: any given agent must
consider the actions of other agents and how they
affect its own welfare.
• The unpredictability of these other agents can
introduce many possible contingencies.
• Environments may be competitive or cooperative.
• Competitive environments, in which the agents’
goals are in conflict, require adversarial search –
such problems are called games
What kind of games?
• Abstraction: to describe a game we must capture every
relevant aspect of the game, such as:
– Chess
– Tic-tac-toe
– …
• Accessible environments: Such games are characterized by
perfect information
• Search: game-playing then consists of a search through
possible game positions
• Unpredictable opponent: introduces uncertainty;
thus game-playing must deal with contingency problems
Slide adapted from Macskassy
Types of Games
Deterministic Games
• Many possible formalizations; one is:
– States: S (start at s0)
– Players: P = {1...N} (usually take turns)
– Actions: A (may depend on player / state)
– Transition function: S × A → S
– Terminal test: S → {t, f}
– Terminal utilities: S × P → ℝ
• A solution for a player is a policy: S → A
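The formalization above can be written down as a small interface. A minimal sketch; the class name `Game`, the tuple state encoding, and the toy one-move game are all illustrative, not from the slides:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Game:
    """One possible rendering of the slide's formalization."""
    initial_state: object     # s0
    players: List[int]        # P = {1...N}
    actions: Callable         # legal actions: S x P -> list of A
    result: Callable          # transition function: S x A -> S
    is_terminal: Callable     # terminal test: S -> {t, f}
    utility: Callable         # terminal utilities: S x P -> R

# Tiny illustrative game: one move, player 1 picks a number and scores it.
toy = Game(
    initial_state="start",
    players=[1],
    actions=lambda s, p: [1, 2, 3] if s == "start" else [],
    result=lambda s, a: ("done", a),
    is_terminal=lambda s: s != "start",
    utility=lambda s, p: s[1],
)
```

A policy for this game is then any function from states to one of these legal actions.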
Games vs. search problems
• “Unpredictable" opponent  solution is a strategy specifying
a move for every possible opponent reply
• Time limits  unlikely to find goal, must approximate
• Plan of attack:
– Computer considers possible lines of play (Babbage, 1846)
– Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
– Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948;
Shannon, 1950)
– First chess program (Turing, 1951)
– Machine learning to improve evaluation accuracy (Samuel, 1952-57)
– Pruning to allow deeper search (McCarthy, 1956)
Deterministic Single-Player?
• Deterministic, single player, perfect
information:
– Know the rules
– Know what actions do
– Know when you win
– E.g. Freecell, 8-Puzzle, Rubik’s cube
• … it’s just search!
Deterministic Two-Player
• E.g. tic-tac-toe, chess, checkers
• Zero-sum games
– One player maximizes result
– The other minimizes result
• Minimax search
– A state-space search tree
– Players alternate
– Each layer, or ply, consists of a round
of moves
– Choose move to position with highest
minimax value = best achievable utility
against best play
Two-Agent Games (1/2)
• Idealized Setting
– The actions of the agents are interleaved.
• Example
– Grid-Space World
– Two robots : “Black” and “White”
– Goal of Robots
• White : to be in the same cell with Black
• Black : to prevent this from happening
– After settling on a first move, the agent makes the
move, senses what the other agent does, and then
repeats the planning process in sense/plan/act
fashion.
Figure 12.2 Search Tree for the Moves of Two Robots
Two-Agent Games (2/2)
• two-agent, perfect information, zero-sum
games
• Two agents move in turn until either one of
them wins or the result is a draw.
• Each player has a complete model of the
environment and of its own and the other’s
possible actions and their effects.
Minimax Procedure (1/5)
• Two players: MAX and MIN
• Task: find a “best” move for MAX
• Assume that MAX moves first and that the two
players move alternately.
• MAX node
– nodes at even-numbered depths correspond to
positions in which it is MAX’s move next
• MIN node
– nodes at odd-numbered depths correspond to
positions in which it is MIN’s move next
Minimax Procedure (2/5)
• Complete search of most game graphs is impossible.
– Chess has about 10^40 nodes
• roughly 10^22 centuries to generate the complete search graph,
• assuming that a successor could be generated in 1/3 of a
nanosecond
• The universe is estimated to be on the order of 10^8 centuries old.
– Heuristic search techniques do not reduce the effective
branching factor sufficiently to be of much help.
• Can use breadth-first, depth-first, or heuristic
methods, except that the termination conditions must
be modified.
Minimax Procedure (3/5)
• Estimating the best first move
– apply a static evaluation function to the leaf nodes
– the function measures the “worth” of the leaf nodes
– the measurement is based on various features thought
to influence this worth
– It is customary in analyzing game trees to adopt the
convention
• game positions favorable to MAX cause the evaluation
function to have a positive value
• positions favorable to MIN cause the evaluation function to
have a negative value
• Values near zero correspond to game positions not
particularly favorable to either MAX or MIN.
Minimax Procedure (4/5)
• Extracting a good first move
– If MAX were to choose among the tip nodes of a search
tree, he would prefer the node having the largest
evaluation.
• The backed-up value of a MAX node that is the parent of
MIN tip nodes is therefore the maximum of the static
evaluations of the tip nodes.
– MIN would choose the node having the smallest
evaluation.
Minimax Procedure (5/5)
• After the parents of all tip nodes have been
assigned backed-up values, we back up values
another level.
– MAX would choose that successor MIN node with
the largest backed-up value
– MIN would choose that successor MAX node with
the smallest backed-up value.
– Continue to back up values, level by level from the
leaves, until the successors of the start node are
assigned backed-up values.
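The level-by-level backing-up just described is exactly depth-first minimax. A minimal sketch; the parameter names (`successors`, `evaluate`) are illustrative:

```python
def minimax(state, depth, is_max, successors, evaluate):
    """Back up values from the tips: a MAX node takes the largest
    backed-up value of its successors, a MIN node the smallest."""
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)          # static evaluation at a tip node
    values = [minimax(c, depth - 1, not is_max, successors, evaluate)
              for c in children]
    return max(values) if is_max else min(values)
```

On a toy tree with MAX to move at the root and leaf evaluations 3, 5, 2, 9, the two MIN nodes back up 3 and 2, and the root backs up max(3, 2) = 3.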
Example : Tic-Tac-Toe (1/4)
• MAX marks crosses and MIN marks circles, and it
is MAX’s turn to play first.
– With a depth bound of 2, conduct a breadth-first
search.
– Evaluation function e(p) of a position p:
• If p is not a win for either player,
e(p) = (no. of complete rows, columns, or diagonals that are
still open for MAX) − (no. of complete rows, columns, or
diagonals that are still open for MIN)
• If p is a win for MAX,
e(p) = ∞
• If p is a win for MIN,
e(p) = −∞
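This evaluation function is small enough to write out directly. A sketch, assuming a board is encoded as a 9-character string of 'X' (MAX), 'O' (MIN), and ' ' (an encoding of my choosing):

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def e(board):
    """e(p) = (lines still open for MAX) - (lines still open for MIN);
    a line is open for a player if the opponent has no mark in it.
    Wins return +/- infinity, as on the slide."""
    for line in LINES:
        marks = {board[i] for i in line}
        if marks == {"X"}:
            return float("inf")             # p is a win for MAX
        if marks == {"O"}:
            return float("-inf")            # p is a win for MIN
    open_max = sum(1 for line in LINES if all(board[i] != "O" for i in line))
    open_min = sum(1 for line in LINES if all(board[i] != "X" for i in line))
    return open_max - open_min
```

For example, the empty board evaluates to 8 − 8 = 0, and a lone X in the centre evaluates to 8 − 4 = 4.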
Example : Tic-Tac-Toe (2/4)
• First move
Example : Tic-Tac-Toe (3/4)
Example : Tic-Tac-Toe (4/4)
The Alpha-Beta Procedure (1/5)
• Only after tree generation is completed does
position evaluation begin ⇒ inefficient
• Remarkable reductions in the amount of search
needed are possible if perform tip-node
evaluations and calculate backed-up values
simultaneously with tree generation.
• After the node marked A is generated and
evaluated, there is no point in generating nodes B,
C, and D.
– MIN has A available and MIN could prefer nothing to A.
The Alpha-Beta Procedure (2/5)
• Alpha value
– depending on the backed-up values of the other successors
of the start node, the final backed-up value of the start
node may be greater than −1 (its value in the example), but it cannot be less
• Beta value
– depending on the static values of the rest of a node’s
successors, the final backed-up value of the node can be less
than −1, but it cannot be greater
• Note
– The alpha values of MAX nodes can never decrease.
– The beta values of MIN nodes can never increase.
The Alpha-Beta Procedure (3/5)
• Rules for discontinuing the search
1. Search can be discontinued below any MIN node
having a beta value less than or equal to the alpha
value of any of its MAX node ancestors. The final
backed-up value of this MIN node can be set to its
beta value.
2. Search can be discontinued below any MAX node
having an alpha value greater than or equal to the
beta value of any of its MIN node ancestors. The final
backed-up value of this MAX node can be set to its
alpha value.
The Alpha-Beta Procedure (4/5)
• How to compute alpha and beta values
– The alpha value of a MAX node is set equal to the current largest final
backed-up value of its successors.
– The beta value of a MIN node is set equal to the current smallest final
backed-up value of its successors.
• Cut-off
– Alpha cut-off
• search is discontinued under rule 1.
– Beta cut-off
• search is discontinued under rule 2.
• Alpha-Beta Procedure
– The whole process of keeping track of alpha and beta values and
making cut-offs when possible
The Alpha-Beta Procedure (5/5)
• Pseudocode
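A minimal Python sketch of the procedure, combining the rules and bookkeeping above (the `successors`/`evaluate` interface is illustrative):

```python
import math

def alphabeta(state, depth, alpha, beta, is_max, successors, evaluate):
    """Alpha-beta search. alpha is the current largest backed-up value
    available to a MAX ancestor, beta the current smallest available to
    a MIN ancestor; search below a node stops as soon as alpha >= beta."""
    children = successors(state)
    if depth == 0 or not children:
        return evaluate(state)
    if is_max:
        value = -math.inf
        for c in children:
            value = max(value, alphabeta(c, depth - 1, alpha, beta, False,
                                         successors, evaluate))
            alpha = max(alpha, value)   # alpha values never decrease
            if alpha >= beta:
                break                   # beta cut-off (rule 2)
        return value
    else:
        value = math.inf
        for c in children:
            value = min(value, alphabeta(c, depth - 1, alpha, beta, True,
                                         successors, evaluate))
            beta = min(beta, value)     # beta values never increase
            if beta <= alpha:
                break                   # alpha cut-off (rule 1)
        return value
```

On any tree this returns the same root value as plain minimax, while skipping the cut-off subtrees.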
Consider the following game tree in which the static scores (numbers in leaf
boxes) are all from the MAX player’s point of view.
a. What moves should the MAX player choose?
b. What nodes would not need to be examined using the alpha-beta algorithm,
assuming that nodes are examined in left-to-right order?
Search Efficiency (1/3)
• Notation
– b : number of successors of every node (except a tip
node), i.e. the branching factor
– d : depth of the tree
– b^d : number of tip nodes
– Suppose that an alpha-beta procedure generated
successors in the order of their true backed-up values.
• This order maximizes the number of cut-offs and thus
minimizes the number of tip nodes generated.
• Nd : this minimal number of tip nodes
Search Efficiency (2/3)
• [Slagle & Dixon 1969; Knuth & Moore 1975]
– The number of tip nodes of depth d that would be
generated by optimal alpha-beta search is about the
same as the number of tip nodes that would have
been generated at depth d / 2 without alpha-beta.
– With perfect ordering, the minimal number of tip nodes is
Nd = 2b^(d/2) − 1 for d even
Nd = b^((d+1)/2) + b^((d−1)/2) − 1 for d odd
– Alpha-beta, with perfect ordering, reduces the
effective branching factor from b to approximately √b.
• [Pearl 1982]
– With successors in random order, the average branching
factor is reduced to approximately b^(3/4).
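The Slagle–Dixon / Knuth–Moore count of tip nodes under perfect ordering can be checked numerically. A small sketch (function name is mine):

```python
def n_d(b, d):
    """Minimal number of tip nodes examined by alpha-beta with perfect
    move ordering, for branching factor b and depth d:
    2*b^(d/2) - 1 for d even, b^((d+1)/2) + b^((d-1)/2) - 1 for d odd."""
    if d % 2 == 0:
        return 2 * b ** (d // 2) - 1
    return b ** ((d + 1) // 2) + b ** ((d - 1) // 2) - 1
```

For b = 5, d = 4 this gives 49 tip nodes against b^d = 625 without pruning, i.e. roughly the b^(d/2) = 25 tips of a half-depth search, matching the effective branching factor of about √b.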
Search Efficiency (3/3)
• The most straightforward method for ordering
successor nodes is to use the static evaluation function.
– A side effect is a version of iterative
deepening:
• depending on the time resources available, search to
deeper plies can be aborted at any time, and the move
judged best by the last completed search can be made.
Other Important Matters (1/2)
• Various Difficulties
– Search might end at a position in which MAX (or
MIN) is able to make a great move.
– Make sure that a position is quiescent before
ending search at that position.
– Quiescent position
• Its static value is not much different from what its
backed-up value would be by looking a move or two
ahead.
Other Important Matters (2/2)
• Horizon Effect
– There can be situations in which disaster or
success lurks just beyond the search horizon.
• Both minimax and its alpha-beta extension
assume that the opposing player will always
make its best move.
– There are occasions in which this assumption is
inappropriate.
– Minimax would be inappropriate if one player had
some kind of model of the other player’s strategy.
Games of Chance (1/2)
• Backgammon
– MAX’s and MIN’s turns now each involve a throw of the die.
– Imagine that at each dice throw, a fictitious third player, DICE,
makes a move.
Games of Chance (2/2)
• Expectimaxing
– Back up the expected (average) value of the
successors’ values instead of a maximum or minimum.
– That is: back up the minimum of the successors’ values
at nodes where it is MIN’s move, the maximum at
nodes where it is MAX’s move, and the expected value
at nodes where it is DICE’s move.
• Introducing a chance move often makes the game
tree branch too much for effective searching.
– Important to have a good static evaluation function
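The three backup rules can be sketched over a small explicit tree; the tuple encoding of nodes here is mine, not the slides':

```python
def expectimax(node):
    """Back up values through MAX, MIN, and chance (DICE) nodes:
    maximum / minimum of successor values at MAX / MIN nodes,
    probability-weighted expectation at DICE nodes. Nodes are
    ('leaf', value), ('max', [children]), ('min', [children]),
    or ('dice', [(probability, child), ...])."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    if kind == "max":
        return max(expectimax(c) for c in node[1])
    if kind == "min":
        return min(expectimax(c) for c in node[1])
    return sum(p * expectimax(c) for p, c in node[1])  # DICE: expectation
```

For instance, a MAX node choosing between a fair coin flip over leaves 2 and 4 (expected value 3.0) and a sure leaf 2.5 backs up 3.0.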
Learning Evaluation Functions (1/2)
• TD-GAMMON
– plays Backgammon by
training a layered,
feedforward neural
network
– overall value of a
board position:
• v = p1 + 2p2 − p3 − 2p4,
where the pi are the network’s estimated probabilities of the
four possible outcomes (win, gammon win, loss, gammon loss)
Learning Evaluation Functions (2/2)
• Temporal difference training of the network
– accomplished during actual play
– vt and vt+1 : the network’s output estimates at times t and t + 1
– Wt : vector of all weights at time t
– weight update: Wt+1 = Wt + c(vt+1 − vt) ∂vt/∂Wt
– Training has been performed by having the
network play many hundreds of thousands of games
against itself.
– Performance of a well-trained network is at or near
championship level.
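A minimal sketch of this temporal-difference update for a *linear* evaluation function (TD-Gammon used a neural network; the linear form here is only to show the update rule, and all names are illustrative):

```python
def td_update(w, x_t, x_next, c=0.1, terminal_value=None):
    """One TD step for a linear value estimate v(x) = w . x:
    w <- w + c * (v_{t+1} - v_t) * dv_t/dw.
    For a linear v, the gradient dv_t/dw is just the feature vector x_t.
    At the end of a game, terminal_value (the actual outcome) replaces
    the next estimate v_{t+1}."""
    v_t = sum(wi * xi for wi, xi in zip(w, x_t))
    v_next = (terminal_value if terminal_value is not None
              else sum(wi * xi for wi, xi in zip(w, x_next)))
    delta = c * (v_next - v_t)
    return [wi + delta * xi for wi, xi in zip(w, x_t)]
```

Self-play training repeats this step over every position of every game, nudging each estimate toward the next one and, ultimately, toward the game's outcome.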
Games : State-of-the-Art
• Checkers: Chinook ended the 40-year reign of human world champion Marion
Tinsley in 1994. It used an endgame database defining perfect play for all
positions involving 8 or fewer pieces on the board, a total of
443,748,401,247 positions. Checkers is now solved!
• Chess: Deep Blue defeated human world champion Garry Kasparov in a six-
game match in 1997. Deep Blue examined 200 million positions per
second, used very sophisticated evaluation, and undisclosed methods for
extending some lines of search up to 40 ply. Current programs are even
better, if less historic.
• Othello: In 1997, Logistello defeated the human champion by six games to
none. Human champions refuse to compete against computers, which are
too good.
• Go: Human champions are beginning to be challenged by machines. In Go,
b > 300, so most programs use pattern knowledge bases to suggest
plausible moves, along with aggressive pruning.
• Backgammon: The neural-net learning program TD-Gammon is one of the
world’s top 3 players.
COSC 159 - Fundamentals of AI
Exercise
• Matchstick game
– Start randomly with 14-28 matchsticks
– A player can remove 1, 2, or 3 matchsticks
– The player removing the last matchstick loses
• Formulate this as an adversarial search
– What are the states?
– What are the termination states?
– What are the actions and their behavior?
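One possible formulation, as a sketch of answers to the three questions (the `(sticks_left, player_to_move)` state encoding and function names are my choices, not the only valid ones):

```python
# State: (sticks_left, player_to_move), with players 0 and 1.
def actions(state):
    """A player may remove 1, 2, or 3 sticks, but no more than remain."""
    sticks, player = state
    return [n for n in (1, 2, 3) if n <= sticks]

def result(state, n):
    """Remove n sticks and pass the turn to the other player."""
    sticks, player = state
    return (sticks - n, 1 - player)

def is_terminal(state):
    """Termination states: no matchsticks left."""
    return state[0] == 0

def utility(state, player):
    """The player who removed the last matchstick has just moved and
    loses, so in a terminal state the player now to move wins."""
    sticks, to_move = state
    return 1 if to_move == player else -1
```

The start states are (k, 0) for k drawn randomly from 14-28; plugging these functions into minimax solves the game.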
Thank You
More Related Content

Similar to Capgemini 1

Two player games
Two player gamesTwo player games
Two player games
Subash Chandra Pakhrin
 
AI subject - Game Theory and cps ppt pptx
AI subject  - Game Theory and cps ppt pptxAI subject  - Game Theory and cps ppt pptx
AI subject - Game Theory and cps ppt pptx
nizmishaik1
 
Adversarial search
Adversarial searchAdversarial search
Adversarial search
Shiwani Gupta
 
cs-171-07-Games and Adversarila Search.ppt
cs-171-07-Games and Adversarila Search.pptcs-171-07-Games and Adversarila Search.ppt
cs-171-07-Games and Adversarila Search.ppt
Samiksha880257
 
Minmax and alpha beta pruning.pptx
Minmax and alpha beta pruning.pptxMinmax and alpha beta pruning.pptx
Minmax and alpha beta pruning.pptx
PriyadharshiniG41
 
Game playing (tic tac-toe), andor graph
Game playing (tic tac-toe), andor graphGame playing (tic tac-toe), andor graph
Game playing (tic tac-toe), andor graph
Syed Zaid Irshad
 
GamePlaying.ppt
GamePlaying.pptGamePlaying.ppt
GamePlaying.ppt
VihaanN2
 
AI3391 Artificial Intelligence UNIT III Notes_merged.pdf
AI3391 Artificial Intelligence UNIT III Notes_merged.pdfAI3391 Artificial Intelligence UNIT III Notes_merged.pdf
AI3391 Artificial Intelligence UNIT III Notes_merged.pdf
Asst.prof M.Gokilavani
 
9SearchAdversarial (1).pptx
9SearchAdversarial (1).pptx9SearchAdversarial (1).pptx
9SearchAdversarial (1).pptx
umairshams6
 
Topic - 6 (Game Playing).ppt
Topic - 6 (Game Playing).pptTopic - 6 (Game Playing).ppt
Topic - 6 (Game Playing).ppt
SabrinaShanta2
 
Games
GamesGames
AI_unit3.pptx
AI_unit3.pptxAI_unit3.pptx
AI_unit3.pptx
G1719HarshalDafade
 
adversial search.pptx
adversial search.pptxadversial search.pptx
adversial search.pptx
KalaiarasiRaja
 
Unit_I_Introduction(Part_III).ppt
Unit_I_Introduction(Part_III).pptUnit_I_Introduction(Part_III).ppt
Unit_I_Introduction(Part_III).ppt
ganesh15478
 
AI.ppt
AI.pptAI.ppt
AI.ppt
ArghyaGayen2
 
Artificial intelligence games
Artificial intelligence gamesArtificial intelligence games
Artificial intelligence games
Sujithmlamthadam
 
Alpha beta
Alpha betaAlpha beta
Alpha beta
sabairshad4
 
Chess engine presentation
Chess engine presentationChess engine presentation
Chess engine presentation
TanushreeSharma34
 

Similar to Capgemini 1 (20)

Two player games
Two player gamesTwo player games
Two player games
 
AI subject - Game Theory and cps ppt pptx
AI subject  - Game Theory and cps ppt pptxAI subject  - Game Theory and cps ppt pptx
AI subject - Game Theory and cps ppt pptx
 
Adversarial search
Adversarial searchAdversarial search
Adversarial search
 
cs-171-07-Games and Adversarila Search.ppt
cs-171-07-Games and Adversarila Search.pptcs-171-07-Games and Adversarila Search.ppt
cs-171-07-Games and Adversarila Search.ppt
 
Minmax and alpha beta pruning.pptx
Minmax and alpha beta pruning.pptxMinmax and alpha beta pruning.pptx
Minmax and alpha beta pruning.pptx
 
Games.4
Games.4Games.4
Games.4
 
Game playing (tic tac-toe), andor graph
Game playing (tic tac-toe), andor graphGame playing (tic tac-toe), andor graph
Game playing (tic tac-toe), andor graph
 
GamePlaying.ppt
GamePlaying.pptGamePlaying.ppt
GamePlaying.ppt
 
AI3391 Artificial Intelligence UNIT III Notes_merged.pdf
AI3391 Artificial Intelligence UNIT III Notes_merged.pdfAI3391 Artificial Intelligence UNIT III Notes_merged.pdf
AI3391 Artificial Intelligence UNIT III Notes_merged.pdf
 
9SearchAdversarial (1).pptx
9SearchAdversarial (1).pptx9SearchAdversarial (1).pptx
9SearchAdversarial (1).pptx
 
Topic - 6 (Game Playing).ppt
Topic - 6 (Game Playing).pptTopic - 6 (Game Playing).ppt
Topic - 6 (Game Playing).ppt
 
Games
GamesGames
Games
 
AI_unit3.pptx
AI_unit3.pptxAI_unit3.pptx
AI_unit3.pptx
 
adversial search.pptx
adversial search.pptxadversial search.pptx
adversial search.pptx
 
Unit_I_Introduction(Part_III).ppt
Unit_I_Introduction(Part_III).pptUnit_I_Introduction(Part_III).ppt
Unit_I_Introduction(Part_III).ppt
 
AI.ppt
AI.pptAI.ppt
AI.ppt
 
Ai
AiAi
Ai
 
Artificial intelligence games
Artificial intelligence gamesArtificial intelligence games
Artificial intelligence games
 
Alpha beta
Alpha betaAlpha beta
Alpha beta
 
Chess engine presentation
Chess engine presentationChess engine presentation
Chess engine presentation
 

More from berasrujana

Network programming pdf
Network programming pdfNetwork programming pdf
Network programming pdf
berasrujana
 
Topic : Shared memory
Topic : Shared memoryTopic : Shared memory
Topic : Shared memory
berasrujana
 
Distributed computing file
Distributed computing fileDistributed computing file
Distributed computing file
berasrujana
 
Kairos aarohan
Kairos  aarohanKairos  aarohan
Kairos aarohan
berasrujana
 
Atm using fingerprint
Atm using fingerprintAtm using fingerprint
Atm using fingerprint
berasrujana
 
Big data seminor
Big data seminorBig data seminor
Big data seminor
berasrujana
 

More from berasrujana (6)

Network programming pdf
Network programming pdfNetwork programming pdf
Network programming pdf
 
Topic : Shared memory
Topic : Shared memoryTopic : Shared memory
Topic : Shared memory
 
Distributed computing file
Distributed computing fileDistributed computing file
Distributed computing file
 
Kairos aarohan
Kairos  aarohanKairos  aarohan
Kairos aarohan
 
Atm using fingerprint
Atm using fingerprintAtm using fingerprint
Atm using fingerprint
 
Big data seminor
Big data seminorBig data seminor
Big data seminor
 

Recently uploaded

block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
Divya Somashekar
 
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang,  ICLR 2024, MLILAB, KAIST AI.pdfJ.Yang,  ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
MLILAB
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
Massimo Talia
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
Intella Parts
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
Osamah Alsalih
 
LIGA(E)11111111111111111111111111111111111111111.ppt
LIGA(E)11111111111111111111111111111111111111111.pptLIGA(E)11111111111111111111111111111111111111111.ppt
LIGA(E)11111111111111111111111111111111111111111.ppt
ssuser9bd3ba
 
Quality defects in TMT Bars, Possible causes and Potential Solutions.
Quality defects in TMT Bars, Possible causes and Potential Solutions.Quality defects in TMT Bars, Possible causes and Potential Solutions.
Quality defects in TMT Bars, Possible causes and Potential Solutions.
PrashantGoswami42
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Teleport Manpower Consultant
 
ethical hacking in wireless-hacking1.ppt
ethical hacking in wireless-hacking1.pptethical hacking in wireless-hacking1.ppt
ethical hacking in wireless-hacking1.ppt
Jayaprasanna4
 
Democratizing Fuzzing at Scale by Abhishek Arya
Democratizing Fuzzing at Scale by Abhishek AryaDemocratizing Fuzzing at Scale by Abhishek Arya
Democratizing Fuzzing at Scale by Abhishek Arya
abh.arya
 
ethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.pptethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.ppt
Jayaprasanna4
 
Planning Of Procurement o different goods and services
Planning Of Procurement o different goods and servicesPlanning Of Procurement o different goods and services
Planning Of Procurement o different goods and services
JoytuBarua2
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
Pratik Pawar
 
Courier management system project report.pdf
Courier management system project report.pdfCourier management system project report.pdf
Courier management system project report.pdf
Kamal Acharya
 
WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234
AafreenAbuthahir2
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
R&R Consult
 
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdfWater Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation & Control
 
Event Management System Vb Net Project Report.pdf
Event Management System Vb Net  Project Report.pdfEvent Management System Vb Net  Project Report.pdf
Event Management System Vb Net Project Report.pdf
Kamal Acharya
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
obonagu
 
Automobile Management System Project Report.pdf
Automobile Management System Project Report.pdfAutomobile Management System Project Report.pdf
Automobile Management System Project Report.pdf
Kamal Acharya
 

Recently uploaded (20)

block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
 
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang,  ICLR 2024, MLILAB, KAIST AI.pdfJ.Yang,  ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
 
LIGA(E)11111111111111111111111111111111111111111.ppt
LIGA(E)11111111111111111111111111111111111111111.pptLIGA(E)11111111111111111111111111111111111111111.ppt
LIGA(E)11111111111111111111111111111111111111111.ppt
 
Quality defects in TMT Bars, Possible causes and Potential Solutions.
Quality defects in TMT Bars, Possible causes and Potential Solutions.Quality defects in TMT Bars, Possible causes and Potential Solutions.
Quality defects in TMT Bars, Possible causes and Potential Solutions.
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
 
ethical hacking in wireless-hacking1.ppt
ethical hacking in wireless-hacking1.pptethical hacking in wireless-hacking1.ppt
ethical hacking in wireless-hacking1.ppt
 
Democratizing Fuzzing at Scale by Abhishek Arya
Democratizing Fuzzing at Scale by Abhishek AryaDemocratizing Fuzzing at Scale by Abhishek Arya
Democratizing Fuzzing at Scale by Abhishek Arya
 
ethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.pptethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.ppt
 
Planning Of Procurement o different goods and services
Planning Of Procurement o different goods and servicesPlanning Of Procurement o different goods and services
Planning Of Procurement o different goods and services
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
 
Courier management system project report.pdf
Courier management system project report.pdfCourier management system project report.pdf
Courier management system project report.pdf
 
WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
 
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdfWater Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdf
 
Event Management System Vb Net Project Report.pdf
Event Management System Vb Net  Project Report.pdfEvent Management System Vb Net  Project Report.pdf
Event Management System Vb Net Project Report.pdf
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
 
Automobile Management System Project Report.pdf
Automobile Management System Project Report.pdfAutomobile Management System Project Report.pdf
Automobile Management System Project Report.pdf
 

Capgemini 1

  • 1. [Challenge/Hackath on Name] [Presentation Topic] CAPGEMINI CHALLENGE NAME:SAIL POINT HACKTHON PRESENTATION TOPIC:ARTIFICIAL INTELLIGENCE By B.SRUJANA
  • 2. Brief Synopsis Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreetLorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreetLorem ipsum Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh
  • 4. Games • Multi agent environments : any given agent will need to consider the actions of other agents and how they affect its own welfare. • The unpredictability of these other agents can introduce many possible contingencies • There could be competitive or cooperative environments • Competitive environments, in which the agent’s goals are in conflict require adversarial search – these problems are called as games
  • 5. What kind of games? • Abstraction: To describe a game we must capture every relevant aspect of the game. Such as: – Chess – Tic-tac-toe – … • Accessible environments: Such games are characterized by perfect information • Search: game-playing then consists of a search through possible game positions • Unpredictable opponent: introduces uncertainty thus game-playing must deal with contingency problems Slide adapted from Macskassy
  • 7. Deterministic Games • Many possible formalizations, one is: – States: S (start at s0) – Players: P={1...N} (usually take turns) – Actions: A (may depend on player / state) – Transition Function: SxA →S – Terminal Test: S → {t,f} – Terminal Utilities: SxP → R • Solution for a player is a policy: S → A
  • 8. Games vs. search problems • “Unpredictable" opponent  solution is a strategy specifying a move for every possible opponent reply • Time limits  unlikely to find goal, must approximate • Plan of attack: – Computer considers possible lines of play (Babbage, 1846) – Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944) – Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950) – First chess program (Turing, 1951) – Machine learning to improve evaluation accuracy (Samuel, 1952-57) – Pruning to allow deeper search (McCarthy, 1956)
  • 9. Deterministic Single-Player? • Deterministic, single player, perfect information: – Know the rules – Know what actions do – Know when you win – E.g. Freecell, 8-Puzzle, Rubik’s cube • … it’s just search! Slide adapted from Macskassy
  • 10. Deterministic Two-Player • E.g. tic-tac-toe, chess, checkers • Zero-sum games – One player maximizes result – The other minimizes result • Minimax search – A state-space search tree – Players alternate – Each layer, or ply, consists of a round of moves – Choose move to position with highest minimax value = best achievable utility against best play Slide adapted from Macskassy
• 11. 11 Two-Agent Games (1/2) • Idealized Setting – The actions of the agents are interleaved. • Example – Grid-Space World – Two robots : “Black” and “White” – Goal of Robots • White : to be in the same cell as Black • Black : to prevent this from happening – After settling on a first move, the agent makes the move, senses what the other agent does, and then repeats the planning process in sense/plan/act fashion.
  • 12. 12 Figure 12.2 Search Tree for the Moves of Two Robots
  • 13. 13 Two-Agent Games (2/2) • two-agent, perfect information, zero-sum games • Two agents move in turn until either one of them wins or the result is a draw. • Each player has a complete model of the environment and of its own and the other’s possible actions and their effects.
  • 14. 14 Minimax Procedure (1/5) • Two player : MAX and MIN • Task : find a “best” move for MAX • Assume that MAX moves first, and that the two players move alternately. • MAX node – nodes at even-numbered depths correspond to positions in which it is MAX’s move next • MIN node – nodes at odd-numbered depths correspond to positions in which it is MIN’s move next
• 15. 15 Minimax Procedure (2/5) • Complete search of most game graphs is impossible. – For chess, about 10^40 nodes • 10^22 centuries to generate the complete search graph • assuming that a successor could be generated in 1/3 of a nanosecond • The universe is estimated to be on the order of 10^8 centuries old. – Heuristic search techniques do not reduce the effective branching factor sufficiently to be of much help. • Can use either breadth-first, depth-first, or heuristic methods, except that the termination conditions must be modified.
  • 16. 16 Minimax Procedure (3/5) • Estimate of the best-first move – applying a static evaluation function to the leaf nodes – measure the “worth” of the leaf nodes. – The measurement is based on various features thought to influence this worth. – It is customary in analyzing game trees to adopt the convention • game positions favorable to MAX cause the evaluation function to have a positive value • positions favorable to MIN cause the evaluation function to have negative value • Values near zero correspond to game positions not particularly favorable to either MAX or MIN.
• 17. 17 Minimax Procedure (4/5) • Good first move extracted – If MAX were to choose among the tip nodes of a search tree, he would prefer the node having the largest evaluation. • The backed-up value of a MAX node parent of MIN tip nodes is equal to the maximum of the static evaluations of the tip nodes. – MIN would choose the node having the smallest evaluation.
• 18. 18 Minimax Procedure (5/5) • After the parents of all tip nodes have been assigned backed-up values, we back up values another level. – MAX would choose the successor MIN node with the largest backed-up value – MIN would choose the successor MAX node with the smallest backed-up value. – Continue to back up values, level by level from the leaves, until the successors of the start node are assigned backed-up values.
• 19. 19 Example : Tic-Tac-Toe (1/4) • MAX marks crosses and MIN marks circles, and it is MAX’s turn to play first. – With a depth bound of 2, conduct a breadth-first search – evaluation function e(p) of a position p • If p is not a win for either player, e(p) = (no. of complete rows, columns, or diagonals that are still open for MAX) − (no. of complete rows, columns, or diagonals that are still open for MIN) • If p is a win for MAX, e(p) = ∞ • If p is a win for MIN, e(p) = −∞
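The evaluation function e(p) above can be sketched directly, with MAX playing 'X' and MIN playing 'O'; the board encoding (a 9-character string, '.' for empty) is an assumption for illustration.

```python
# Static evaluation for tic-tac-toe:
# e(p) = (lines still open for MAX) - (lines still open for MIN),
# with +/- infinity for won positions.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def evaluate(board):
    for a, b, c in LINES:                   # wins dominate everything else
        trio = board[a] + board[b] + board[c]
        if trio == 'XXX':
            return float('inf')
        if trio == 'OOO':
            return float('-inf')
    open_x = sum(1 for a, b, c in LINES
                 if 'O' not in (board[a], board[b], board[c]))
    open_o = sum(1 for a, b, c in LINES
                 if 'X' not in (board[a], board[b], board[c]))
    return open_x - open_o

# X in the center leaves all 8 lines open for X but only 4 for O:
print(evaluate('....X....'))  # -> 4
```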
  • 20. 20 Example : Tic-Tac-Toe (2/4) • First move
• 23. 23 The Alpha-Beta Procedure (1/5) • Only after tree generation is completed does position evaluation begin ⇒ inefficient • Remarkable reductions in the amount of search needed are possible if we perform tip-node evaluations and calculate backed-up values simultaneously with tree generation. • After the node marked A is generated and evaluated, there is no point in generating nodes B, C, and D. – MIN has A available and could prefer nothing to A.
  • 24. 24 The Alpha-Beta Procedure (2/5) • Alpha value – depending on the backed-up values of the other successors of the start node, the final backed-up value of the start node may be greater than -1, but it cannot be less • Beta value – depending on the static values of the rest of node successors, the final backed-up value of node can be less than -1, but it cannot be greater • Note – The alpha values of MAX nodes can never decrease. – The beta values of MIN nodes can never increase.
  • 25. 25 The Alpha-Beta Procedure (3/5) • Rules for discontinuing the search 1. Search can be discontinued below any MIN node having a beta value less than or equal to the alpha value of any of its MAX node ancestors. The final backed-up value of this MIN node can be set to its beta value. 2. Search can be discontinued below any MAX node having an alpha value greater than or equal to the beta value of any of its MIN node ancestors. The final backed-up value of this MAX node can be set to its alpha value.
  • 26. 26 The Alpha-Beta Procedure (4/5) • How to compute alpha and beta values – The alpha value of a MAX node is set equal to the current largest final backed-up value of its successors. – The beta value of a MIN node is set equal to the current smallest final backed-up value of its successors. • Cut-off – Alpha cut-off • search is discontinued under rule 1. – Beta cut-off • search is discontinued under rule 2. • Alpha-Beta Procedure – The whole process of keeping track of alpha and beta values and making cut-offs when possible
  • 27. 27 The Alpha-Beta Procedure (5/5) • Pseudocode
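The pseudocode referred to above is not reproduced in this text export; the following is a standard alpha-beta sketch (not necessarily the original slide's version), using the same assumed nested-list tree representation as the minimax examples.

```python
# Alpha-beta search: track alpha (best value guaranteed to MAX so far)
# and beta (best value guaranteed to MIN so far) while generating the
# tree, and cut off search when they cross.
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):          # leaf: static evaluation
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)       # alpha values never decrease
            if alpha >= beta:               # beta cut-off (rule 2)
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)         # beta values never increase
            if beta <= alpha:               # alpha cut-off (rule 1)
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # -> 3
```

In the example tree, once the first MIN subtree backs up 3, the second subtree is cut off as soon as the leaf 2 is seen: MIN could hold MAX to at most 2 there, which MAX would never prefer to the 3 already available.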
• 29. 29 Consider the following game tree in which the static scores (numbers in leaf boxes) are all from the MAX player’s point of view. a. What move should the MAX player choose? b. What nodes would not need to be examined using the alpha-beta algorithm, assuming that nodes are examined in left-to-right order?
• 30. 30 Search Efficiency (1/3) • Notation – d : depth of tree – b : number of successors of every node (except a tip node), i.e., the branching factor – b^d : number of tip nodes – Suppose that an alpha-beta procedure generated successors in the order of their true backed-up values. • This order maximizes the number of cut-offs and thus minimizes the number of tip nodes generated. – N_d : this minimal number of tip nodes
• 31. 31 Search Efficiency (2/3) • [Slagle & Dixon 1969, Knuth & Moore 1975] – The number of tip nodes of depth d that would be generated by optimal alpha-beta search is about the same as the number of tip nodes that would have been generated at depth d/2 without alpha-beta: N_d = 2b^(d/2) − 1 for d even, and N_d = b^((d+1)/2) + b^((d−1)/2) − 1 for d odd. – Alpha-beta, with perfect ordering, reduces the effective branching factor from b to approximately √b. • [Pearl 1982] – The average branching factor is reduced to approximately b^(3/4).
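As a quick numeric check of the perfect-ordering result, the formula can be evaluated directly (symbols b and d as defined on the notation slide):

```python
# Minimal tip-node count N_d under optimal alpha-beta move ordering,
# versus the b^d tip nodes plain minimax would examine.
def min_tip_nodes(b, d):
    if d % 2 == 0:
        return 2 * b ** (d // 2) - 1
    return b ** ((d + 1) // 2) + b ** ((d - 1) // 2) - 1

b, d = 10, 4
print(b ** d)               # plain minimax: 10000 tip nodes
print(min_tip_nodes(b, d))  # optimal alpha-beta: 199 tip nodes
```

With b = 10 and d = 4, optimal ordering examines 199 tip nodes instead of 10,000, roughly what an unpruned search to depth d/2 would cost (b^(d/2) = 100).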
• 32. 32 Search Efficiency (3/3) • The most straightforward method for ordering successor nodes – to use the static evaluation function. – Side effect of using a version of iterative deepening • Depending on the time resources available, search to deeper plies can be aborted at any time, and the move judged best by the last completed search can be made.
  • 33. 33 Other Important Matters (1/2) • Various Difficulties – Search might end at a position in which MAX (or MIN) is able to make a great move. – Make sure that a position is quiescent before ending search at that position. – Quiescent position • Its static value is not much different from what its backed-up value would be by looking a move or two ahead.
• 34. 34 Other Important Matters (2/2) • Horizon Effect – There can be situations in which disaster or success lurks just beyond the search horizon. • Both minimax and its alpha-beta extension assume that the opposing player will always make its best move. – There are occasions in which this assumption is inappropriate. – Minimax would be inappropriate if one player had some kind of model of the other player’s strategy.
• 35. 35 Games of Chance (1/2) • Backgammon – MAX’s and MIN’s turns now each involve a throw of the dice. – Imagine that at each dice throw, a fictitious third player, DICE, makes a move.
• 36. 36 Games of Chance (2/2) • Expectimaxing – Back up the expected (average) value of the successors instead of a maximum or minimum. – Back up the minimum of the successor values at nodes where it is MIN’s move, the maximum at nodes where it is MAX’s move, and the expected value at nodes where it is DICE’s move. • Introducing a chance move often makes the game tree branch too much for effective searching. – Important to have a good static evaluation function
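The expectimax backup can be sketched as follows; the tagged-tuple node representation ('max' / 'min' / 'dice' plus a list of children) and the uniform chance distribution are assumptions for illustration.

```python
# Expectimax: max at MAX nodes, min at MIN nodes, and the expected
# (average) value at DICE (chance) nodes; leaves are static values.
def expectimax(node):
    if not isinstance(node, tuple):       # leaf
        return node
    kind, children = node
    values = [expectimax(c) for c in children]
    if kind == 'max':
        return max(values)
    if kind == 'min':
        return min(values)
    # chance node: equally likely outcomes assumed for simplicity
    return sum(values) / len(values)

# MAX chooses between a sure 3 and a fair gamble between 0 and 10;
# the gamble's expected value is 5, so MAX takes the gamble.
tree = ('max', [3, ('dice', [0, 10])])
print(expectimax(tree))  # -> 5.0
```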
  • 37. 37 Learning Evaluation Functions (1/2) • TD-GAMMON – play Backgammon by training a layered, feedforward neural network – overall value of a board position • v = p1 + 2p2 - p3 -2p4
• 38. 38 Learning Evaluation Functions (2/2) • Temporal difference training of the network – accomplished during actual play – v_{t+1} : the estimate at time t + 1 – W_t : vector of all weights at time t – Weight update: W_{t+1} = W_t + c (v_{t+1} − v_t) ∂v_t/∂W_t – Training has been performed by having the network play many hundreds of thousands of games against itself. – Performance of a well-trained network is at or near championship level.
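The temporal-difference update above can be sketched for a linear value function v_t = W · x_t, so that the gradient ∂v_t/∂W is simply the feature vector x_t; this is an illustrative simplification of TD-Gammon's neural network, not its actual architecture.

```python
# TD weight update: W_{t+1} = W_t + c (v_{t+1} - v_t) * dv_t/dW,
# with dv_t/dW = x_t for a linear value function.
import numpy as np

def td_update(W, x_t, v_t, v_next, c=0.1):
    return W + c * (v_next - v_t) * x_t

W = np.zeros(4)
x = np.array([1.0, 2.0, 0.0, -1.0])   # features of the current position
v_t = W @ x                            # current estimate (0.0)
v_next = 1.0                           # estimate after the next move
W = td_update(W, x, v_t, v_next)
print(W)  # weights move toward explaining the observed value change
```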
• 39. Games : State-of-the-Art • Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. Used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions. Checkers is now solved! • Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue examined 200 million positions per second, used very sophisticated evaluation and undisclosed methods for extending some lines of search up to 40 ply. Current programs are even better, if less historic. • Othello: In 1997, Logistello defeated the human champion by six games to none. Human champions refuse to compete against computers, which are too good. • Go: Human champions are beginning to be challenged by machines. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves, along with aggressive pruning. • Backgammon: Neural-net learning program TD-Gammon is one of the world’s top 3 players.
  • 40. COSC 159 - Fundamentals of AI 40 Exercise • Matchstick game – Start randomly with 14-28 matchsticks – Player can remove 1,2,or 3 matchsticks – Player removing last matchstick loses • Formulate this as an adversarial search – What are the states? – What are the termination states? – What are the actions and their behavior?
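As a hint for the exercise, one possible formalization is sketched below; the state representation (remaining sticks, player to move) is an assumption, not a prescribed answer.

```python
# Matchstick game as an adversarial search problem.
# State: (sticks remaining, player to move), players 0 and 1.
def actions(state):
    sticks, _ = state
    return [n for n in (1, 2, 3) if n <= sticks]   # remove 1, 2, or 3

def result(state, n):
    sticks, player = state
    return (sticks - n, 1 - player)                # other player moves next

def is_terminal(state):
    return state[0] == 0                           # no sticks left

def utility(state, player):
    # The player who removed the last stick loses, so at a terminal
    # state the player *to move* is the winner.
    _, to_move = state
    return 1 if to_move == player else -1

s = (14, 0)                                        # start with 14 sticks
print(is_terminal(s), actions(s))  # -> False [1, 2, 3]
```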