Unit 1.ppt
2. The branch of computer science that is
concerned with the automation of intelligent
behaviour.
Views of AI fall into four categories:
Thinking humanly | Thinking rationally
Acting humanly | Acting rationally
The textbook advocates "acting rationally"
3. Foundations of AI
Philosophy: logic, philosophy of mind, science,
mathematics.
Mathematics: logic, probability theory, theory
of computability
Psychology: behaviorism, cognitive
psychology
Computer Science & Engineering: hardware,
algorithms, computational complexity theory
Linguistics: theory of grammar, syntax,
semantics
4. Turing (1950) "Computing machinery and intelligence":
"Can machines think?" "Can machines behave intelligently?"
Operational test for intelligent behavior: the Imitation Game
Predicted that by 2000, a machine might have a 30% chance of
fooling a lay person for 5 minutes
Anticipated all major arguments against AI in the following 50 years
Suggested major components of AI: knowledge, reasoning,
language understanding, learning
5. Thinking humanly: cognitive modelling
Requires scientific theories of internal activities of
the brain
How to validate? Requires
1) Predicting and testing behavior of human subjects
(top-down)
2) Direct identification from neurological data (bottom-up)
Both approaches (roughly, Cognitive Science and
Cognitive Neuroscience) are now distinct from AI
6. Thinking rationally: "laws of thought"
Aristotle: what are correct arguments/thought
processes?
Several Greek schools developed various forms of
logic: notation and rules of derivation for thoughts
Direct line through mathematics and philosophy to
modern AI
Problems:
1. Not all intelligent behavior is mediated by logical
deliberation
2. What is the purpose of thinking? What thoughts
should I have?
7. Rational behavior: doing the right thing
The right thing: that which is expected to
maximize goal achievement, given the
available information
Doesn't necessarily involve thinking – e.g.,
blinking reflex – but thinking should be in
the service of rational action
8. An agent is an entity that perceives and acts
This course is about designing rational agents
Abstractly, an agent is a function from percept histories to
actions:
f: P* → A
For any given class of environments and tasks, we seek
the agent (or class of agents) with the best performance
Caveat: computational limitations make perfect rationality
unachievable
→ design the best program for the given machine resources
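For illustration, here is a minimal Python sketch (ours, not from the slides) of this abstract mapping f: P* → A: the agent maps the percept history seen so far to an action, here via a tiny hypothetical lookup table.

```python
# A minimal sketch (ours) of the abstract mapping f: P* -> A:
# the agent maps the percept history seen so far to an action.

def table_driven_agent(percepts, table):
    """Look up the action for the complete percept sequence so far."""
    return table.get(tuple(percepts))

# Hypothetical one-step table entries, for illustration only:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
}
history = [("A", "Dirty")]
print(table_driven_agent(history, table))  # -> Suck
```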
9. Intelligent Agents
What is an agent ?
An agent is anything that perceives its
environment through sensors and acts upon
that environment through actuators
Example:
Human is an agent
A robot is also an agent with cameras and motors
A thermostat detecting room temperature.
12. Simple Terms
Percept
Agent’s perceptual inputs at any given instant
Percept sequence
Complete history of everything that the agent
has ever perceived.
13. Agent function & program
Agent’s behavior is mathematically
described by
Agent function
A function mapping any given percept
sequence to an action
Practically it is described by
An agent program
The real implementation
16. Program implements the agent
function
function Reflex-Vacuum-Agent([location, status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
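This pseudocode translates directly into a short program; below is a Python rendering of the same agent (the (location, status) percept encoding is our assumption).

```python
# A direct Python rendering of the Reflex-Vacuum-Agent pseudocode above;
# the (location, status) percept encoding is our assumption.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```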
17. Concept of Rationality
Rational agent
One that does the right thing
= every entry in the table for the agent
function is correct (rational).
What is correct?
The actions that cause the agent to be most
successful
So we need ways to measure success.
18. Performance measure
Performance measure
An objective function that determines
how successfully the agent behaves
E.g., 90% or 30%?
An agent, based on its percepts, produces an
action sequence:
if it is desirable, the agent is said to be performing well.
No universal performance measure for all
agents
19. Performance measure
A general rule:
Design performance measures according to
What one actually wants in the environment
Rather than how one thinks the agent should
behave
E.g., in vacuum-cleaner world
We want the floor clean, no matter how the
agent behaves
We don’t restrict how the agent behaves
20. Rationality
What is rational at any given time depends
on four things:
The performance measure defining the criterion
of success
The agent’s prior knowledge of the environment
The actions that the agent can perform
The agent's percept sequence up to now
21. Rational agent
For each possible percept sequence,
a rational agent should select
an action expected to maximize its performance
measure, given the evidence provided by the
percept sequence and whatever built-in knowledge
the agent has
E.g., an exam
Maximize marks, based on
the questions on the paper & your knowledge
22. Example of a rational agent
Performance measure
Awards one point for each clean square
at each time step, over 10000 time steps
Prior knowledge about the environment
The geography of the environment
Only two squares
The effect of the actions
23. Actions that the agent can perform
Left, Right, Suck and NoOp
Percept sequences
Where is the agent?
Whether the location contains dirt?
Under this circumstance, the agent is
rational.
Example of a rational agent
24. An omniscient agent
Knows the actual outcome of its actions in
advance
No other possible outcomes
An example:
while crossing a street, you are killed by a cargo door
that falls from a plane at 33,000 ft. Was crossing irrational?
Omniscience
25. Based on the circumstances, it is rational.
Rationality maximizes
expected performance
while perfection maximizes
actual performance
Hence rational agents are not omniscient.
Omniscience
26. Learning
Does a rational agent depend only on the
current percept?
No, the past percept sequence should also be
used
This is called learning
After experiencing an episode, the agent
should adjust its behaviors to perform better for the
same job next time.
27. Autonomy
If an agent just relies on the prior knowledge of its
designer rather than its own percepts then the
agent lacks autonomy
A rational agent should be autonomous- it
should learn what it can to compensate for
partial or incorrect prior knowledge.
E.g., a clock
No input (percepts)
Runs only on its own algorithm (prior knowledge)
No learning, no experience, etc.
28. Sometimes, the environment may not be
the real world
E.g., flight simulator, video games, Internet
They are all artificial but very complex
environments
Those agents working in these environments
are called
Software agent (softbots)
Because all parts of the agent are software
Software Agents
29. Task environments
Task environments are the problems
While the rational agents are the solutions
Specifying the task environment
PEAS description as fully as possible
Performance measure
Environment
Actuators
Sensors
Use automated taxi driver as an example
30. Task environments
Performance measure
How can we judge the automated driver?
Which factors are considered?
getting to the correct destination
minimizing fuel consumption
minimizing the trip time and/or cost
minimizing the violations of traffic laws
maximizing the safety and comfort, etc.
31. Environment
A taxi must deal with a variety of roads
Traffic lights, other vehicles, pedestrians, stray
animals, road works, police cars, etc.
Interact with the customer
Task environments
32. Actuators (for outputs)
Control over the accelerator, steering, gear
shifting and braking
A display to communicate with the customers
Sensors (for inputs)
Detect other vehicles, road situations
GPS (Global Positioning System) to know
where the taxi is
Many more devices are necessary
Task environments
33. A sketch of automated taxi driver
Task environments
34. Properties of task environments
Fully observable vs. Partially observable
If an agent's sensors give it access to the
complete state of the environment at each point
in time, then the environment is fully observable
An environment is effectively fully observable
if the sensors detect all aspects
that are relevant to the choice of action
35. Partially observable
• An environment might be Partially observable
because of noisy and inaccurate sensors or
because parts of the state are simply missing
from the sensor data.
Example:
A local dirt sensor of the cleaner cannot tell
Whether other squares are clean or not
36. Deterministic vs. stochastic
If the next state of the environment is completely
determined by the current state and
the actions executed by the agent, then the
environment is deterministic; otherwise, it is
stochastic.
Cleaner and taxi driver are
stochastic because of some unobservable aspects
(noise or unknown factors)
Properties of task environments
37. Episodic vs. sequential
An episode = agent’s single pair of perception & action
The quality of the agent’s action does not depend on
other episodes
Every episode is independent of each other
Episodic environment is simpler
The agent does not need to think ahead
Sequential
Current action may affect all future decisions
E.g., taxi driving and chess.
Properties of task environments
38. Static vs. dynamic
A dynamic environment is always changing
over time
E.g., the number of people in the street
While static environment
E.g., the destination
Semidynamic
the environment does not change over time
but the agent's performance score does
E.g., chess played with a clock
Properties of task environments
39. Discrete vs. continuous
If there are a limited number of distinct states,
clearly defined percepts and actions, the
environment is discrete
E.g., Chess game
Continuous: Taxi driving
Properties of task environments
40. Single agent vs. multiagent
Playing a crossword puzzle – single agent
Chess playing – two agents
Competitive multiagent environment
Chess playing
Cooperative multiagent environment
Automated taxi driver
Avoiding collision
Properties of task environments
41. Properties of task environments
Known vs. unknown
This distinction refers not to the environment itself but to
the agent’s state of knowledge about the environment.
• In a known environment, the outcomes for all actions are
given.
( example: solitaire card games).
• If the environment is unknown, the agent will have to learn
how it works in order to make good decisions.
( example: new video game).
44. Structure of agents
Agent = architecture + program
Architecture = some sort of computing device
(sensors + actuators)
(Agent) Program = some function that
implements the agent mapping = “?”
Agent Program = Job of AI
45. Agent programs
Input for Agent Program
Only the current percept
Input for Agent Function
The entire percept sequence
The agent must remember all of them
Implement the agent program as
A look up table (agent function)
46. Agent Programs
P = the set of possible percepts
T = lifetime of the agent
(the total number of percepts it receives)
Size of the lookup table: \sum_{t=1}^{T} |P|^t
Consider playing chess
|P| = 10, T = 150
Will require a table of at least 10^150 entries
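As a quick sanity check, the formula can be evaluated directly; this small snippet (ours, not from the slides) plugs in the chess figures quoted above.

```python
# Quick check of the table-size formula sum_{t=1}^{T} |P|^t with the
# chess figures quoted above (|P| = 10, T = 150).

P, T = 10, 150
entries = sum(P**t for t in range(1, T + 1))
print(f"{entries:.2e}")  # ~1.11e150, i.e., at least 10^150 entries
```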
47. Agent programs
Despite its huge size, the lookup table does
what we want.
The key challenge of AI:
Find out how to write programs that, to the
extent possible, produce rational behavior
from a small amount of code
rather than from a large number of table entries
E.g., a five-line program of Newton’s Method
50. Simple reflex agents
It uses just condition-action rules
The rules are like the form “if … then …”
Efficient, but narrow range of applicability,
because knowledge sometimes cannot be
stated explicitly
Work only
if the environment is fully observable
53. Model-based Reflex Agents
For the world that is partially observable
the agent has to keep track of an internal state
That depends on the percept history
Reflecting some of the unobserved aspects
E.g., driving a car and changing lane
Requiring two types of knowledge
How the world evolves independently of the
agent
How the agent’s actions affect the world
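A minimal Python sketch of this structure (ours; the rules and the state-update function are hypothetical placeholders) shows where those two types of knowledge live:

```python
# A minimal sketch of a model-based reflex agent (ours). The update
# function carries the two types of knowledge named above: how the world
# evolves independently, and how the agent's actions affect it.

class ModelBasedReflexAgent:
    def __init__(self, rules, update_state):
        self.state = None            # internal model of the unobserved world
        self.last_action = None
        self.rules = rules           # list of (condition, action) pairs
        self.update_state = update_state

    def __call__(self, percept):
        # Fold the new percept (and the last action) into the internal state.
        self.state = self.update_state(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
```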
54. Example Table Agent With Internal State

IF: Saw an object ahead, and turned right, and it's now clear ahead → THEN: Go straight
IF: Saw an object on my right, turned right, and object ahead again → THEN: Halt
IF: See no objects ahead → THEN: Go straight
IF: See an object ahead → THEN: Turn randomly
55. Example Reflex Agent With Internal State:
Wall-Following
Actions: left, right, straight, open-door
Rules:
1. If open(left) and open(right) and open(straight) then
choose randomly between right and left
2. If wall(left) and open(right) and open(straight) then straight
3. If wall(right) and open(left) and open(straight) then straight
4. If wall(right) and open(left) and wall(straight) then left
5. If wall(left) and open(right) and wall(straight) then right
6. If wall(left) and door(right) and wall(straight) then open-door
7. If wall(right) and wall(left) and open(straight) then straight
8. (Default) Move randomly
58. Goal-based agents
The current state of the environment is
not always enough
The goal is another issue to achieve
Judgment of rationality / correctness
Actions are chosen to achieve the goals, based on
the current state
the current percept
59. Goal-based agents
Conclusion
Goal-based agents are less efficient
but more flexible
For an agent, different goals mean different tasks
Search and planning
two other sub-fields in AI
used to find the action sequences that achieve the goal
61. Utility-based agents
Goals alone are not enough
to generate high-quality behavior
E.g. meals in Canteen, good or not ?
Many action sequences can achieve the goals
some are better and some worse
If goal means success,
then utility means the degree of success
(how successful it is)
63. Utility-based agents
State A is said to have higher utility
if state A is preferred over the others
Utility is therefore a function
that maps a state onto a real number
describing the degree of success
64. Utility-based agents (3)
Utility has several advantages:
When there are conflicting goals,
Only some of the goals but not all can be
achieved
utility describes the appropriate trade-off
When there are several goals,
none of which can be achieved with certainty,
utility provides a way to weigh the likelihood of
success against the importance of the goals
65. Learning Agents
After an agent is programmed, can it
work immediately?
No, it still needs teaching
In AI,
Once an agent is done
We teach it by giving it a set of examples
Test it by using another set of examples
We then say the agent learns
A learning agent
66. Learning Agents
Four conceptual components
Learning element
Making improvement
Performance element
Selecting external actions
Critic
Tells the Learning element how well the agent is doing with
respect to fixed performance standard.
(Feedback from user or examples, good or not?)
Problem generator
Suggest actions that will lead to new and informative
experiences.
69. Problem types
Deterministic, fully observable → single-state problem
Agent knows exactly which state it will be in; solution is a
sequence
Non-observable → sensorless problem (conformant problem)
Agent may have no idea where it is; solution is a sequence
Nondeterministic and/or partially observable → contingency problem
percepts provide new information about current state
often interleave search and execution
Unknown state space → exploration problem
71. Example: vacuum world
Single-state, start in #5.
Solution? [Right, Suck]
Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
73. Example: vacuum world
Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
[Right,Suck,Left,Suck]
Contingency
Nondeterministic: Suck may
dirty a clean carpet
Partially observable: location, dirt at current location.
Percept: [L, Clean], i.e., start in #5 or #7
Solution? [Right, if dirt then Suck]
75. Example: The 8-puzzle
states? locations of tiles
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
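One plausible Python encoding of this formulation (ours; the tuple representation and goal layout are assumptions):

```python
# One possible encoding of the 8-puzzle formulation above (ours): a state
# is a 9-tuple with 0 for the blank; actions move the blank; cost 1/move.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def actions(state):
    i = state.index(0)               # position of the blank, 0..8
    acts = []
    if i >= 3: acts.append("Up")
    if i <= 5: acts.append("Down")
    if i % 3 != 0: acts.append("Left")
    if i % 3 != 2: acts.append("Right")
    return acts

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]            # where the blank moves to
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)    # goal test: state == goal (assumed layout)
start = (1, 0, 2, 3, 4, 5, 6, 7, 8)
print(actions(start))                 # ['Down', 'Left', 'Right']
print(result(start, "Left") == goal)  # True
```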
76. Search strategies
A search strategy is defined by picking the order of
node expansion
Strategies are evaluated along the following
dimensions:
completeness: does it always find a solution if one exists?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space
77. Uninformed vs Informed search strategies
Depth-first, breadth-first and uniform-cost
searches are uninformed.
In informed search, there is an estimate
available of the cost (distance) from
each state (city) to the goal.
Heuristic embodied in function h(n),
estimate of remaining cost from search
node n to the least cost goal.
78. Cont..
Graph being searched is a graph of
states.
Search algorithm defines a tree of
search nodes.
Two paths to the same state generate
two different search nodes.
Heuristic could be defined on underlying
state; the path to a state does not affect
estimate of distance to the goal.
79. Uninformed search strategies
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional search
80. Breadth-first search
Breadth-first search on a simple binary
tree.
At each stage, the node to be expanded
next is indicated by a marker.
The nodes that are already explored are
gray.
The nodes with dashed lines are not
generated yet.
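For reference, a minimal BFS sketch in Python (ours; the example graph is hypothetical, not the binary tree from the figure):

```python
# A minimal breadth-first search sketch over an explicit graph (ours).
from collections import deque

def breadth_first_search(start, goal, neighbors):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                  # shallowest solution found
        for n in neighbors[path[-1]]:
            if n not in explored:
                explored.add(n)
                frontier.append(path + [n])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search("A", "D", graph))  # ['A', 'B', 'D']
```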
89. Depth-limited Search
Equivalent to depth-first search with
depth limit l, i.e., nodes at depth l have
no successors
Implementation: typically recursive.
90. Iterative deepening search
• Do iterations of depth-limited search starting
with a limit of 0.
• If you fail to find a goal with a particular depth
limit, increment it and continue with the
iterations.
• Terminate when a solution is found or if the
depth-limited search returns failure, meaning
that no solution exists.
• Combines the linear space complexity of DFS
with the completeness property of BFS.
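A minimal Python sketch of this procedure (ours; it assumes a tree-shaped state space, so cycle checking is omitted for brevity):

```python
# A minimal iterative deepening sketch along the lines above (ours):
# repeat depth-limited DFS with limits 0, 1, 2, ...

def depth_limited(node, goal, neighbors, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cutoff reached
    for n in neighbors[node]:
        sub = depth_limited(n, goal, neighbors, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def iterative_deepening(start, goal, neighbors, max_depth=50):
    for limit in range(max_depth + 1):   # limits 0, 1, 2, ...
        path = depth_limited(start, goal, neighbors, limit)
        if path is not None:
            return path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(iterative_deepening("A", "D", graph))  # ['A', 'B', 'D']
```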
93. Bidirectional Search
Run two simultaneous searches:
one forward from the initial state
one backward from the goal state
Implementation: Replace the goal check
with a check to see whether the
frontiers of the searches intersect
96. Informed Search Strategies
A search strategy which searches the most
promising branches of the state-space first can:
find a solution more quickly,
find solutions even when there is limited time
available,
often find a better solution
A search strategy which is better than another at
identifying the most promising branches of a
search-space is said to be more informed.
97. A* Search
• It is the best-known form of Best-First search.
• It avoids expanding paths that are already
expensive, but expands most promising paths
first.
• f(n) = g(n) + h(n), where
– g(n) the cost (so far) to reach the node
– h(n) estimated cost to get from the node to the
goal
– f(n) estimated total cost of path through n to goal.
It is implemented using a priority queue
ordered by increasing f(n).
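A minimal A* sketch in Python (ours) using a priority queue ordered by f(n); the example graph and heuristic values are assumptions:

```python
# A minimal A* sketch (ours) with a priority queue ordered by
# f(n) = g(n) + h(n).
import heapq

def a_star(start, goal, neighbors, h):
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for n, cost in neighbors[node]:
            g2 = g + cost
            if g2 < best_g.get(n, float("inf")):
                best_g[n] = g2
                heapq.heappush(frontier, (g2 + h(n), g2, n, path + [n]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 2)], "D": []}
h = {"A": 3, "B": 2, "C": 2, "D": 0}.get   # assumed admissible estimates
print(a_star("A", "D", graph, h))          # (['A', 'B', 'C', 'D'], 4)
```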
98. Greedy Best First Search
It expands the node that is estimated to
be closest to goal.
It expands nodes based on f(n) = h(n).
It is implemented using a priority queue.
Disadvantage
It can get stuck in loops.
It is not optimal.
99. Greedy Best-First Search
100. A Quick Review - Again
g(n) = cost from the initial state to the
current state n
h(n) = estimated cost of the cheapest
path from node n to a goal node
f(n) = evaluation function to select a
node for expansion (usually the lowest
cost node)
101. A* Search
A* (A star) is the most widely known
form of Best-First search
It evaluates nodes by combining g(n) and
h(n)
f(n) = g(n) + h(n)
Where
g(n) = cost so far to reach n
h(n) = estimated cost to goal from n
f(n) = estimated total cost of path through n
102. A* Search
When h(n) = actual cost to goal
Only nodes in the correct path are
expanded
Optimal solution is found
When h(n) < actual cost to goal
Additional nodes are expanded
Optimal solution is found
When h(n) > actual cost to goal
Optimal solution can be overlooked
103. Greedy Best-First Search
104. Memory-Bounded Heuristic Search
Iterative Deepening A* (IDA*)
Similar to Iterative Deepening Search, but cut
off at (g(n)+h(n)) > max instead of depth > max
At each iteration, cutoff is the first f-cost that
exceeds the cost of the node at the previous
iteration
Simple Memory Bounded A* (SMA*)
Set max to some memory bound
If the memory is full, to add a node drop the
worst (g+h) node that is already stored
Expands newest best leaf, deletes oldest worst
leaf
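A minimal IDA* sketch in Python along these lines (ours; graph and heuristic values are assumptions):

```python
# A minimal IDA* sketch (ours): depth-first search cut off when
# g(n) + h(n) exceeds the current bound; the next bound is the smallest
# f-cost that exceeded it.

def ida_star(start, goal, neighbors, h):
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None               # report the exceeding f-cost
        if node == goal:
            return f, path
        minimum = float("inf")
        for n, cost in neighbors[node]:
            if n not in path:            # avoid cycles on the current path
                t, found = dfs(n, g + cost, bound, path + [n])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found is not None or bound == float("inf"):
            return found

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": [("D", 2)], "D": []}
h = {"A": 3, "B": 2, "C": 2, "D": 0}.get
print(ida_star("A", "D", graph, h))  # ['A', 'B', 'C', 'D']
```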
105. Optimization Problems
Instead of considering the whole state
space, consider only the current state
Limits necessary memory; paths not
retained
Amenable to large or continuous
(infinite) state spaces where exhaustive
search algorithms are not possible
Local search algorithms can’t backtrack
106. Local Search Algorithms
They are useful for solving optimization
problems
Aim is to find a best state according to an
objective function
Many optimization problems do not fit the
standard search model outlined in chapter 3
E.g. There is no goal test or path cost in Darwinian
evolution
State space landscape
107. Optimization Problems
Given a measure of goodness (of fit)
Find optimal parameters (e.g., correspondences)
that maximize the goodness measure (or minimize the
badness measure)
Optimization techniques
Direct (closed-form)
Search (generate-test)
Heuristic search (e.g., hill climbing)
Genetic algorithms
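As a tiny illustration of heuristic search in this generate-test spirit, here is a minimal hill-climbing sketch (ours; the objective and neighborhood are hypothetical):

```python
# A tiny hill-climbing sketch (ours): move to the best neighbor until no
# neighbor improves the objective -(x - 3)^2 (a hypothetical example).

def hill_climb(x, objective, neighbors):
    while True:
        best = max(neighbors(x), key=objective)
        if objective(best) <= objective(x):
            return x                     # local maximum reached
        x = best

objective = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, objective, neighbors))  # 3
```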
109. Constraint satisfaction problems (CSPs)
Standard search problem:
state is a "black box" – any data structure that supports
successor function, heuristic function, and goal test
CSP:
state is defined by variables Xi with values from domain Di
goal test is a set of constraints specifying allowable
combinations of values for subsets of variables
Simple example of a formal representation language
Allows useful general-purpose algorithms with more
power than standard search algorithms
110. Example: Map-Coloring
Variables WA, NT, Q, NSW, V, SA, T
Domains Di = {red,green,blue}
Constraints: adjacent regions must have different colors
e.g., WA ≠ NT, or (WA,NT) in {(red,green),(red,blue),(green,red),
(green,blue),(blue,red),(blue,green)}
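The same CSP can be written out as data; a minimal Python sketch (ours; the adjacency list is the standard Australia map):

```python
# The map-coloring CSP above written out as data (ours): variables,
# domains, and the "different colors" constraint over adjacent regions.

variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
            ("NSW", "V")]

def consistent(assignment):
    """Goal test: all constraints between assigned regions hold."""
    return all(assignment[a] != assignment[b]
               for a, b in adjacent
               if a in assignment and b in assignment)

print(consistent({"WA": "red", "NT": "green", "SA": "blue"}))  # True
```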
112. Constraint graph
Binary CSP: each constraint relates two variables
Constraint graph: nodes are variables, arcs are
constraints
113. Varieties of CSPs
Discrete variables
finite domains:
n variables, domain size d → O(d^n) complete assignments
e.g., Boolean CSPs, incl. Boolean satisfiability (NP-complete)
infinite domains:
integers, strings, etc.
e.g., job scheduling, variables are start/end days for each job
need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3
Continuous variables
e.g., start/end times for Hubble Space Telescope observations
linear constraints solvable in polynomial time by linear
programming
114. Varieties of constraints
Unary constraints involve a single variable,
e.g., SA ≠ green
Binary constraints involve pairs of variables,
e.g., SA ≠ WA
Higher-order constraints involve 3 or more
variables,
e.g., cryptarithmetic column constraints
115. Example: Cryptarithmetic
Variables: F, T, U, W, R, O, X1, X2, X3
Domains: {0,1,2,3,4,5,6,7,8,9}
Constraints: Alldiff (F,T,U,W,R,O)
O + O = R + 10 · X1
X1 + W + W = U + 10 · X2
X2 + T + T = O + 10 · X3
X3 = F, T ≠ 0, F ≠ 0
116. Real-world CSPs
Assignment problems
e.g., who teaches what class
Timetabling problems
e.g., which class is offered when and where?
Transportation scheduling
Factory scheduling
Notice that many real-world problems involve real-
valued variables
117. Standard search formulation (incremental)
Let's start with the straightforward approach, then fix it
States are defined by the values assigned so far
Initial state: the empty assignment { }
Successor function: assign a value to an unassigned variable that does
not conflict with current assignment
fail if no legal assignments
Goal test: the current assignment is complete
1. This is the same for all CSPs
2. Every solution appears at depth n with n variables
→ use depth-first search
3. Path is irrelevant, so can also use complete-state formulation
4. b = (n − l)d at depth l, hence n! · d^n leaves
118. Backtracking search
Variable assignments are commutative, i.e.,
[ WA = red then NT = green ] is the same as [ NT = green then WA = red ]
Only need to consider assignments to a single variable at each
node
→ b = d and there are d^n leaves
Depth-first search for CSPs with single-variable assignments is
called backtracking search
Backtracking search is the basic uninformed algorithm for CSPs
Can solve n-queens for n ≈ 25
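A minimal backtracking sketch for the map-coloring CSP above (ours; self-contained, with no ordering heuristics, so it corresponds to plain backtracking search):

```python
# A minimal backtracking search sketch (ours): one variable per node,
# undo on failure; no variable/value ordering heuristics.

variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
            ("NSW", "V")]

def conflicts(var, value, assignment):
    """True if giving `var` this value clashes with an assigned neighbor."""
    for a, b in adjacent:
        other = b if a == var else a if b == var else None
        if other is not None and assignment.get(other) == value:
            return True
    return False

def backtrack(assignment):
    if len(assignment) == len(variables):
        return assignment                # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if not conflicts(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]          # undo and try the next value
    return None

print(backtrack({}))  # e.g., {'WA': 'red', 'NT': 'green', ..., 'T': 'red'}
```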
123. Improving backtracking efficiency
General-purpose methods can give
huge gains in speed:
Which variable should be assigned next?
In what order should its values be tried?
Can we detect inevitable failure early?
124. Most constrained variable
Most constrained variable:
choose the variable with the fewest legal
values
a.k.a. minimum remaining values (MRV)
heuristic
125. Most constraining variable
Tie-breaker among most constrained
variables
Most constraining variable:
choose the variable with the most
constraints on remaining variables
126. Least constraining value
Given a variable, choose the least
constraining value:
the one that rules out the fewest values in
the remaining variables
127. Forward checking
Idea:
Keep track of remaining legal values for unassigned
variables
Terminate search when any variable has no legal values
131. Constraint propagation
Forward checking propagates information from
assigned to unassigned variables, but doesn't
provide early detection for all failures:
NT and SA cannot both be blue!
Constraint propagation repeatedly enforces
constraints locally
135. Arc consistency
Simplest form of propagation makes each arc consistent
X → Y is consistent iff
for every value x of X there is some allowed y
If X loses a value, neighbors of X need to be rechecked
Arc consistency detects failure earlier than forward
checking
Can be run as a preprocessor or after each assignment
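A minimal AC-3-style sketch in Python (ours; constraints are given as predicates per directed arc). On the mutually adjacent regions NT, SA, Q with NT fixed to blue, it detects failure early, as stated above:

```python
# A minimal AC-3-style sketch (ours): revise each arc X -> Y, and when X
# loses a value, recheck the arcs into X.
from collections import deque

def ac3(domains, constraints):
    """constraints: dict mapping each arc (X, Y) to a predicate on (x, y)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        revised = False
        for vx in set(domains[x]):
            if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
                domains[x].discard(vx)   # vx has no supporting value in Y
                revised = True
        if not domains[x]:
            return False                 # domain wiped out: early failure
        if revised:
            for a, b in constraints:
                if b == x and a != y:
                    queue.append((a, b))
    return True

ne = lambda u, v: u != v
doms = {"NT": {"blue"}, "SA": {"blue", "green"}, "Q": {"blue", "green"}}
cons = {(a, b): ne for a in doms for b in doms if a != b}
print(ac3(doms, cons))  # False: SA and Q cannot both differ from NT and each other
```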
136. Local search for CSPs
Hill-climbing, simulated annealing typically work with
"complete" states, i.e., all variables assigned
To apply to CSPs:
allow states with unsatisfied constraints
operators reassign variable values
Variable selection: randomly select any conflicted
variable
Value selection by min-conflicts heuristic:
choose value that violates the fewest constraints
i.e., hill-climb with h(n) = total number of violated constraints
137. Example: 4-Queens
States: 4 queens in 4 columns (4^4 = 256 states)
Actions: move queen in column
Goal test: no attacks
Evaluation: h(n) = number of attacks
Given random initial state, can solve n-queens in almost
constant time for arbitrary n with high probability (e.g., n =
10,000,000)
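A minimal min-conflicts sketch for n-queens in Python (ours; ties between equally good rows are broken arbitrarily here, whereas the heuristic as usually stated breaks them randomly):

```python
# A minimal min-conflicts sketch for n-queens (ours), one queen per column:
# pick a conflicted column and move its queen to the row with the fewest
# conflicts.
import random

def attacks(rows, col):
    """Number of queens attacking the queen in column `col`."""
    return sum(1 for c in range(len(rows)) if c != col and
               (rows[c] == rows[col] or
                abs(rows[c] - rows[col]) == abs(c - col)))

def min_conflicts(n, max_steps=100_000):
    rows = [random.randrange(n) for _ in range(n)]   # random complete state
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if attacks(rows, c) > 0]
        if not conflicted:
            return rows                              # goal: no attacks
        col = random.choice(conflicted)
        rows[col] = min(range(n), key=lambda r: attacks(
            rows[:col] + [r] + rows[col + 1:], col))
    return None

print(min_conflicts(8))  # e.g., [4, 6, 0, 3, 1, 7, 5, 2]
```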
138. Summary
CSPs are a special kind of problem:
states defined by values of a fixed set of variables
goal test defined by constraints on variable values
Backtracking = depth-first search with one variable assigned per
node
Variable ordering and value selection heuristics help
significantly
Forward checking prevents assignments that guarantee later
failure
Constraint propagation (e.g., arc consistency) does additional
work to constrain values and detect inconsistencies
Iterative min-conflicts is usually effective in practice