This presentation introduces the fundamentals of contemporary AI research and highlights a significant challenge that remains unaddressed: the trade-off between the quality of decision making and its speed.
It goes on to discuss the concepts behind the "Integrated Influence Architecture", a new approach to making high-speed, high-quality decisions currently under development at the University of Strathclyde.
What is AI?
• Any time a computer makes any sort of decision between a number of options, it can be thought of as acting “intelligently”.
• Whether or not those decisions are the right ones determines how “good” the intelligence is.
Basics
• Broadly, there are two conceptual paradigms in AI:
‣ Reaction
‣ Deliberation
• Reaction aims to program “instinctive” reactions to minimal subsets of stimuli.
• Deliberation describes reasoning-based approaches, using all the information available.
Automated Planning
• AP is a deliberative technique
• Given a description of:
‣ The current state of the world
‣ The actions that can be applied and the way they affect the world
‣ A set of goals to be achieved
• It automatically determines a sequence of actions that will complete the task.
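As a hedged illustration of the "given state, actions, goals, determine a sequence" loop above (a minimal sketch, not the planners discussed later), the core of planning can be written as a breadth-first forward search over states. The one-truck, one-package logistics world below is invented for the example.

```python
from collections import deque

# Hypothetical mini-domain: each state is a frozenset of facts, and
# successors() yields the (action name, resulting state) pairs that apply.
def successors(state):
    """Yield applicable actions for a toy one-truck logistics world."""
    for loc_from, loc_to in [("L1", "L2"), ("L2", "L1")]:
        if f"truck-at-{loc_from}" in state:
            yield (f"drive-{loc_from}-{loc_to}",
                   state - {f"truck-at-{loc_from}"} | {f"truck-at-{loc_to}"})
    for loc in ["L1", "L2"]:
        if {"truck-at-" + loc, "pkg-at-" + loc} <= state:
            yield ("load-" + loc, state - {"pkg-at-" + loc} | {"pkg-in-truck"})
        if {"truck-at-" + loc, "pkg-in-truck"} <= state:
            yield ("unload-" + loc, state - {"pkg-in-truck"} | {"pkg-at-" + loc})

def plan(initial, goals):
    """Breadth-first search: return a shortest action sequence reaching goals."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, actions = frontier.popleft()
        if goals <= state:
            return actions
        for name, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None  # no plan exists

print(plan(frozenset({"truck-at-L1", "pkg-at-L1"}),
           frozenset({"pkg-at-L2"})))
# → ['load-L1', 'drive-L1-L2', 'unload-L2']
```

Even in this tiny world, most branches (driving back and forth) do not help, which previews the search-guidance problem discussed below.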
PDDL
• Planning Domain Definition Language
• Propositional representation of the world
• Anything not asserted true is false (the closed-world assumption)
• Anything true now will remain true unless explicitly negated
• Extensions deal with a variety of extras
‣ e.g. numerical values, temporal actions, continuous effects etc.
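As a small, hedged illustration of this style of representation (a sketch in standard PDDL syntax; the `drive` action and its predicate names are invented for the example), note how the effect only states what changes, and everything else persists:

```pddl
(define (domain mini-logistics)
  (:predicates (truck-at ?l) (connected ?from ?to))
  (:action drive
    :parameters (?from ?to)
    :precondition (and (truck-at ?from) (connected ?from ?to))
    :effect (and (not (truck-at ?from))   ; explicitly negate the old location
                 (truck-at ?to))))        ; everything unmentioned persists
```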
The Core of Planning
• The previous example made it look easy.
• It was a trivial example, worked out in advance.
• At every action layer, many choices don’t help
‣ Easy to disappear down a rabbit hole
• Planning is all about guiding search across the action/fact layer space.
‣ Different heuristics, search strategies, and pruning techniques
Problems
• The search space is massive.
‣ Computational complexity is high
‣ The processing time required is also high
• Not only that, but the models used are “abstractions”
‣ They typically remove the chance of an action failing
‣ They typically remove other agents and the consequences of their actions
‣ They typically remove a lot of detail, e.g. the driver for a truck
Time Constraints
• International Planning Competition entrants get around 30 minutes to generate a plan for a single problem.
• AAAI General Game Playing entrants get 5-10s to decide on their next move.
• The games industry aims for 60fps execution - around 16ms per frame.
‣ Most of that is spent on graphics, physics etc.
‣ AI gets maybe 1ms to work out everything it needs to
Reactive AI
• Reactive AI makes snap decisions based on the current state of the world.
• It is more tolerant of action failure - no single action is part of a long chain of actions that depend on it.
• It typically gives a very good response time to input received from the environment.
Subsumption Architecture
• The quintessential reactive approach.
• A library of behaviours ordered by priority.
• Each behaviour maps detected input to a relevant response.
• Higher priority behaviours are able to “subsume”, or override, the output of the lower priority ones.
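The priority-ordered behaviour library above can be sketched as follows (a minimal, assumed structure, not Brooks' original implementation; the behaviour names and percept keys are invented):

```python
# Subsumption-style controller sketch: behaviours are checked in priority
# order, and the first whose trigger fires "subsumes" everything below it.
def avoid_obstacle(percept):
    return "turn-away" if percept.get("obstacle_near") else None

def seek_goal(percept):
    return "move-to-goal" if percept.get("goal_visible") else None

def wander(percept):
    return "wander"  # lowest-priority default behaviour always fires

BEHAVIOURS = [avoid_obstacle, seek_goal, wander]  # highest priority first

def act(percept):
    for behaviour in BEHAVIOURS:
        action = behaviour(percept)
        if action is not None:
            return action

print(act({"goal_visible": True}))                         # → move-to-goal
print(act({"goal_visible": True, "obstacle_near": True}))  # → turn-away
```

Note how the second call shows subsumption in action: obstacle avoidance overrides goal seeking even though both triggers fire.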
Influence Maps
• A much simpler approach
• Influence radiates from objects similarly to magnetic fields.
• Good things attract the agent, bad things repel it.
• The interaction of influences is typically (but not necessarily) additive.
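A minimal sketch of the additive model above, under invented assumptions (linear falloff with Manhattan distance, made-up source strengths):

```python
# Additive influence map on a small grid: positive sources attract,
# negative sources repel, and contributions simply sum at each cell.
def influence_map(width, height, sources):
    """sources: list of (x, y, strength) tuples."""
    grid = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for sx, sy, strength in sources:
                dist = abs(x - sx) + abs(y - sy)  # Manhattan distance
                grid[y][x] += strength / (1 + dist)  # linear-ish decay
    return grid

# A "good thing" at (0,0) and a "bad thing" at (2,2).
grid = influence_map(3, 3, [(0, 0, 4.0), (2, 2, -2.0)])
```

The agent then simply moves to the neighbouring cell with the highest value, with no search involved.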
Stateful vs. Stateless
• Deliberative reasoning is by its nature stateful
• Reactive systems are typically stateless
• Trying to retrofit them to include state typically adds the kind of complexity they were designed to avoid.
‣ E.g. trying to capture state in a neural network involves a separate network designed to feed delayed output back into the input
• Reactive systems struggle to make long-term plans
Limitations of AI
• Contemporary AI is capable of very sophisticated, insightful decision making.
‣ ...eventually
• It is also able to make decisions very rapidly.
‣ ...at the expense of long-term decision quality
• A range of problems require decisions that are both high quality and made within a short time frame.
Integrated Influence
• My work focuses on trying to bridge the gap between reaction and deliberation in novel ways
• Previous approaches have typically either:
‣ Created an agent that deliberates about certain aspects of the world and reacts to others
‣ Created an agent that reacts within the parameters of a deliberatively generated trajectory
• Neither approach has proven particularly robust
Concept
• We take the view that many aspects of the world can’t be tackled by one or other paradigm alone, but require both.
• To this end, our architecture aims to constantly use all information available, both deliberative and reactive, to make decisions
Search vs. Evaluation
• Searching spaces is a complex task.
‣ Typically at least NP-hard; PDDL domains can be as complex as PSPACE-complete
• What if, instead of performing search, we could reformulate the problem into something closer to function evaluation?
Propositions
• PDDL’s propositional representation gives a state representation of very high dimension, with each dimension having exactly two possible values.
• Can we do better with another representation format?
SAS+
• SAS+ groups mutually exclusive PDDL propositions together.
‣ Propositional - at(P1, L1), at(P1, L2), at(P1, L3), in(P1, T1)
‣ SAS+ - locationP1 ∈ {L1, L2, L3, T1}
• It also captures the ordering that the values take
‣ E.g. moving from any Lx to any Ly, locationP1 takes the value T1 in between
• It identifies the dependencies between different types of object
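The grouping above can be sketched directly (a hedged illustration mirroring the slide's example; the dictionary layout is an assumption, not the standard SAS+ file format):

```python
# The four mutually exclusive propositions for package P1 collapse into one
# multi-valued SAS+ variable, and a Domain Transition Graph (DTG) records
# which value changes are possible: P1 only moves between Ls via truck T1.
sas_variables = {"locationP1": ["L1", "L2", "L3", "T1"]}

dtg = {
    "L1": ["T1"], "L2": ["T1"], "L3": ["T1"],
    "T1": ["L1", "L2", "L3"],
}

def propositional_state(var, value):
    """Recover the equivalent one-true/rest-false propositional encoding."""
    return {v: (v == value) for v in sas_variables[var]}

state = propositional_state("locationP1", "L2")
# → {'L1': False, 'L2': True, 'L3': False, 'T1': False}
```

One 4-valued variable replaces four boolean dimensions, which is the "smaller space" exploited by the Stacks later in the talk.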
Influence Landscapes
• The concept of an Influence Map was introduced earlier
• Influence Landscapes extend the idea away from a purely spatial representation and into a conceptual representation.
• This allows the same function-based approach to be applied to reasoning.
Caveats
• It isn’t quite this easy
• We need to consider ‘Causal Links’
• To load the package at L1, the truck needs to be at L1 too.
• An interlinked set of DTGs (Domain Transition Graphs) allows this to be captured and represented.
Landscape Generators
• The previous example was generated using a simple critical path analysis.
• We need a much more informed view of the world around the agent.
• It illustrates the concept though - and it is also useful for providing the “naive” view of the world.
• Critical path analysis gives some information about the structure of the world, but is not fully informed.
Stacks
• “Stacks” is the name given to Landscape Generators
• Each stack is tasked with assigning a numerical value to every node within the DTG/CG space.
‣ Remember that this is a smaller space than the propositional one
• Each stack deals with a specific aspect of the world or a specific approach
‣ E.g. reactive, deliberative (or some other source of information)
Deliberative - Plan
• We can guide the search by bringing in the information a deliberative reasoner provides.
‣ E.g. Automated Planning
• We can implement this as a stack generating a landscape that reflects the influence the plan exerts on our agent.
• Typically this will be a best-case assumption.
Tight Conformity
• Reward every node the plan requires the agent to visit in the graph, Royal Road style.
• Visualise the landscape as a ridge to the summit
‣ Excellent in best cases
‣ Poor when flexibility is required
• After deviating from the plan, the best approach seems to be to rejoin the ridge - but this may not be the case.
Loose Conformity
• Use the plan to mark out a general path without strictly defining each node of the plan.
• A much more flexible approach that guides the agent rather than dictating to it.
• But how do you determine which nodes to mark and which to ignore?
Focal Nodes
• Focal Nodes are these waypoints in the plan.
• Previous work with SAS+ has shown that a DTG can be deformed to be laid out in any way.
‣ E.g. a Logistics-style domain overlaid on a map of Europe.
• This highlights, by inspection, clumps of nodes and the connections between them
‣ E.g. the Channel Tunnel, the Dover ferry etc.
Clustering
• We can pick out the Focal Nodes by hand, but that’s no fun.
• Instead, we use the structure of the graph to find them automatically.
• Clustering the nodes of the graph allows us to group nodes together by proximity
• Fuzzy clustering allows us to identify the nodes that lie between groups
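A toy sketch of the fuzzy-membership idea (not a full fuzzy c-means run; the 1D positions, inverse-distance weighting, and threshold are all invented for illustration). Nodes with roughly equal membership in two clusters are exactly the "between groups" nodes that make good Focal Node candidates:

```python
# Fuzzy membership from inverse distance to each cluster centre.
def memberships(pos, centres):
    inv = [1.0 / (abs(pos - c) + 1e-9) for c in centres]
    total = sum(inv)
    return [w / total for w in inv]

centres = [0.0, 10.0]
for node, pos in {"a": 1.0, "b": 5.0, "c": 9.0}.items():
    m = memberships(pos, centres)
    between = abs(m[0] - m[1]) < 0.2  # near-equal pull from both clusters
    print(node, [round(x, 2) for x in m], "between" if between else "")
# node "b", sitting halfway between the centres, comes out as "between"
```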
Using Focal Nodes
• Focal Nodes can be identified offline for every DTG in the domain.
• Focal Nodes that the plan indicates should be passed through then become “activated”.
• These nodes are given influence in the landscape, and this is propagated out across the graph to guide the agent to these key nodes.
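The propagation step above can be sketched as a breadth-first spread from the activated nodes (the 0.5 per-hop decay and the keep-the-strongest rule are assumptions for illustration, not the architecture's actual parameters):

```python
from collections import deque

# Spread influence outwards from activated Focal Nodes across a DTG,
# keeping the strongest influence seen at each node.
def propagate(graph, activated, decay=0.5):
    influence = {}
    for node, strength in activated.items():
        queue = deque([(node, strength)])
        while queue:
            n, s = queue.popleft()
            if s <= influence.get(n, 0.0):
                continue  # a stronger influence already reached this node
            influence[n] = s
            for neighbour in graph[n]:
                queue.append((neighbour, s * decay))
    return influence

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
inf = propagate(graph, {"C": 8.0})
# → {'C': 8.0, 'B': 4.0, 'A': 2.0}
```

The resulting gradient is what pulls the agent towards the activated nodes without prescribing every intermediate step.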
Environmental Data
• We can overcome the deficiencies in the deliberative landscape by bringing in data about the environment
• This gives the kind of insight that a reactive system would use to make decisions.
• It allows us to inform the agent about things that may require it to deviate from the planned trajectory
Preferences
• Preferences allow the agent to bias the influence of nodes in the graph at execution time, based on the data being sensed.
• They can be either positive or negative.
• A preference applies an influence of appropriate strength to the target node and then propagates it out
Road Blocks
• A Road Block is an edge that should be in the domain but for some reason is not traversable at this time.
• There are two conceptual models:
‣ Cancelled flight - this edge will never be traversable
‣ Blocked road - this edge may be traversable later
• Should the edge be removed?
‣ We opted to implement the ‘Road Block’ model, as this allows re-sensing later to check the state of the edge.
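A minimal sketch of the chosen 'Road Block' model (class and method names are invented): the edge is flagged untraversable rather than deleted, so a later re-sense can reopen it.

```python
# Road Block model: blocked edges are hidden from traversal but retained,
# so they can be re-sensed and reopened later.
class RoadGraph:
    def __init__(self, edges):
        self.edges = {e: True for e in edges}  # edge -> currently traversable?

    def block(self, edge):
        self.edges[edge] = False  # blocked, but the edge still exists

    def resense(self, edge, traversable):
        self.edges[edge] = traversable  # later sensing may reopen it

    def neighbours(self, node):
        return [b for (a, b), open_ in self.edges.items()
                if a == node and open_]

g = RoadGraph([("L1", "L2"), ("L2", "L3")])
g.block(("L1", "L2"))
print(g.neighbours("L1"))              # → [] while blocked
g.resense(("L1", "L2"), True)
print(g.neighbours("L1"))              # → ['L2'] after reopening
```

Under the 'Cancelled flight' model the edge would instead be deleted outright, and no later re-sense could recover it.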
Integrated Landscape
• By combining the landscapes from each of the individual stacks, we get the “Integrated Influence Landscape” from which the architecture draws its name.
• We are currently using an unweighted additive model for combining them
‣ This may prove to be sub-optimal in further testing
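The unweighted additive combination is simple enough to state directly (the per-stack landscapes below, represented as node-to-influence dicts, are invented for illustration):

```python
# Unweighted additive combination of per-stack landscapes into the IIL.
def integrate(landscapes):
    combined = {}
    for landscape in landscapes:
        for node, value in landscape.items():
            combined[node] = combined.get(node, 0.0) + value
    return combined

deliberative = {"A": 1.0, "B": 3.0, "C": 5.0}  # e.g. plan-conformity stack
reactive     = {"B": -2.0, "C": 0.5}           # e.g. a sensed hazard at B
iil = integrate([deliberative, reactive])
# → {'A': 1.0, 'B': 1.0, 'C': 5.5}
```

Note how the sensed hazard at B flattens the planned gradient there: a weighted model would instead need a tuned coefficient per stack, which is the open question flagged above.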
Using the IIL
• Given an IIL, the agent can then climb the gradient towards the goal, changing between DTGs as required by the Causal Graph.
• A Hill Climbing algorithm is the obvious choice, but it gets stuck at local maxima
‣ We experimented with Forced-movement Hill Climbing
‣ Also Neighbourhood-bounded A*
‣ Currently using Forced HC with ties broken randomly
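The current choice, Forced Hill Climbing with random tie-breaking, can be sketched as follows (a hedged illustration over a toy graph; the step limit and seeding are assumptions added to keep the sketch bounded and repeatable):

```python
import random

# Forced-movement Hill Climbing: the agent must move every step, so it
# cannot sit on a local maximum; ties between equally good neighbours
# are broken randomly.
def forced_hill_climb(graph, influence, start, goal, max_steps=20, seed=0):
    rng = random.Random(seed)
    node, path = start, [start]
    for _ in range(max_steps):
        if node == goal:
            break
        best = max(influence[n] for n in graph[node])
        node = rng.choice([n for n in graph[node] if influence[n] == best])
        path.append(node)
    return path

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
influence = {"A": 1, "B": 2, "C": 3, "D": 4}
print(forced_hill_climb(graph, influence, "A", "D"))
# → ['A', 'B', 'C', 'D']
```

Because movement is forced, a local maximum only delays the agent rather than trapping it; the random tie-break keeps it from oscillating deterministically between two equal neighbours.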
Bonus Feature!
• We brought up earlier the time constraints of the games industry - around 1ms per frame for AI.
• Games give us a great context for testing “real world” style applications, and often already have the fast/smart requirements
• A fully controllable simulation environment
• Very pretty demos
Parallelisation
• By its nature, the stack paradigm is very flexible
• It is designed for each stack to be updated asynchronously
• This makes it very suitable for parallel execution
‣ Increasingly a big factor in modern computing
Vector Operations
• The vast majority of the maths mentioned can be stated as vector and matrix operations.
• This makes the whole architecture very suitable for execution on Cell SPUs.
‣ Synergistic Processing Units are vector-based coprocessors in Cell-based systems such as the PS3.
• SPUs are typically not used efficiently or fully.
‣ Effectively “free” processing power.
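To illustrate why the architecture vectorises well, one hop of influence spreading can be written as a matrix-vector product (a sketch in plain Python; the decay factor and chain graph are invented, and the same shape is what would map onto SIMD hardware such as SPUs):

```python
# One propagation step as adjacency-matrix * influence-vector.
def matvec(matrix, vector):
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

decay = 0.5
adjacency = [[0, 1, 0],   # chain graph A - B - C
             [1, 0, 1],
             [0, 1, 0]]
A = [[decay * x for x in row] for row in adjacency]

influence = [0.0, 0.0, 8.0]      # influence currently sitting at C
spread = matvec(A, influence)    # one hop of propagation
# → [0.0, 4.0, 0.0]
```

Repeated application spreads influence further per hop, and each step is exactly the kind of dense arithmetic vector units execute efficiently.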
Experiments
• The majority of the work to date has been conceptual
• An initial prototype has been developed, based on a number of assumptions
• Work is currently ongoing to make sure all of these assumptions are valid.
• Experiments are being run on small problem instances, as the translation from the SAS+ encoding to the architecture’s internal representation is currently done by hand.
Results
• Early tests have shown some promising results
‣ A noticeable decrease in the time taken to make decisions over a purely deliberative method.
- 1-2ms for a 12-decision-point execution, with 6ms of processing in advance.
‣ Increased robustness to changes detected in the domain
- Rapid discovery of alternative paths through the space
• Much more rigorous testing is required.
Future Work
• A lot of work remains to develop this into a polished technique that will revolutionise AI.
‣ Further testing on a wider range of problems
‣ Full testing of all assumptions made and techniques chosen without substantiation
‣ Development into a working system, rather than a proof-of-concept prototype
‣ Proof of the extensibility of the system by the addition of further Stacks representing other sources of information.