
- 1. Introduction to the HAHRL-RTS Platform<br />Omar Enayet<br />Amr Saqr<br />AbdelRahman Al-Ogail<br />Ahmed Atta<br />
- 2. Agenda<br />Complexity of RTS Games.<br />Analysis of the Strategy Game.<br />The HAHRL-RTS Platform.<br />The Hierarchy.<br />Heuristic Algorithms.<br />Function Approximation.<br />References.<br />
- 3. Complexity of RTS Games<br />There is no doubt that strategy games are complex domains:<br />A gigantic set of allowed actions (almost infinite)<br />A gigantic set of game states (almost infinite)<br />Imperfect information<br />Nondeterministic behavior<br />However, real-time planning and reactions are still required!<br />
- 4. Complexity of RTS Games<br />No model of the game, i.e., we don&#39;t know exactly how we can go from one state to another.<br />An infinite number of states and actions.<br />Result: learning with raw reinforcement learning is infeasible.<br />
- 5. Solution<br />Solution : <br />Approximation of state space, action space, and value functions.<br />Hierarchical Reinforcement Learning<br />Applying heuristics<br />Others <br />
- 6. Analysis of the Strategy Game<br />
- 7. Primitive Actions<br />Primary Primitive Actions<br />Move a unit<br />Train/Upgrade a unit<br />Gather a resource<br />Make a unit attack<br />Make a unit defend<br />Build a building<br />Repair a building<br />NB: Upgrading units or buildings is not available in BosWars, but is found in most RTS games.<br />
- 8. Winning a Game<br />A player wins by simultaneously performing two types of actions: actions that strengthen him and actions that weaken his enemy (Fig 1).<br />
- 9. Winning a Game<br />
- 10. 6 Main Sub-Strategies<br />When a human plays a strategy game, he doesn’t learn everything at the same time. He learns each of the following 6 independent sub-strategies separately:<br />
- 11. 1- Train What Units?<br />Train/Build/Upgrade attacking units: what units does he need to train?<br />Will he depend on fast, cheap units to perform successive quick attacks, or on powerful, expensive, slow units to perform one or two brutal attacks to finish his enemy? Or will it be a combination of the two, which is often the better choice?<br />Does his enemy have a weak point against a certain unit? Or does his enemy have units which can infiltrate his defenses, so he must train their anti-units?<br />Does he prefer to spend his money on expensive upgrades, or on larger numbers of non-upgraded units?<br />NB: I treat attacking buildings as static attacking units.<br />
- 12. 2- How to Defend?<br />Defend: how will he use his current units to defend?<br />Will he concentrate all his units in one tightly packed force, or will he stretch his units along his borders? Or a mix of the two approaches?<br />Will he keep the defending units (which may be attacking buildings) around his own buildings, or will he have them stand guard far from the base to stop the enemy early? Or a mix of the two approaches?<br />If he detects an attack on his radar, will he order his units to attack at once, or will he wait for the opponent to come to his base and be crushed? Or a mix of the two approaches?<br />How will he defend unarmed units? Will he place armed units near them for protection, or will he prefer to use the armed units for something else? If an unarmed unit is under attack, how will he react?<br />What are his reactions to different events while defending?<br />
- 13. 3- How to Attack?<br />Attack: how will he use his current units to attack?<br />Will he attack the important buildings first? Or will he prefer to crush all the defensive buildings and units first? Or a mix of the two approaches?<br />Will he divide his attacking force into separate small forces to attack from different places, or will he attack with one big solid force? Or a mix of the two approaches?<br />What are his reactions to different events while attacking?<br />
- 14. 4- How to Gather Resources?<br />Gather resources: how will he gather resources?<br />Will he train many gatherers to achieve a high gathering rate? Or will he train only a limited number, because the extra gatherers would be a waste of money and he wants to rush (attack early) at the beginning of the game and needs that money? Or a mix of the two approaches?<br />Will he gather the far resources first, because the near resources are more guaranteed? Or will he be greedy and acquire the nearer resources first? Or a mix of the two approaches?<br />
- 15. 5- How to Construct Buildings?<br />Construct buildings: how does he place his buildings? Will he pack them close together in order to defend them easily? Or will he leave large spaces between them to make it harder for the opponent to destroy them? Or a mix of the two approaches?<br />
- 16. 6- How to Repair?<br />Repair: how will he handle repairing? Although it is a minor concern, different approaches are used. Will he place a repairing unit near every building in case of an attack, or will he just order the nearest one to repair the building being attacked? Or a mix of the two approaches?<br />
- 17. Heuristically accelerated Hierarchical RL in RTS Games<br />
- 18. The Hierarchy<br />Since the 6 sub-strategies hardly depend on each other (think about it and you will find them nearly independent), I will divide the AI system into a hierarchy as shown in figure 1. Each child node is itself a semi-Markov decision process (SMDP), to which heuristically accelerated reinforcement learning techniques will be applied. Each child node will later be divided into further sub-nodes of SMDPs.<br />
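The hierarchy described above can be sketched in code. This is purely illustrative: the class and method names below are assumptions, not the platform's actual API, and each child node would in practice run its own heuristically accelerated RL rather than the stub shown.

```python
# Illustrative sketch of the proposed hierarchy: a root node dispatches to
# six nearly independent sub-strategy learners, each treated as its own SMDP.
class SubStrategy:
    """Stub for one child SMDP node (e.g. 'attack' or 'gather')."""
    def __init__(self, name):
        self.name = name

    def act(self, state):
        # A real node would select an action via (heuristically
        # accelerated) reinforcement learning over its own SMDP.
        return "%s-action" % self.name

SUB_STRATEGIES = ["train", "defend", "attack", "gather", "build", "repair"]

class RootController:
    def __init__(self):
        self.children = {n: SubStrategy(n) for n in SUB_STRATEGIES}

    def step(self, state):
        # Because the six learners are nearly independent, each one
        # chooses its action every tick without consulting the others.
        return {n: c.act(state) for n, c in self.children.items()}

print(sorted(RootController().step({}).keys()))
```

Splitting the controller this way is exactly what makes the later complexity argument work: each node learns over its own, much smaller state-action space.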
- 19. Heuristic Algorithms<br />A heuristic is an algorithm that is able to produce an acceptable solution to a problem in many practical scenarios, in the fashion of a general heuristic, but for which there is no formal proof of its correctness. Alternatively, it may be correct, but may not be proven to produce an optimal solution or to use reasonable resources.<br />
- 20. Heuristic Algorithms (Cont&#39;d)<br />Firstly: splitting the learning into the six sub-strategies is a heuristic.<br />Secondly: using case-based reasoning when choosing actions is a heuristic.<br />Why heuristics?<br />Because they will accelerate the learning dramatically.<br />They will decrease the non-determinism of the AI, so testing is easier.<br />Why not heuristics? The programming effort increases.<br />
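One common way to inject a heuristic into RL, following the heuristically accelerated Q-learning idea of Bianchi et al. (cited in the references), is to bias action selection toward the heuristic without altering the learned Q-values. A minimal sketch, with toy tables and names that are my own illustration rather than the platform's code:

```python
# Heuristically accelerated action selection (after Bianchi et al.):
# the heuristic H(s, a) biases the greedy choice, but only Q is learned,
# so a bad heuristic slows learning without corrupting it.
def select_action(Q, H, state, actions, xi=1.0):
    """Pick the action maximizing Q(s, a) + xi * H(s, a)."""
    return max(actions,
               key=lambda a: Q.get((state, a), 0.0) + xi * H.get((state, a), 0.0))

# Toy example: the heuristic breaks a tie between equally valued actions.
Q = {("s0", "attack"): 0.5, ("s0", "defend"): 0.5}
H = {("s0", "defend"): 0.2}   # e.g. a retrieved case suggests defending here
print(select_action(Q, H, "s0", ["attack", "defend"]))  # -> defend
```

Here the case-based reasoning component would be the source of H: a retrieved similar game situation votes for the action that worked before.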
- 21. Feature-Based Function Approximation<br />The Problem: <br />The state-action space is infinite. <br />The Goal: <br />We want to approximate the state-action space while keeping reinforcement learning efficient. <br />
- 22. The Approach<br />If the actions are infinite, make them discrete in any appropriate way. For example, in the resource-gathering problem, the action is joining N more gatherers to gather a resource, where N could be any number; we convert it to discrete buckets such as [0,1], [2,4], [5,8], [9,15], [16,22], [23,35] only. Notice that it is rare to need to join more than 35 gatherers to the already-working gatherers on a resource. <br />
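The bucketing above can be sketched directly. This assumes non-overlapping buckets (I read the last bucket as [23,35]) and treats anything above 35 as the top bucket; the function name is illustrative.

```python
# Hypothetical sketch: map the continuous action "add N more gatherers"
# onto the discrete buckets listed on the slide.
BUCKETS = [(0, 1), (2, 4), (5, 8), (9, 15), (16, 22), (23, 35)]

def discretize_gatherers(n):
    """Return the index of the bucket containing n.

    Values above the last bucket are clamped to it, since joining more
    than 35 gatherers is a rare case anyway.
    """
    for i, (lo, hi) in enumerate(BUCKETS):
        if lo <= n <= hi:
            return i
    return len(BUCKETS) - 1

print(discretize_gatherers(3))   # falls in [2,4]
print(discretize_gatherers(50))  # clamped to [23,35]
```

The learner then chooses among 6 discrete actions instead of an unbounded integer.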
- 23. The Approach (Cont&#39;d)<br />The states won&#39;t be represented explicitly, but by their features. For example, in the resource-gathering problem, the states are infinite, depending on the combinations of the following features: number of gatherers, relative distance between each gatherer and the resource, available resources, wanted resources, etc., which is a huge number. Instead, we will use the features themselves, as you will see. <br />
- 24. The Approach (Cont’d)<br />
- 25. The Approach (Cont’d)<br />
- 26. Result of Approximation<br />So the complexity won&#39;t depend on the number of states times the number of actions; instead, it will depend on the number of features times the number of actions. In the resource-gathering problem, if we have 6 distinct actions and we approximated the infinite number of states down to at least 100, we would learn at least 600 Q-values; but with this approach, given 5 features and 6 distinct actions, we learn only 5*6 = 30 thetas. <br />We approximated only the state space, not the action space: infinite states reduced to a definite number of features. A problem still remains if the action space is large. <br />
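The 30-parameter claim corresponds to a linear approximator with one weight vector per discrete action, Q(s, a) = theta_a · features(s). A minimal sketch under that assumption (the update rule shown is standard gradient TD(0), not necessarily the platform's exact algorithm):

```python
# Feature-based Q approximation: one theta vector per discrete action,
# Q(s, a) = dot(theta[a], features(s)). With 5 features and 6 actions
# only 6 * 5 = 30 parameters are learned, as the slide computes.
NUM_FEATURES = 5
ACTIONS = ["a%d" % i for i in range(6)]
theta = {a: [0.0] * NUM_FEATURES for a in ACTIONS}

def q_value(features, action):
    return sum(t * f for t, f in zip(theta[action], features))

def td_update(features, action, reward, next_features, alpha=0.1, gamma=0.9):
    """One gradient-descent TD(0) step on the linear approximator."""
    target = reward + gamma * max(q_value(next_features, a) for a in ACTIONS)
    delta = target - q_value(features, action)
    theta[action] = [t + alpha * delta * f
                     for t, f in zip(theta[action], features)]

print(sum(len(v) for v in theta.values()))  # 30 parameters in total
```

The features (gatherer count, distances, available resources, ...) replace the explicit state, so the parameter count scales with features*actions rather than states*actions.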
- 27. References<br />Andrew G. Barto and Sridhar Mahadevan, 2003, Recent Advances in Hierarchical Reinforcement Learning.<br />Marina Irodova and Robert H. Sloan, 2005, Reinforcement Learning and Function Approximation.<br />Reinaldo A. C. Bianchi, Raquel Ros, and Ramón López de Mántaras, 2009, Improving Reinforcement Learning by Using Case-Based Heuristics.<br />Richard S. Sutton and Andrew G. Barto, 1998, Reinforcement Learning: An Introduction.<br />Wikipedia.<br />
