Parallel searching


CLASS ASSIGNMENT-01
Parallel Searching Algorithms

INTRODUCTION:

Parallel Search, also known as Multithreaded Search or SMP Search, is a way to increase search speed by using additional processors. The topic has been gaining popularity recently as multiprocessor computers become widely available.

A parallel algorithm is an algorithm that can be executed a piece at a time on many different processing devices, with the pieces put back together at the end to obtain the correct result.

The cost or complexity of a serial algorithm is estimated in terms of the space (memory) and time (processor cycles) it takes. Parallel algorithms must optimize one more resource: the communication between processors. There are two ways parallel processors communicate, shared memory or message passing.

This document gives a brief summary of four types of SMP algorithm, classified by their scalability (the trend in search speed as the number of processors becomes large) and their speedup (the change in time to complete a search). Typically, programmers use scaling to mean the change in nodes-per-second (NPS) rate, and speedup to mean the change in time to depth. The algorithms are described briefly below.
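As a minimal sketch of this split-the-work-then-combine idea (not from the original document; the names `find_in_chunk` and `parallel_search` are illustrative), a data-parallel search can hand each worker one slice of the data and merge the partial results at the end:

```python
# Minimal sketch: search a list for a target by splitting it into chunks,
# searching each chunk on its own worker, then combining the results.
# A thread pool stands in for "many different processing devices" here;
# a real SMP chess search would share one game tree among actual cores.
from concurrent.futures import ThreadPoolExecutor

def find_in_chunk(data, target, offset):
    """Return the global index of target within this chunk, or -1."""
    for i, value in enumerate(data):
        if value == target:
            return offset + i
    return -1

def parallel_search(data, target, workers=4):
    if not data:
        return -1
    chunk = (len(data) + workers - 1) // workers
    pieces = [(data[i:i + chunk], target, i)
              for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda args: find_in_chunk(*args), pieces)
    hits = [r for r in results if r != -1]
    return min(hits) if hits else -1   # combine the partial results

print(parallel_search(list(range(100)), 73))   # -> 73
```

Note that the serial cost model carries over: each worker spends time searching its piece, but the final `min` step is exactly the extra communication cost the text mentions.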
ALPHA – BETA SEARCH:

The Alpha-Beta algorithm (Alpha-Beta Pruning, Alpha-Beta Heuristic) is a significant enhancement to the minimax search algorithm that eliminates the need to search large portions of the game tree by applying a branch-and-bound technique. Remarkably, it does this without any risk of overlooking a better move. If one has already found a quite good move and is searching for alternatives, one refutation is enough to avoid them; there is no need to look for even stronger refutations.

The algorithm maintains two values, alpha and beta. They represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of, respectively.

IMPLEMENTATION:

int alphaBetaMax( int alpha, int beta, int depthleft ) {
   if ( depthleft == 0 ) return evaluate();
   for ( all moves ) {
      score = alphaBetaMin( alpha, beta, depthleft - 1 );
      if ( score >= beta )
         return beta;    // fail-hard beta cutoff
      if ( score > alpha )
         alpha = score;  // alpha acts like max in MiniMax
   }
   return alpha;
}

int alphaBetaMin( int alpha, int beta, int depthleft ) {
   if ( depthleft == 0 ) return -evaluate();
   for ( all moves ) {
      score = alphaBetaMax( alpha, beta, depthleft - 1 );
      if ( score <= alpha )
         return alpha;   // fail-hard alpha cutoff
      if ( score < beta )
         beta = score;   // beta acts like min in MiniMax
   }
   return beta;
}

JAMBOREE SEARCH:

Jamboree Search was introduced by Bradley Kuszmaul in his 1994 thesis, Synchronized MIMD Computing. The algorithm is a parallelized version of the Scout search algorithm. The idea is that the testing of every child other than the first is done in parallel, and any children that fail their test are then valued sequentially.

Jamboree was used in the massively parallel chess programs StarTech and Socrates. It sequentializes full-window searches for values because, while its authors are willing to take the chance that a null-window search will be squandered work, they are not willing to take the chance that a full-window search (which does not prune very much) will be squandered work.

IMPLEMENTATION:

int jamboree( CNode n, int α, int β ) {
   if ( n is leaf ) return static_eval( n );
   c[ ] = the children of n;
   b = -jamboree( c[0], -β, -α );
   if ( b >= β ) return b;
   if ( b > α ) α = b;
   In Parallel: for ( i = 1; i < |c[ ]|; i++ ) {
      s = -jamboree( c[i], -α - 1, -α );
      if ( s > b ) b = s;
      if ( s >= β ) abort_and_return s;
      if ( s > α ) {
         s = -jamboree( c[i], -β, -α );
         if ( s >= β ) abort_and_return s;
         if ( s > α ) α = s;
         if ( s > b ) b = s;
      }
   }
   return b;
}

DEPTH – FIRST SEARCH:

We start the graph traversal at an arbitrary vertex and go down a particular branch until we reach a dead end. Then we back up and again go as deep as possible. In this way we visit all vertices and all edges.

The search is similar to exploring a maze of hallways (edges) and rooms (vertices) with a string and paint. We fix the string in the starting room and mark that room with the paint as visited. We then go down an incident hallway into the next room, mark it, and continue on, always marking rooms as visited with the paint. When we reach a dead end or a room we have already visited, we follow the string back to a room that has a hallway we have not yet gone through.

This graph traversal is very similar to a tree traversal, either preorder or postorder; in fact, if the graph is a tree, the traversal is the same. The algorithm is naturally recursive, just like a tree traversal, and is presented here:

IMPLEMENTATION:

Algorithm DFS( graph G, Vertex v )
   // Recursive algorithm
   for all edges e in G.incidentEdges( v ) do
      if edge e is unexplored then
         w = G.opposite( v, e )
         if vertex w is unexplored then
            label e as discovery edge
            recursively call DFS( G, w )
         else
            label e as a back edge

PVS SEARCH:

The best-known early attempt at searching such trees in parallel was the Principal Variation Splitting (PVS) algorithm. It was both simple to understand and easy to implement.

When starting an N-ply search, one processor generates the moves at the root position, makes the first move (leading to what is often referred to as the left-most descendant position), then generates the moves at ply 2, again makes the first move, and continues until reaching ply N.

At this point the processor pool searches all of the moves at ply N in parallel, and the best value is backed up to ply N-1. Now that a lower bound for ply N-1 is known, the rest of the moves at N-1 are searched in parallel, and the best value is again backed up to N-2. This continues until the first root move has been searched and its value is known. The remaining root moves are then searched in parallel until none are left. The next iteration is then started and the process repeats for depth N+1.

Performance analysis with PVS produced the speedups given in Table 1.

   +--------------+-----+-----+-----+-----+-----+
   | # processors |  1  |  2  |  4  |  8  | 16  |
   +--------------+-----+-----+-----+-----+-----+
   | speedup      | 1.0 | 1.8 | 3.0 | 4.1 | 4.6 |
   +--------------+-----+-----+-----+-----+-----+
            Table 1: PVS performance results
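Table 1 can also be read as a parallel-efficiency figure, where efficiency is speedup divided by processor count. A small sketch (using only the measured values from the table above) makes the diminishing returns explicit:

```python
# Parallel efficiency = speedup / processor count, computed from the
# PVS measurements in Table 1. Efficiency falling steadily as more
# processors are added is what poor scalability looks like in numbers.
speedup = {1: 1.0, 2: 1.8, 4: 3.0, 8: 4.1, 16: 4.6}

efficiency = {p: s / p for p, s in speedup.items()}
for p in sorted(efficiency):
    print(f"{p:2d} processors: speedup {speedup[p]:.1f}, "
          f"efficiency {efficiency[p]:.0%}")
```

At 16 processors only about 29% of the machine's raw capacity is translated into search speed, which motivates the drawbacks discussed next.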
DRAWBACKS:

First, all of the processors work together at a single node, searching descendant positions in parallel. If the number of possible moves is small, or the number of processors is large, some processors have nothing to do.

Second, the branches from a given position do not produce trees of equal size: some branches grow into complicated positions with many checks and search extensions that make the tree very large, while others grow into simple positions that are searched quickly. This leads to a load-balancing problem, where one processor begins searching a very large tree while the others finish the easy moves and then have to wait for that remaining processor to slowly traverse its tree.

Third, even with a reasonable number of processors the speedup can look very bad if, most of the time, many of the processors are waiting on one last node to be completed before they can back up to ply N-1 and start to work there.
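The load-balancing drawback can be quantified with a toy model (the subtree costs below are invented for illustration, not measured): with one processor per root move, parallel time is bounded by the largest subtree, so a single oversized branch caps the speedup no matter how many processors are idle.

```python
# Toy model of the load-imbalance drawback: each root move leads to a
# subtree with some search cost. Sequential time is the sum of the costs,
# but parallel time (one subtree per processor) is dominated by the
# largest subtree, so one complicated branch caps the achievable speedup.
subtree_costs = [100, 5, 4, 3, 2, 1]   # hypothetical node counts per root move

sequential_time = sum(subtree_costs)   # 115 units of work in total
parallel_time = max(subtree_costs)     # 100: everyone waits on the big tree
print(f"speedup with 6 processors: {sequential_time / parallel_time:.2f}")
```

Here six processors yield a speedup of only 1.15, mirroring the behaviour described above: five processors finish their easy moves quickly and then sit idle.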
