Search Algorithms for Discrete Optimization Problems Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar



  • The parallel run time will be at least n × t_access.

  • Upper bound on the speedup is (t_access + t_exp)/t_access.
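The bound follows from the accesses to the single shared open list being serialized: n expansions cost at least n × t_access in any schedule, while the serial time is n × (t_access + t_exp). A quick check with illustrative timings (the numbers below are assumed, not from the text):

```python
# Assumed, illustrative per-node costs for a single shared open list.
t_access = 1.0   # time to lock and access the open list
t_exp = 9.0      # time to expand one node

n = 1000  # number of node expansions (illustrative)

serial_time = n * (t_access + t_exp)
parallel_time_lower_bound = n * t_access  # accesses serialize

# Speedup can therefore never exceed this ratio, regardless of p.
speedup_bound = serial_time / parallel_time_lower_bound
print(speedup_bound)  # (t_access + t_exp) / t_access = 10.0
```

With these numbers, even an unbounded number of processors yields at most a 10x speedup, which is why the multiple-open-list formulations below are needed.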



  • Parallel Best-First Search

    • Avoid contention by having multiple open lists.

    • Initially, the search space is statically divided across these open lists.

    • Processors concurrently operate on these open lists.

    • Since the heuristic values of nodes in these lists may diverge significantly, we must periodically balance the quality of nodes in each list.

    • A number of balancing strategies based on ring, blackboard, or random communications are possible.



    Parallel Best-First Search

    • A message-passing implementation of parallel best-first search using the ring communication strategy.
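The ring strategy can be sketched in a single process as follows: each processor keeps a local open list (a min-heap keyed on heuristic value), and in each balancing round sends its k best nodes to its right neighbor. The function name, the batch size k, and the node representation `(heuristic_value, state)` are assumptions for illustration, not the book's notation:

```python
import heapq
import random

def ring_balance(open_lists, k=2):
    """One balancing round: each processor i sends its k best nodes to
    its right neighbor (i + 1) mod p. Nodes are (heuristic_value, state)
    tuples; lower values are better. Simulated in one process."""
    p = len(open_lists)
    # Pop all outgoing batches first so a node moves exactly one hop.
    outgoing = []
    for lst in open_lists:
        outgoing.append([heapq.heappop(lst) for _ in range(min(k, len(lst)))])
    for i, batch in enumerate(outgoing):
        for node in batch:
            heapq.heappush(open_lists[(i + 1) % p], node)

# Statically divide an initial frontier of 16 nodes across 4 open lists.
random.seed(0)
frontier = [(random.randint(0, 100), f"s{i}") for i in range(16)]
open_lists = [sorted(frontier[i::4]) for i in range(4)]
for lst in open_lists:
    heapq.heapify(lst)

ring_balance(open_lists)  # good nodes drift around the ring over rounds
```

In a real message-passing implementation each list lives on its own processor and the batches become point-to-point messages; repeated rounds spread high-quality nodes around the ring.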



    Parallel Best-First Search

    • An implementation of parallel best-first search using the blackboard communication strategy.
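A minimal sketch of the blackboard idea: each processor compares its best node against the best node on the shared blackboard, deposits nodes when its own are much better, and picks up nodes when its own are much worse. The `tolerance` threshold and function names are assumed tuning details, not from the text:

```python
import heapq

def blackboard_exchange(local, blackboard, tolerance=5):
    """Compare local best against blackboard best (lower heuristic value
    is better). Deposit the local best if it is better than the
    blackboard best by more than `tolerance`; pick up the blackboard
    best if the local best is worse by more than `tolerance`."""
    if not blackboard:
        if local:
            heapq.heappush(blackboard, heapq.heappop(local))
        return
    if not local:
        heapq.heappush(local, heapq.heappop(blackboard))
        return
    if local[0][0] < blackboard[0][0] - tolerance:
        heapq.heappush(blackboard, heapq.heappop(local))   # deposit
    elif local[0][0] > blackboard[0][0] + tolerance:
        heapq.heappush(local, heapq.heappop(blackboard))   # pick up

local = [(3, "a"), (40, "b")]
blackboard = [(20, "c")]
heapq.heapify(local)
heapq.heapify(blackboard)

blackboard_exchange(local, blackboard)
print(blackboard[0])  # the much-better local node was deposited: (3, 'a')
```

On a shared-memory machine the blackboard would be a lock-protected heap; the tolerance band keeps processors from exchanging nodes on every expansion.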



    Parallel Best-First Graph Search

    • Graph search involves a closed list, where the major operation is a lookup (on a key corresponding to the state).

    • The classic data structure is a hash.

    • Hashing can be parallelized by using two functions: the first hashes each node to a processor, and the second hashes within that processor's local table.

    • This strategy can be combined with the idea of multiple open lists.

    • If a node does not exist in a closed list, it is inserted into the open list at the target of the first hash function.

    • In addition to facilitating lookup, randomization also equalizes quality of nodes in various open lists.
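The two-level hashing scheme can be sketched as follows (simulated in one process; the processor count, table size, and choice of SHA-256 as the hash are assumptions for illustration):

```python
import hashlib

P = 8           # number of processors (assumed)
BUCKETS = 16    # buckets per local closed-list table (assumed)

def hash_to_processor(state):
    """First hash: map a state to the processor that owns it."""
    h = hashlib.sha256(state.encode()).digest()
    return h[0] % P

def hash_within_processor(state):
    """Second hash: index into the owner's local closed-list table."""
    h = hashlib.sha256(state.encode()).digest()
    return int.from_bytes(h[1:5], "big") % BUCKETS

# One closed-list table per processor, each a small bucketed hash table.
closed = [[set() for _ in range(BUCKETS)] for _ in range(P)]

def lookup_or_insert(state):
    """Return True if the state was already closed; otherwise insert it
    into the owner processor's table and return False. In a real
    message-passing code the lookup is a message to the owner."""
    owner = hash_to_processor(state)
    bucket = hash_within_processor(state)
    if state in closed[owner][bucket]:
        return True
    closed[owner][bucket].add(state)
    return False

print(lookup_or_insert("state-42"))  # False: first visit
print(lookup_or_insert("state-42"))  # True: duplicate detected
```

Because the first hash scatters states uniformly at random across processors, it both balances lookup traffic and, combined with per-processor open lists, tends to equalize node quality across the lists.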



    Speedup Anomalies in Parallel Search

    • Since the search space explored by processors is determined dynamically at runtime, the actual work might vary significantly.

    • Executions yielding speedups greater than p by using p processors are referred to as acceleration anomalies. Speedups of less than p using p processors are called deceleration anomalies.

    • Speedup anomalies also manifest themselves in best-first search algorithms.

    • If the heuristic function is good, the work done in parallel best-first search is typically more than that in its serial counterpart.



    Speedup Anomalies in Parallel Search

    • The difference in the number of nodes searched by sequential and parallel formulations of DFS: in this example, parallel DFS reaches a goal node after searching fewer nodes than sequential DFS.
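An acceleration anomaly of this kind can be reproduced on a toy tree. The tree below is invented for illustration: the goal sits near the top of the right subtree, so sequential DFS exhausts the whole left subtree first, while two processors that statically split the root's subtrees find the goal almost immediately:

```python
# Toy search tree (node -> children); "G" is the goal. Invented example.
tree = {
    "root": ["L", "R"],
    "L": ["L0", "L1"], "L0": ["L00", "L01"], "L1": ["L10", "L11"],
    "L00": [], "L01": [], "L10": [], "L11": [],
    "R": ["G", "R1"], "G": [], "R1": [],
}

def dfs_count(start, goal):
    """Nodes expanded by sequential DFS before reaching the goal."""
    stack, count = [start], 0
    while stack:
        node = stack.pop()
        count += 1
        if node == goal:
            return count
        stack.extend(reversed(tree[node]))  # visit children left-to-right
    return count

seq = dfs_count("root", "G")

# Two processors search the two subtrees in lockstep, stopping as soon
# as either finds the goal; count total nodes expanded by both.
stacks = [["L"], ["R"]]
par = 1  # the root expansion
found = False
while not found and any(stacks):
    for s in stacks:
        if not s:
            continue
        node = s.pop()
        par += 1
        if node == "G":
            found = True
            break
        s.extend(reversed(tree[node]))

print(seq, par)  # → 10 5
```

Here the two-processor search expands half as many nodes in total as the sequential search, so its speedup exceeds what the raw processor count alone would predict; flipping the goal into the left subtree would produce the opposite, a deceleration anomaly.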



