Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre

Data-intensive computing has positioned itself as a valuable programming paradigm for efficiently approaching problems that require processing very large volumes of data. This paper presents a pilot study of how to apply the data-intensive computing paradigm to evolutionary computation algorithms. Two representative cases (selectorecombinative genetic algorithms and estimation of distribution algorithms) are presented, analyzed, and discussed. The study shows that equivalent data-intensive computing evolutionary computation algorithms can be easily developed, providing robust and scalable algorithms for the multicore-computing era. Experimental results show how such algorithms scale with the number of available cores without further modification.


  1. Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre
     Xavier Llorà
     National Center for Supercomputing Applications
     University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
     xllora@ncsa.illinois.edu
     http://www.ncsa.illinois.edu/~xllora
  2. Outline
     • Data-intensive computing and HPC?
     • Is this related at all to evolutionary computation?
     • Data-intensive computing with Meandre
     • GAs and competent GAs
     • Data-intensive computing for GAs
  3. 2-Minute HPC History
     • The eighties and early nineties picture
       • Commodity hardware was rare, slow, and costly
       • Supercomputers were extremely expensive
       • Most of them hand crafted, with only a few units built
       • Two competing families
         • CISC (e.g., Cray C90 with up to 16 processors)
         • RISC (e.g., Connection Machine CM-5 with up to 4,096 processors)
     • In the late nineties commodity hardware hit the mainstream
       • It started becoming popular, cheaper, and faster
       • Economy of scale
       • Massively parallel computers built from commodity components became a viable option
  4. Two Visions
     • C90-like supercomputers were like a comfy pair of trainers
       • Oriented to scientific computing
       • Complex vector-oriented supercomputers
       • Shared memory (lots of it)
       • Multiprocessing enabled via interconnection networks
       • Single system image
     • CM-5-like computers did not get massive traction, but a bit
       • General purpose (as long as you can chop the work into simple units)
       • Lots of simple processors available
       • Distributed memory pushed new programming models (message passing)
       • Complex interconnection networks
     • NCSA hosts shared-memory, distributed-memory, and GPGPU-based systems
  5. Miniaturization Building Bridges
     • Multicores and GPGPUs are reviving the C90 flavor
     • The CM-5 flavor now survives as distributed clusters of not-so-simple units
  6. Control Models of Parallelization in EC
     [Diagram: the three classic control models: independent parallel runs (Run 1 through Run 11), a master-slave model in which a master distributes individual evaluation across slaves, and migration between subpopulations]
  7. But Data Is Also Part of the Equation
     • Google and Yahoo! revived an old route
     • "Data-intensive computing" usually refers to:
       • Infrastructure
       • Programming techniques/paradigms
     • Google made it mainstream with their MapReduce model
     • Yahoo! provides an open source implementation
       • Hadoop (MapReduce)
       • HDFS (Hadoop distributed filesystem)
       • Stores petabytes reliably on commodity hardware (fault tolerant)
     • Programming model
       • Map: equivalent to the map operation in functional programming
       • Reduce: the reduction phase after the maps are computed
  8. A Simple Example
     • Computing a sum of squares as a map followed by a reduce:

       $\sum_{i=0}^{n} x_i^2 \;\rightarrow\; \mathrm{reduce}(\mathrm{map}(x, \mathrm{sqr}), \mathrm{sum})$

     [Diagram: each input x flows through its own map component computing x^2; the squared values are then combined by a tree of reduce components applying sum]
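     The same pipeline can be written directly with Python's built-in map and functools.reduce; a minimal sketch of the slide's sum-of-squares example, with plain Python standing in for a MapReduce runtime:

         from functools import reduce
         from operator import add

         def sqr(x):
             # Map stage: each element is squared independently, so the
             # calls are embarrassingly parallel and a runtime is free to
             # schedule them across cores.
             return x * x

         def sum_of_squares(xs):
             # reduce(map(x, sqr), sum): the reduction combines partial
             # results with an associative operator, which is what lets a
             # MapReduce runtime arrange the reduces as a tree.
             return reduce(add, map(sqr, xs), 0)

         print(sum_of_squares([1, 2, 3, 4]))  # 30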
  9. Is This Related to EC?
     • How can we painlessly benefit from the current core race?
       • NCSA's Blue Waters is estimated to top 100K cores
     • Yes, on several facets
       • Large optimization problems need to deal with large population sizes (Sastry, Goldberg & Llorà, 2007)
       • Large-scale data mining using genetics-based machine learning (Llorà et al., 2007)
       • Competent GA model building is extremely costly and data rich (Pelikan et al., 2001)
     • The goal?
       • Rethink parallelization as data-flow processes
       • Show that traditional models can be mapped to data-intensive computing models
       • Foster your curiosity
  10. Data-Intensive Computing with Meandre
  11. The Meandre Infrastructure Challenges
      • NCSA infrastructure effort on data-intensive computing
      • Transparency
        • From a single laptop to an HPC cluster
        • Not bound to a particular computation fabric
        • Allow heterogeneous development
      • Intuitive programming paradigm
        • Modular components assembled into flows
      • Foster collaboration and sharing
        • Open source
        • Service-Oriented Architecture (SOA)
  12. Basic Infrastructure Philosophy
      • Dataflow execution paradigm
      • Semantic-web driven
      • Web oriented
      • Facilitate distributed computing
      • Support publishing services
      • Promote reuse, sharing, and collaboration
      • More information at http://seasr.org/meandre
  13. Data Flow Execution in Meandre
      • A simple example: c ← a + b
      • In a traditional control-driven language:

          a = 1
          b = 2
          c = a + b

      • Execution follows the sequence of instructions, one step at a time
      • a + b + c + d requires 3 steps, yet could easily be parallelized
  14. Data Flow Execution in Meandre
      • Data flow execution is driven by data
      • The previous example may have 2 possible data flow versions
        [Diagram: a stateless data flow, where value(a) and value(b) arrive together and a "+" component emits value(c); and a state-based data flow, where the "+" component must buffer whichever value arrives first until its partner shows up]
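      A minimal sketch of the state-based variant in plain Python (the class and port names are illustrative, not the Meandre API): the component holds partial state and fires only once both operands have arrived.

          class AddComponent:
              """A '+' component with state: fires when both inputs arrive."""

              def __init__(self):
                  self.inputs = {}

              def push(self, port, value):
                  # Buffer the incoming value; fire once ports "a" and "b"
                  # are both populated.
                  self.inputs[port] = value
                  if {"a", "b"} <= self.inputs.keys():
                      return self.inputs["a"] + self.inputs["b"]
                  return None  # still waiting for the other operand

          adder = AddComponent()
          print(adder.push("a", 1))  # None: b has not arrived yet
          print(adder.push("b", 2))  # 3: both inputs present, it fires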
  15. The Basic Building Blocks: Components
      • A component packages two pieces: an RDF descriptor of the component's behavior and the component implementation
  16. Go with the Flow: Creating Complex Tasks
      • A directed multigraph of components creates a flow
        [Diagram: two Push Text components feed a Concatenate Text component; its output passes through To Upper Case Text and finally into Print Text]
  17. Automatic Parallelization: Speed and Robustness
      • The Meandre ZigZag language allows automatic parallelization
        [Diagram: the same flow with the To Upper Case Text component replicated into three parallel instances between Concatenate Text and Print Text]
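      As a rough stand-in for what such replication buys you (this is plain Python, not ZigZag), a side-effect-free component like To Upper Case Text can be fanned out across worker processes, since each piece of text is processed independently:

          from multiprocessing import Pool

          def to_upper_case(text):
              # The replicated component: a pure function of its input, so
              # instances can run concurrently without any coordination.
              return text.upper()

          if __name__ == "__main__":
              stream = ["hello", "meandre", "data", "flow"]
              # Three worker instances consume the input stream in
              # parallel, mirroring the three replicas in the diagram.
              with Pool(processes=3) as pool:
                  print(pool.map(to_upper_case, stream))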
  18. GAs and Competent GAs
  19. Selectorecombinative GAs
      1. Initialize the population with random individuals
      2. Evaluate the fitness value of the individuals
      3. Select good solutions using s-wise tournament selection without replacement (Goldberg, Korb & Deb, 1989)
      4. Create new individuals by recombining the selected population using uniform crossover (Sywerda, 1989)
      5. Evaluate the fitness value of all offspring
      6. Repeat steps 3-5 until convergence criteria are met
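      A minimal sketch of this loop in Python, assuming binary strings and using OneMax as a stand-in fitness function (population size, tournament size, and generation count are illustrative):

          import random

          def run_sga(n_bits=64, pop_size=200, s=2, generations=50):
              # Step 1: initialize the population with random individuals.
              pop = [[random.randint(0, 1) for _ in range(n_bits)]
                     for _ in range(pop_size)]

              def fitness(ind):
                  return sum(ind)  # OneMax stands in for real evaluation

              for _ in range(generations):
                  # Steps 2-3: s-wise tournament selection without
                  # replacement: shuffle, pit consecutive groups of s
                  # individuals, repeat until pop_size winners are kept.
                  selected = []
                  while len(selected) < pop_size:
                      random.shuffle(pop)
                      for i in range(0, pop_size - s + 1, s):
                          selected.append(max(pop[i:i + s], key=fitness))
                  selected = selected[:pop_size]
                  # Step 4: uniform crossover over the selected population.
                  pop = []
                  for a, b in zip(selected[::2], selected[1::2]):
                      flips = [random.random() < 0.5 for _ in a]
                      pop.append([x if f else y
                                  for f, x, y in zip(flips, a, b)])
                      pop.append([y if f else x
                                  for f, x, y in zip(flips, a, b)])
                  # Steps 5-6: offspring are re-evaluated next iteration.
              return max(pop, key=fitness)

          print(sum(run_sga()))  # fitness of the best individual found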
  20. Extended Compact Genetic Algorithm
      • Harik et al., 2006
      1. Initialize the population (usually random initialization)
      2. Evaluate the fitness of individuals
      3. Select promising solutions (e.g., tournament selection)
      4. Build the probabilistic model
         • Optimize structure & parameters to best fit the selected individuals
         • Automatic identification of sub-structures
      5. Sample the model to create new candidate solutions
         • Effective exchange of building blocks
      6. Repeat steps 2-5 until some convergence criteria are met
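      Step 5 reduces to drawing each gene group's joint configuration with its empirical frequency in the selected population; a minimal sketch, assuming binary genes and the partition format used in the model-building sketch further below:

          import random

          def sample_mpm(partition, selected, n_new):
              # Sample offspring from a marginal product model: copying a
              # gene group's values from a uniformly chosen selected
              # individual draws that group's configuration with exactly
              # its empirical frequency.
              n_genes = sum(len(group) for group in partition)
              offspring = []
              for _ in range(n_new):
                  child = [0] * n_genes
                  for group in partition:
                      donor = random.choice(selected)
                      for gene in group:
                          child[gene] = donor[gene]
                  offspring.append(child)
              return offspring

          # Example: genes 0 and 1 are linked, gene 2 is independent.
          selected = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]]
          print(sample_mpm([(0, 1), (2,)], selected, 3))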
  21. eCGA Model-Building Process
      • Use the model-building procedure of the extended compact GA
        • Partition genes into (mutually) independent groups
        • Start with the lowest-complexity model
        • Search for a least-complex, most-accurate model

        Model Structure                                                   Metric
        [X0] [X1] [X2] [X3] [X4] [X5] [X6] [X7] [X8] [X9] [X10] [X11]    1.0000
        [X0] [X1] [X2] [X3] [X4X5] [X6] [X7] [X8] [X9] [X10] [X11]       0.9933
        [X0] [X1] [X2] [X3] [X4X5X7] [X6] [X8] [X9] [X10] [X11]          0.9819
        [X0] [X1] [X2] [X3] [X4X5X6X7] [X8] [X9] [X10] [X11]             0.9644
        ...
        [X0] [X1] [X2] [X3] [X4X5X6X7] [X8X9X10X11]                      0.9273
        ...
        [X0X1X2X3] [X4X5X6X7] [X8X9X10X11]                               0.8895
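      A minimal sketch of that greedy search under eCGA's combined MDL metric (model complexity plus compressed-population complexity, following Harik's formulation); it assumes binary genes, and it minimizes the raw combined complexity while the slide reports a normalized metric, so lower is better in both:

          import random
          from itertools import combinations
          from math import log2

          def combined_complexity(partition, pop):
              # Model complexity + compressed-population complexity (MDL),
              # assuming binary genes.
              n = len(pop)
              cm = cp = 0.0
              for group in partition:
                  cm += log2(n + 1) * (2 ** len(group) - 1)
                  counts = {}
                  for ind in pop:
                      key = tuple(ind[i] for i in group)
                      counts[key] = counts.get(key, 0) + 1
                  cp += sum(c * log2(n / c) for c in counts.values())
              return cm + cp

          def greedy_model_build(pop, n_genes):
              # Start with the lowest-complexity model: all independent.
              partition = [(i,) for i in range(n_genes)]
              while True:
                  best = None
                  best_score = combined_complexity(partition, pop)
                  # Try every pairwise merge; keep the best improvement.
                  for a, b in combinations(partition, 2):
                      candidate = [g for g in partition
                                   if g not in (a, b)] + [a + b]
                      score = combined_complexity(candidate, pop)
                      if score < best_score:
                          best, best_score = candidate, score
                  if best is None:
                      return partition  # no merge improves the metric
                  partition = best

          pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(100)]
          print(greedy_model_build(pop, 8))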
  22. Data-Intensive Flows for Competent GAs
  23. Selectorecombinative GA
  24. sGA Execution Profile and Parallelization
      • Intel 2.8 GHz QuadCore, 4 GB RAM. Average of 20 runs.
        [Charts: per-component execution profiles of the sGA flow (sbp, soed, eps, twrops, ucbps, noit) before and after parallelization; in the parallel version the eps and ucbps components are split into mapper, four parallel instances, and reducer stages. Horizontal axis: 0 to 20,000]
  25. eCGA Model Building
  26. eCGA Execution Profile and Parallelization
      • Intel 2.8 GHz QuadCore, 4 GB RAM. Average of 20 runs.
        [Charts: per-component execution profiles (init_ecga, update_partitions, greedy_ecga_mb, print_model) before and after parallelization; in the parallel version update_partitions is split into mapper, four parallel instances, and reduce stages. Horizontal axis: 0 to 50,000]
  27. eCGA Model-Building Speedup
      • Intel 2.8 GHz QuadCore, 4 GB RAM. Average of 20 runs.
      • Speedup against the original eCGA model building
        [Plot: speedup vs. number of cores (1 to 4); the measured speedup grows nearly linearly, approaching 4x on 4 cores]
  28. Scalability on NUMA Systems
      • Run on NCSA's SGI Altix Cobalt
        • 1,120 processors and up to 5 TB of RAM
        • SGI NUMAlink interconnect
        • NUMA architecture
      • Tested for speedup behavior
        • Average of 20 independent runs
        • Automatic parallelization of the partition evaluation
      • Results still show the linear trend (despite the NUMA architecture)
        • 16 processors: speedup = 14.01
        • 32 processors: speedup = 27.96
  29. Wrapping Up
  30. Summary
      • Evolutionary computation is data rich
      • Data-intensive computing can provide EC with:
        • A quite painless way to tap into parallelism
        • A simple programming and modeling paradigm
        • A boost in reusability
        • A way to tackle otherwise intractable problems
      • We have shown that equivalent data-intensive computing versions of traditional algorithms exist
      • Linear parallelism can be tapped transparently
  31. Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre
      Xavier Llorà
      National Center for Supercomputing Applications
      University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
      xllora@ncsa.illinois.edu
      http://www.ncsa.illinois.edu/~xllora
