Data-intensive computing has established itself as a valuable programming paradigm for efficiently tackling problems that require processing very large volumes of data. This paper presents a pilot study of how the data-intensive computing paradigm can be applied to evolutionary computation algorithms. Two representative cases (selectorecombinative genetic algorithms and estimation of distribution algorithms) are presented, analyzed, and discussed. The study shows that equivalent data-intensive computing versions of evolutionary computation algorithms can be developed with little effort, yielding robust and scalable algorithms for the multicore-computing era. Experimental results show how such algorithms scale with the number of available cores without further modification.
Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre
1. Data-Intensive Computing for
Competent Genetic Algorithms:
A Pilot Study using Meandre
Xavier Llorà
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign
Urbana, Illinois, 61801
xllora@ncsa.illinois.edu
http://www.ncsa.illinois.edu/~xllora
2. Outline
• Data-intensive computing and HPC?
• Is this related at all to evolutionary computation?
• Data-intensive computing with Meandre
• GAs and competent GAs
• Data-intensive computing for GAs
3. 2 Minute HPC History
• The eighties and early nineties picture
• Commodity hardware was rare, slow, and costly
• Supercomputers were extremely expensive
• Most of them hand-crafted, with only a few units built
• Two competing families
• CISC (e.g. Cray C90 with up to 16 processors)
• RISC (e.g. Connection Machine CM-5 with up to 4,096 processors)
• In the late nineties commodity hardware hit the mainstream
• It became popular, cheaper, and faster
• Economy of scale
• Massively parallel computers built from commodity components became a viable option
4. Two Visions
• C90-like supercomputers were like a comfy pair of trainers
• Oriented to scientific computing
• Complex vector-oriented supercomputers
• Shared memory (lots of it)
• Multiprocessing enabled via interconnection networks
• Single system image
• CM-5-like computers did not get massive traction, but some
• General purpose (as long as you can chop the work into simple units)
• Lots of simple processors available
• Distributed memory pushed new programming models (message passing)
• Complex interconnection networks
• NCSA has shared-memory, distributed-memory, and GPGPU-based systems
5. Miniaturization Building Bridges
• Multicores and GPGPUs are reviving the C90 flavor
• The CM-5 flavor now survives as distributed clusters of not-so-simple units
6. Control Models of Parallelization in EC
[Diagram: two classic control models of parallelization. In the master/slave model, a master farms out individual evaluation to slave nodes; in the island model, independent runs (Run 1 ... Run 11) exchange individuals through migration.]
7. But Data is also Part of the Equation
• Google and Yahoo! revived an old route
• Usually refers to:
• Infrastructure
• Programming techniques/paradigms
• Google made it mainstream with their MapReduce model
• Yahoo! provides an open source implementation
• Hadoop (MapReduce)
• HDFS (Hadoop Distributed File System)
• Store petabytes reliably on commodity hardware (fault tolerant)
• Programming model
• Map: Equivalent to the map operation in functional programming
• Reduce: The reduction phase after maps are computed
8. A Simple Example
• Sum of squares: Σ_{i=0}^{n} x_i² → reduce(map(x, sqr), sum)
[Diagram: each input element x flows through a map stage that squares it (x → x²); the squared values are then combined by a reduce stage that sums them. A short code sketch follows.]
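The same computation can be written down directly. The following is a minimal Python sketch (plain Python, not Meandre or Hadoop code) of the sum-of-squares example, expressed as a map stage followed by a reduce stage:

```python
# Minimal sketch of the slide's sum-of-squares example: a map stage followed
# by a reduce stage (plain Python, not Meandre or Hadoop code).
from functools import reduce

def sqr(x):
    return x * x

def sum_of_squares(xs):
    squared = map(sqr, xs)                         # map: square each element independently
    return reduce(lambda a, b: a + b, squared, 0)  # reduce: fold the partial results into a sum

print(sum_of_squares([1, 2, 3, 4]))  # prints 30
```

Because each map invocation is independent, the map stage parallelizes trivially; only the reduce stage needs coordination.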
9. Is This Related to EC?
• How can we benefit from the current core race painlessly?
• NCSA's Blue Waters is estimated to top 100K cores
• Yes, on several facets
• Large optimization problems need to deal with large population sizes (Sastry, Goldberg & Llorà, 2007)
• Large-scale data mining using genetics-based machine learning (Llorà et al., 2007)
• Competent GA model building is extremely costly and data rich (Pelikan et al., 2001)
• The goal?
• Rethink parallelization as data-flow processes
• Show that traditional models can be mapped to data-intensive computing models
• Foster your curiosity
11. The Meandre Infrastructure Challenges
• NCSA infrastructure effort on data-intensive computing
• Transparency
• From a single laptop to an HPC cluster
• Not bound to a particular computation fabric
• Allow heterogeneous development
• Intuitive programming paradigm
• Modular Components assembled into Flows
• Foster Collaboration and Sharing
• Open Source
• Service-Oriented Architecture (SOA)
12. Basic Infrastructure Philosophy
• Dataflow execution paradigm
• Semantic-web driven
• Web oriented
• Facilitate distributed computing
• Support publishing services
• Promote reuse, sharing, and collaboration
• More information at http://seasr.org/meandre
13. Data Flow Execution in Meandre
• A simple example c ← a+b
• A traditional control-driven language
a = 1
b = 2
c = a+b
• Execution following the sequence of instructions
• One step at a time
• a+b+c+d requires 3 steps
• Could be easily parallelized
14. Data Flow Execution in Meandre
• Data flow execution is driven by data
• The previous example may have 2 possible data flow versions
[Diagram: two data-flow versions of c ← a+b. In the stateless version, a '+' component consumes value(a) and value(b) together and emits value(c). In the state-based version, the '+' component receives value(a) and value(b) one at a time and holds state until both are available before emitting value(c). A code sketch of the state-based case follows.]
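To make the state-based case concrete, the snippet below is a minimal Python sketch of a data-flow '+' operator that buffers values and fires once both inputs are available. The class name, port names, and emit callback are hypothetical and are not Meandre's actual component API.

```python
# Minimal sketch (hypothetical names, not the Meandre component API): a
# state-based data-flow '+' component that buffers arriving values and fires
# only when both of its inputs are available, emitting their sum downstream.
class AddComponent:
    def __init__(self, emit):
        self.inputs = {"a": None, "b": None}  # component state (buffered inputs)
        self.emit = emit                      # downstream callback

    def push(self, port, value):
        self.inputs[port] = value
        # Fire as soon as both inputs have arrived, then clear the state.
        if all(v is not None for v in self.inputs.values()):
            self.emit(self.inputs["a"] + self.inputs["b"])
            self.inputs = {"a": None, "b": None}

adder = AddComponent(emit=print)
adder.push("a", 1)   # nothing fires yet
adder.push("b", 2)   # prints 3
```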
15. The Basic Building Blocks: Components
[Diagram: a Component pairs an RDF descriptor of the component's behavior with the component implementation.]
16. Go with the Flow: Creating Complex Tasks
• Directed multigraph of components creates a flow
[Diagram: an example flow in which two Push Text components feed a Concatenate Text component, whose output passes through To Upper Case Text and ends at Print Text. A plain-code sketch of the same flow follows.]
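As a plain-code illustration (hypothetical function names, not Meandre's API), the flow above can be read as ordinary function composition in data-flow order:

```python
# Minimal sketch: the slide's example flow expressed as plain Python functions
# wired together in data-flow order (not Meandre code).
def push_text(text):
    return text

def concatenate_text(left, right):
    return left + right

def to_upper_case_text(text):
    return text.upper()

def print_text(text):
    print(text)

# Push Text x2 -> Concatenate Text -> To Upper Case Text -> Print Text
print_text(to_upper_case_text(concatenate_text(push_text("Hello, "),
                                                push_text("Meandre"))))
# prints: HELLO, MEANDRE
```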
17. Automatic Parallelization: Speed and Robustness
• The Meandre ZigZag language allows automatic parallelization
[Diagram: the same flow with the To Upper Case Text component automatically replicated into several parallel instances between Concatenate Text and Print Text. A worker-pool sketch of this idea follows.]
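The effect of replicating a component can be approximated in plain Python with a worker pool; the sketch below is illustrative only and does not reflect ZigZag's actual syntax or semantics:

```python
# Minimal sketch (not ZigZag): replicating one stage of the flow across a small
# pool of workers, in the spirit of automatically parallelizing a component.
from concurrent.futures import ThreadPoolExecutor

def to_upper_case_text(text):
    return text.upper()

texts = ["alpha", "beta", "gamma", "delta"]
with ThreadPoolExecutor(max_workers=3) as pool:
    # Each piece of data is routed to one of the replicated component instances.
    for result in pool.map(to_upper_case_text, texts):
        print(result)
```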
19. Selectorecombinative GAs
1. Initialize the population with random individuals
2. Evaluate the fitness value of the individuals
3. Select good solutions by using s-wise tournament selection without replacement (Goldberg, Korb & Deb, 1989)
4. Create new individuals by recombining the selected population using uniform crossover (Sywerda, 1989)
5. Evaluate the fitness value of all offspring
6. Repeat steps 3-5 until convergence criteria are met (a minimal code sketch of this loop follows)
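The loop above fits in a few lines of code. The following is an illustrative Python sketch (not the paper's Meandre flow) using a OneMax-style bit-string fitness, s-wise tournament selection without replacement, and uniform crossover:

```python
# Minimal sketch of the selectorecombinative GA loop described above.
import random

def fitness(ind):
    return sum(ind)  # OneMax: number of ones in the bit string

def tournament_no_replacement(pop, s=2):
    # Shuffle the population, hold tournaments over disjoint groups of size s,
    # and repeat until the selection pool is as large as the population.
    selected = []
    while len(selected) < len(pop):
        random.shuffle(pop)
        for i in range(0, len(pop) - s + 1, s):
            selected.append(max(pop[i:i + s], key=fitness))
            if len(selected) == len(pop):
                break
    return selected

def uniform_crossover(a, b):
    # Each gene is copied from either parent with equal probability.
    return [random.choice(pair) for pair in zip(a, b)]

def run_ga(n_bits=32, pop_size=100, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = tournament_no_replacement(pop, s=2)
        pop = [uniform_crossover(random.choice(parents), random.choice(parents))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

print(fitness(run_ga()))  # best OneMax fitness found
```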
20. Extended Compact Genetic Algorithm
• Harik et al., 2006
• Initialize the population (usually random initialization)
• Evaluate the fitness of individuals
• Select promising solutions (e.g., tournament selection)
• Build the probabilistic model
• Optimize structure & parameters to best fit the selected individuals
• Automatic identification of sub-structures
• Sample the model to create new candidate solutions (see the sampling sketch below)
• Effective exchange of building blocks
• Repeat steps 2-7 until some convergence criterion is met
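The model-sampling step can be pictured as drawing each gene group independently from its empirical distribution in the selected population. The snippet below is an illustrative Python sketch of such a marginal product model; the data structures and helper names are made up for the example:

```python
# Minimal sketch: build and sample an eCGA-style marginal product model.
import random
from collections import Counter

def build_marginals(selected, partitions):
    # For each gene group, estimate the empirical frequency of every
    # sub-string observed in the selected population.
    marginals = []
    for genes in partitions:
        counts = Counter(tuple(ind[g] for g in genes) for ind in selected)
        total = sum(counts.values())
        marginals.append([(cfg, c / total) for cfg, c in counts.items()])
    return marginals

def sample_individual(marginals, partitions, n_bits):
    # Assemble a new individual by sampling each gene group independently.
    ind = [0] * n_bits
    for genes, table in zip(partitions, marginals):
        cfgs, probs = zip(*table)
        cfg = random.choices(cfgs, weights=probs)[0]
        for g, allele in zip(genes, cfg):
            ind[g] = allele
    return ind

# Example: four genes grouped as [X0 X1] [X2 X3]
selected = [[1, 1, 0, 0], [1, 1, 1, 1], [0, 0, 1, 1]]
partitions = [[0, 1], [2, 3]]
marginals = build_marginals(selected, partitions)
print(sample_individual(marginals, partitions, n_bits=4))
```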
21. eCGA Model Building Process
• Use model-building procedure of extended compact GA
• Partition genes into (mutually) independent groups
• Start with the lowest complexity model
• Search for a least-complex, most-accurate model
Model Structure                                                   Metric
[X0] [X1] [X2] [X3] [X4] [X5] [X6] [X7] [X8] [X9] [X10] [X11]    1.0000
[X0] [X1] [X2] [X3] [X4X5] [X6] [X7] [X8] [X9] [X10] [X11]       0.9933
[X0] [X1] [X2] [X3] [X4X5X7] [X6] [X8] [X9] [X10] [X11]          0.9819
[X0] [X1] [X2] [X3] [X4X5X6X7] [X8] [X9] [X10] [X11]             0.9644
…
[X0] [X1] [X2] [X3] [X4X5X6X7] [X8X9X10X11]                      0.9273
…
[X0X1X2X3] [X4X5X6X7] [X8X9X10X11]                                0.8895
(A greedy-merge sketch of this search follows.)
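The search traced in the table can be written as a simple merge loop. The code below is an illustrative Python sketch (not the paper's implementation); the metric is assumed to be a combined model-complexity score where lower values are better, matching the direction of the values in the table:

```python
# Minimal sketch of the greedy eCGA model-building search: start from the
# fully independent partition and repeatedly apply the pairwise merge that
# most reduces the model metric, stopping when no merge improves it.
import itertools

def greedy_model_search(genes, metric):
    """metric(partition) scores a candidate gene partition; lower is better."""
    partition = [[g] for g in genes]   # lowest-complexity model: all genes independent
    best_score = metric(partition)
    while True:
        best_merge = None
        for i, j in itertools.combinations(range(len(partition)), 2):
            candidate = [grp for k, grp in enumerate(partition) if k not in (i, j)]
            candidate.append(partition[i] + partition[j])
            score = metric(candidate)
            if score < best_score:
                best_score, best_merge = score, candidate
        if best_merge is None:
            return partition, best_score
        partition = best_merge

# Toy usage with a placeholder metric (number of groups), just to show the call:
print(greedy_model_search(list(range(6)), metric=len))
```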
27. eCGA Model Building Speedup
• Intel 2.8 GHz QuadCore, 4 GB RAM. Average of 20 runs.
• Speedup against the original eCGA model building
[Plot: speedup of the data-intensive eCGA model building relative to the original implementation as a function of the number of cores (1-4); the measured speedup grows steadily with the core count.]
28. Scalability on NUMA Systems
• Run on NCSA’s SGI Altix Cobalt
• 1,120 processors and up to 5 TB of RAM
• SGI NUMAlink
• NUMA architecture
• Test for speedup behavior
• Average of 20 independent runs
• Automatic parallelization of the partition evaluation
• Results still show a linear trend (despite the NUMA architecture)
• 16 processors, speedup = 14.01
• 32 processors, speedup = 27.96
(The parallel-efficiency figures below are derived from these two numbers.)
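As a quick check derived from the two reported speedups, the parallel efficiency (speedup divided by processor count) stays essentially unchanged when doubling the processor count: 14.01 / 16 ≈ 0.88 at 16 processors and 27.96 / 32 ≈ 0.87 at 32 processors, which is what the linear trend refers to.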
30. Summary
• Evolutionary computation is data rich
• Data-intensive computing can provide to EC:
• A way to tap into parallelism quite painlessly
• A simple programming and modeling approach
• A boost in reusability
• The ability to tackle otherwise intractable problems
• Shown that equivalent data-intensive computing versions of traditional algorithms exist
• Linear parallelism can be tapped transparently
31. Data-Intensive Computing for
Competent Genetic Algorithms:
A Pilot Study using Meandre
Xavier Llorà
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign
Urbana, Illinois, 61801
xllora@ncsa.illinois.edu
http://www.ncsa.illinois.edu/~xllora