OSCON 2014: Data Workflows for Machine Learning
Transcript

  • 1. Data Workflows for Machine Learning
 Paco Nathan, @pacoid, http://liber118.com/pxn/
  • 2. Why is this talk here? Machine Learning in production apps is less and less about algorithms (even though that work is quite fun and vital). Performing real work is more about: socializing a problem within an organization; feature engineering (“Beyond Product Managers”); tournaments in CI/CD environments; operationalizing high-ROI apps at scale; etc. So I’ll just crawl out on a limb and state that leveraging great frameworks to build data workflows is more important than chasing after diminishing returns on highly nuanced algorithms. Because Interwebs!
  • 3. Data Workflows for Machine Learning Middleware has been evolving for Big Data, and there are some great examples; we’ll review several. Process has been evolving too, right along with the use cases. Popular frameworks typically provide some Machine Learning capabilities within their core components, or at least among their major use cases. Let’s consider features from Enterprise Data Workflows as a basis for what’s needed in Data Workflows for Machine Learning. Their requirements for scale, robustness, cost trade-offs, interdisciplinary teams, etc., serve as guides in general.
  • 4. Caveat Auditor I won’t claim to be expert with each of the frameworks and environments described in this talk. Expert with a few of them perhaps, but more to the point: embroiled in many use cases. This talk attempts to define a “scorecard” for evaluating important ML data workflow features: what’s needed for use cases, compare and contrast of what’s available, plus some indication of which frameworks are likely to be best for a given scenario. Seriously, this is a work in progress.
  • 5. Outline • Definition: Machine Learning • Definition: Data Workflows • A whole bunch o’ examples across several platforms • Nine points to discuss, leading up to a scorecard • Because Notebooks • Questions, comments, flying tomatoes…
  • 6. Data Workflows for Machine Learning: Frame the question… “A Basis for What’s Needed”
  • 7. Definition: Machine Learning “Machine learning algorithms can figure out how to perform important tasks by generalizing from examples. This is often feasible and cost-effective where manual programming is not. As more data becomes available, more ambitious problems can be tackled. As a result, machine learning is widely used in computer science and other fields.” Pedro Domingos, U. Washington, A Few Useful Things to Know about Machine Learning • overfitting (variance) • underfitting (bias) • “perfect classifier” (no free lunch) [learning as generalization] = [representation] + [evaluation] + [optimization]
  • 8. Definition: Machine Learning … Conceptually 1. real-world data 2. representation, as graphs, time series, geo, etc. 3. convert to a sparse matrix for production work, leveraging abstract algebra + functional programming 4. cost-effective parallel processing for ML apps at scale, live use cases
  • 9. Definition: Machine Learning … Interdisciplinary Teams, Needs × Roles [diagram: needs (discovery, modeling, integration, apps, systems) crossed with roles: Domain Expert (business process, stakeholder), Data Scientist (data prep, discovery, modeling), App Dev (software engineering, automation), Ops (systems engineering, availability)]
  • 10. Definition: Machine Learning … Process, Feature Engineering [diagram: data sources → data prep → feature engineering pipeline → train/test hold-outs → learners → classifiers → scoring, organized by representation, evaluation, and optimization around the use cases]
  • 11. Definition: Machine Learning … Process, Tournaments [diagram: the same pipeline, with tournaments added] quantify and measure: • benefit? • risk? • operational costs? • obtain other data? • improve metadata? • refine representation? • improve optimization? • iterate with stakeholders? • can models (inference) inform first-principles approaches?
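The tournament loop described above can be sketched in a few lines of stdlib Python. This is a toy illustration under stated assumptions: two deliberately trivial placeholder learners (predict the training mean, predict the last value) scored by mean squared error on a hold-out set. None of these names come from the frameworks in this talk.

```python
# Toy "tournament": train candidate learners, score each on a
# hold-out set, promote the winner.

def mse(pairs):
    """Mean squared error over (truth, prediction) pairs."""
    return sum((y - p) ** 2 for y, p in pairs) / len(pairs)

def learn_mean(train):
    m = sum(train) / len(train)
    return lambda: m            # constant-prediction "model"

def learn_last(train):
    last = train[-1]
    return lambda: last

# hold-out split of a toy series
series = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
train, holdout = series[:4], series[4:]

candidates = {"mean": learn_mean, "last": learn_last}
scores = {name: mse([(y, learner(train)()) for y in holdout])
          for name, learner in candidates.items()}

winner = min(scores, key=scores.get)   # lowest error wins
```

A real tournament would also track the quantify-and-measure questions on the slide (benefit, risk, operational cost) rather than a single error metric.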
  • 12. Definition: Machine Learning … Invasion of the Monoids monoid (an alien life form, courtesy of abstract algebra): a binary associative operator; closure within a set of objects; a unique identity element. What are those good for? composable functions in workflows; compiler “hints” on steroids, to parallelize; reassembling results while minimizing bottlenecks at scale; reusable components, like Lego blocks; think: Docker for business logic. Check out http://justenoughmath.com/ kudos: Avi Bryant, Oscar Boykin, Sam Ritchie, Jimmy Lin, et al.
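As a minimal sketch of why those three monoid properties matter for parallelism, here is stdlib-only Python (illustrative names, not from Algebird or any library mentioned in the talk): chunks can be folded independently and the partial results merged in any grouping, precisely because the operator is associative and has an identity.

```python
from functools import reduce

class SumMonoid:
    """Integers under +: associative operator, identity 0, closed."""
    identity = 0
    @staticmethod
    def combine(a, b):
        return a + b

def parallel_fold(monoid, chunks):
    # fold each chunk independently, as separate workers would...
    partials = [reduce(monoid.combine, chunk, monoid.identity)
                for chunk in chunks]
    # ...then merge the partial results; valid only because
    # combine is associative with an identity element
    return reduce(monoid.combine, partials, monoid.identity)

data = list(range(10))
total = parallel_fold(SumMonoid, [data[0:3], data[3:7], data[7:10]])
```

Swapping in a different monoid (max, set union, a HyperLogLog merge) reuses the same fold unchanged, which is the “Lego blocks” point on the slide.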
  • 13. Definition: Machine Learning [diagram: the process/tournaments pipeline combined with the interdisciplinary-team roles, with the newly introduced capabilities highlighted]
  • 14. Definition [diagram: same combined view] Can we fit the required process into generalized workflow definitions?
  • 15. Definition: Data Workflows Middleware, effectively, is evolving for Big Data and Machine Learning… The following design pattern, a DAG, shows up in many places, via many frameworks: data sources → ETL → data prep → predictive model → end uses
  • 16. Definition: Data Workflows definition of a typical Enterprise workflow which crosses through multiple departments, languages, and technologies… [the DAG above, annotated: ANSI SQL for ETL]
  • 17. Definition: Data Workflows [same DAG, annotated: Java, Pig, etc., for business logic]
  • 18. Definition: Data Workflows [same DAG, annotated: SAS for predictive models]
  • 19. Definition: Data Workflows [same DAG: SAS for predictive models and ANSI SQL for ETL represent most of the licensing costs…]
  • 20. Definition: Data Workflows [same DAG: Java, Pig, etc., represent most of the project costs…]
  • 21. Definition: Data Workflows [same DAG] Something emerges to fill these needs, for instance…
  • 22. Definition: Data Workflows For example, Cascading and related projects implement the following components, based on 100% open source: source taps for Cassandra, JDBC, Splunk, etc.; Lingual: DW → ANSI SQL; business logic in Java, Clojure, Scala, etc.; Pattern: SAS, R, etc. → PMML; sink taps for Memcached, HBase, MongoDB, etc. A compiler sees it all, one connected DAG: • troubleshooting • exception handling • notifications • some optimizations cascading.org
  • 23. Definition: Data Workflows Cascading allows multiple departments to combine their workflow components into an integrated app – one among many, typically – based on 100% open source:

 FlowDef flowDef = FlowDef.flowDef()
   .setName( "etl" )
   .addSource( "example.employee", emplTap )
   .addSource( "example.sales", salesTap )
   .addSink( "results", resultsTap );

 SQLPlanner sqlPlanner = new SQLPlanner()
   .setSql( sqlStatement );

 flowDef.addAssemblyPlanner( sqlPlanner );

 cascading.org
  • 24. Definition: Data Workflows Cascading allows multiple departments to combine their workflow components into an integrated app – one among many, typically – based on 100% open source:

 FlowDef flowDef = FlowDef.flowDef()
   .setName( "classifier" )
   .addSource( "input", inputTap )
   .addSink( "classify", classifyTap );

 PMMLPlanner pmmlPlanner = new PMMLPlanner()
   .setPMMLInput( new File( pmmlModel ) )
   .retainOnlyActiveIncomingFields();

 flowDef.addAssemblyPlanner( pmmlPlanner );

 cascading.org
  • 25. Enterprise Data Workflows with Cascading, O’Reilly, 2013 shop.oreilly.com/product/0636920028536.do [cover slide with the DAG]
  • 26. [same book slide] This begs an update…
  • 27. [same book slide] Because…
  • 28. Data Workflows for Machine Learning: Examples… “Neque per exemplum”
  • 29. Example: KNIME “a user-friendly graphical workbench for the entire analysis process: data access, data transformation, initial investigation, powerful predictive analytics, visualisation and reporting.” • large number of integrations (over 1000 modules) • ranked #1 in customer satisfaction among open source analytics frameworks • visual editing of reusable modules • leverage prior work in R, Perl, etc. • Eclipse integration • easily extended for new integrations knime.org
  • 30. Example: Python stack Python has much to offer, ranging across an organization, not just for the analytics staff: • ipython.org • pandas.pydata.org • scikit-learn.org • numpy.org • scipy.org • code.google.com/p/augustus • continuum.io • nltk.org • matplotlib.org
  • 31. Example: Julia “a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments” • significantly faster than most alternatives • built to leverage parallelism, cloud computing • still relatively new; one to watch! julialang.org

 importall Base

 type BubbleSort <: Sort.Algorithm end

 function sort!(v::AbstractVector, lo::Int, hi::Int, ::BubbleSort, o::Sort.Ordering)
     while true
         clean = true
         for i = lo:hi-1
             if Sort.lt(o, v[i+1], v[i])
                 v[i+1], v[i] = v[i], v[i+1]
                 clean = false
             end
         end
         clean && break
     end
     return v
 end
  • 32. Example: Summingbird “a library that lets you write streaming MapReduce programs that look like native Scala or Java collection transformations and execute them on a number of well-known distributed MapReduce platforms like Storm and Scalding.” • switch between Storm, Scalding (Hadoop) • Spark support is in progress • leverage Algebird, Storehaus, Matrix API, etc. github.com/twitter/summingbird

 def wordCount[P <: Platform[P]]
     (source: Producer[P, String], store: P#Store[String, Long]) =
   source.flatMap { sentence =>
     toWords(sentence).map(_ -> 1L)
   }.sumByKey(store)
  • 33. Example: Scalding github.com/twitter/scalding

 import com.twitter.scalding._

 class WordCount(args : Args) extends Job(args) {
   Tsv(args("doc"),
       ('doc_id, 'text),
       skipHeader = true)
     .read
     .flatMap('text -> 'token) {
       text : String => text.split("[ \\[\\](),.]")
     }
     .groupBy('token) { _.size('count) }
     .write(Tsv(args("wc"), writeHeader = true))
 }
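The same flatMap → groupBy → size dataflow as the Scalding job can be sketched with Python builtins, to make the shape of the pipeline explicit. This is a rough analogy under stated assumptions (a simplified tokenizing regex), not Scalding’s semantics.

```python
import re
from collections import Counter

def word_count(lines):
    # flatMap: each line becomes a stream of tokens
    tokens = (tok
              for line in lines
              for tok in re.split(r"[\s\[\](),.]+", line)
              if tok)
    # groupBy token, then take the size of each group
    return Counter(tokens)

counts = word_count(["the quick brown fox", "the lazy dog", "the fox"])
```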
  • 34. Example: Scalding • extends the Scala collections API so that distributed lists become “pipes” backed by Cascading • code is compact, easy to understand • nearly 1:1 between elements of the conceptual flow diagram and function calls • extensive libraries are available for linear algebra, abstract algebra, machine learning – e.g., Matrix API, Algebird, etc. • significant investments by Twitter, Etsy, eBay, etc. • less learning curve than Cascalog • build scripts… those take a while to run github.com/twitter/scalding
  • 35. Example: Cascalog cascalog.org

 (ns impatient.core
   (:use [cascalog.api]
         [cascalog.more-taps :only (hfs-delimited)])
   (:require [clojure.string :as s]
             [cascalog.ops :as c])
   (:gen-class))

 (defmapcatop split [line]
   "reads in a line of string and splits it by regex"
   (s/split line #"[\[\](),.\s]+"))

 (defn -main [in out & args]
   (?<- (hfs-delimited out)
        [?word ?count]
        ((hfs-delimited in :skip-header? true) _ ?line)
        (split ?line :> ?word)
        (c/count ?count)))

 ; Paul Lam
 ; github.com/Quantisan/Impatient
  • 36. Example: Cascalog • implements Datalog in Clojure, with predicates backed by Cascading, for a highly declarative language • run ad-hoc queries from the Clojure REPL; approx. 10:1 code reduction compared with SQL • composable subqueries, used for test-driven development (TDD) practices at scale • Leiningen build: simple, no surprises, in Clojure itself • more new deployments than other Cascading DSLs; Climate Corp is the largest use case: 90% Clojure/Cascalog • has a learning curve, and a limited number of Clojure developers • aggregators are the magic, and those take effort to learn cascalog.org
  • 37. Example: Titan distributed graph database, by Matthias Broecheler, Marko Rodriguez, et al. • scalable graph database optimized for storing and querying graphs • supports hundreds of billions of vertices and edges • transactional database that can support thousands of concurrent users executing complex graph traversals • supports search through Lucene, ElasticSearch • can be backed by HBase, Cassandra, BerkeleyDB • TinkerPop native graph stack with Gremlin, etc. thinkaurelius.github.io/titan

 // who is hercules' paternal grandfather?
 g.V('name','hercules').out('father').out('father').name
  • 38. Example: MBrace “a .NET based software stack that enables easy large-scale distributed computation and big data analysis.” • declarative and concise distributed algorithms in F# asynchronous workflows • scalable for apps in private datacenters or public clouds, e.g., Windows Azure or Amazon EC2 • tooling for interactive, REPL-style deployment, monitoring, management, debugging in Visual Studio • leverages monads (similar to Summingbird) • MapReduce as a library, but many patterns beyond m-brace.net

 let rec mapReduce (map: 'T -> Cloud<'R>)
                   (reduce: 'R -> 'R -> Cloud<'R>)
                   (identity: 'R)
                   (input: 'T list) =
     cloud {
         match input with
         | [] -> return identity
         | [value] -> return! map value
         | _ ->
             let left, right = List.split input
             let! r1, r2 =
                 (mapReduce map reduce identity left)
                 <||>
                 (mapReduce map reduce identity right)
             return! reduce r1 r2
     }
  • 39. Example: Apache Spark in-memory cluster computing, by Matei Zaharia, et al. • intended to make data analytics fast to write and to run • easy to use, at a higher level of abstraction • load data into memory and query it repeatedly, much more quickly than with Hadoop • APIs in Scala, Java and Python, shells in Scala and Python • integrations: Spark Streaming, MLlib, GraphX, Spark SQL, Tachyon, etc. spark-project.org

 // word count
 val file = spark.textFile("hdfs://…")
 val counts = file.flatMap(line => line.split(" "))
                  .map(word => (word, 1))
                  .reduceByKey(_ + _)
 counts.saveAsTextFile("hdfs://…")

 The State of Spark, and Where We're Going Next, Matei Zaharia, Spark Summit (2013) youtu.be/nU6vO2EJAb4
  • 40. Example: Spark SQL “blurs the lines between RDDs and relational tables” intermix SQL commands to query external data, along with complex analytics, in a single app: • allows SQL extensions based on MLlib • Shark is being migrated to Spark SQL spark-project.org

 // data can be extracted from existing sources
 // e.g., Apache Hive
 val trainingDataTable = sql("""
   SELECT e.action, u.age, u.latitude, u.longitude
   FROM Users u
   JOIN Events e ON u.userId = e.userId""")

 // since `sql` returns an RDD, the results above
 // can be used in MLlib
 val trainingData = trainingDataTable.map { row =>
   val features = Array[Double](row(1), row(2), row(3))
   LabeledPoint(row(0), features)
 }

 val model = new LogisticRegressionWithSGD().run(trainingData)

 Spark SQL: Manipulating Structured Data Using Spark, Michael Armbrust, Reynold Xin (2014-03-24) databricks.com/blog/2014/03/26/Spark-SQL-manipulating-structured-data-using-Spark.html people.apache.org/~pwendell/catalyst-docs/sql-programming-guide.html
  • 41. Example: Apache Spark spark-project.org MapReduce: general batch processing. Specialized systems (iterative, interactive, streaming, graph, etc.): Pregel, Giraph, Dremel, Drill, Tez, Impala, GraphLab, Storm, S4. [timeline: 2002 MapReduce @ Google; 2004 MapReduce paper; 2006 Hadoop @ Yahoo!; 2008 Hadoop Summit; 2010 Spark paper; 2014 Apache Spark top-level] How about a generalized engine for distributed, applicative systems – apps sharing code across multiple use cases: batch, iterative, streaming, etc.? [diagram: RDD transformations leading to an action value]
  • 42. Data Workflows for Machine Learning: Update the definitions… “Nine Points to Discuss”
  • 43. Data Workflows, point 1: includes people, defines oversight for exceptional data. Formally speaking, workflows include people (cross-dept) and define oversight for exceptional data …otherwise, data flow or pipeline would be more apt …examples include Cascading traps, with exceptional tuples routed to a review organization, e.g., Customer Support [diagram: Hadoop cluster with source taps (Customer Prefs, logs, customer profile DBs), sink taps (Analytics Cubes, Cache), a trap tap routed to Customer Support, and PMML modeling output feeding a Web App and Reporting]
  • 44. Data Workflows, point 1 (cont.) Footnote: see also an excellent article related to this point and the next: Therbligs for data science: A nuts and bolts framework for accelerating data work, Abe Gong @Jawbone blog.abegong.com/2014/03/therbligs-for-data-science.html strataconf.com/strata2014/public/schedule/detail/32291 brighttalk.com/webcast/9059/105119
  • 45. Data Workflows, point 2: separation of concerns, allows for literate programming. Workflows impose a separation of concerns, allowing for multiple abstraction layers in the tech stack …specify what is required, not how to accomplish it …articulate the business logic …Cascading leverages a pattern language …related notions from Knuth: literate programming …not unlike BPM/BPEL for Big Data …examples: IPython, R Markdown, etc.
  • 46. Data Workflows, point 2 (cont.) Footnote: while discussing separation of concerns and a general design pattern, let’s consider data workflows in the broader context of probabilistic programming, since business data is heterogeneous and structured… the data for every domain is heterogeneous. Paraphrasing Ginsberg: I’ve seen the best minds of my generation destroyed by madness, dragging themselves through quagmires of large LCD screens filled with IntelliJ debuggers and database command lines, yearning to fit real-world data into their preferred “deterministic” tools… See: Probabilistic Programming: Why, What, How, When, Beau Cronin, Strata SC (2014) speakerdeck.com/beaucronin/probabilistic-programming-strata-santa-clara-2014; Why Probabilistic Programming Matters, Rob Zinkov, Convex Optimized (2012-06-27) zinkov.com/posts/2012-06-27-why-prob-programming-matters/
  • 47. Data Workflows, point 3: multiple abstraction layers for metadata, feedback, and optimization. Multiple abstraction layers in the tech stack are needed, but still emerging …feedback loops based on machine data …optimizers feed on abstraction …metadata accounting: • track data lineage • propagate schema • model / feature usage • ensemble performance …app history accounting: • util stats, mixing workloads • heavy-hitters, bottlenecks • throughput, latency [diagram: DSLs → planners/optimizers → cluster scheduler → clusters, with machine data (app history, util stats, bottlenecks) feeding back; mixed topologies, business process, reusable components, portable models (data lineage, schema propagation, feature selection, tournaments)]
  • 48. Data Workflows, point 3 (cont.) [same layered diagram] Um, will compilers ever look like this??
  • 49. Data Workflows, point 4: testing: model evaluation, TDD, app deployment. Workflows must provide for test, which is no simple matter …testing is required on three abstraction layers: • model portability/evaluation, ensembles, tournaments • TDD in reusable components and business process • continuous integration / continuous deployment …examples: Cascalog for TDD, PMML for model eval …still so much more to improve …keep in mind that workflows involve people, too
  • 50. Data Workflows, point 5: future-proof system integration, scale-out, ops. Workflows enable system integration …future-proof integrations and scale-out …build components and apps, not jobs and command lines …allow compiler optimizations across the DAG, i.e., cross-dept contributions …minimize operationalization costs: • troubleshooting, debugging at scale • exception handling • notifications, instrumentation …examples: KNIME, Cascading, etc.
  • 51. Data Workflows, point 6: visualizing allows people to collaborate through code. Visualizing workflows, what a great idea …the practical basis for: • collaboration • rapid prototyping • component reuse …examples: KNIME wins best in category …Cascading generates flow diagrams, which are a nice start
  • 52. Data Workflows, point 7: abstract algebra and functional programming containerize business process. Abstract algebra, containerizing workflow metadata …monoids, semigroups, etc., allow for reusable components with well-defined properties for running in parallel at scale …let business process be agnostic about underlying topologies, with an analogy to Linux containers (Docker, etc.) …compose functions, take advantage of sparsity (monoids) …because “data is fully functional”: FP for s/w eng benefits …aggregators are almost always “magic”; now we can solve for common use cases …read Monoidify! Monoids as a Design Principle for Efficient MapReduce Algorithms by Jimmy Lin …Cascading introduced some notions of this circa 2007, but didn’t make it explicit …examples: Algebird in Summingbird, Simmer, Spark, etc.
  • 53. Data Workflows, point 7 (cont.) Footnote: see also two excellent talks related to this point: Algebra for Analytics, Oscar Boykin, Strata SC (2014) speakerdeck.com/johnynek/algebra-for-analytics; Add ALL the Things: Abstract Algebra Meets Analytics, Avi Bryant, Strange Loop (2013) infoq.com/presentations/abstract-algebra-analytics
  • 54. Data Workflows, point 8: blend results from different time scales: batch plus low-latency. Workflow needs vary in time, and need to blend time …something something batch, something something low-latency …scheduling batch isn’t hard; scheduling low-latency is computationally brutal (see the Omega paper) …because people like saying “Lambda Architecture”; it gives them goosebumps, or something …because real-time means so many different things …because batch windows are so 1964 …examples: Summingbird, Oryx. See: Big Data, Nathan Marz, James Warren, manning.com/marz
  • 55. Data Workflows, point 9: optimize learners in context, to make model selection potentially a compiler problem. Workflows may define contexts in which model selection possibly becomes a compiler problem …for example, see Boyd, Parikh, et al. …ultimately optimizing for a loss function f(x) plus a regularization term g(z) …perhaps not ready for prime time immediately …examples: MLbase
  • 56. Data Workflows, bonus For one of the best papers about what workflows really truly require, see Out of the Tar Pit by Moseley and Marks
  • 57. Nine Points for Data Workflows, a wish list: 1. includes people, defines oversight for exceptional data 2. separation of concerns, allows for literate programming 3. multiple abstraction layers for metadata, feedback, and optimization 4. testing: model evaluation, TDD, app deployment 5. future-proof system integration, scale-out, ops 6. visualizing workflows allows people to collaborate through code 7. abstract algebra and functional programming containerize business process 8. blend results from different time scales: batch plus low-latency 9. optimize learners in context, to make model selection potentially a compiler problem
  • 58. Data Workflows for Machine Learning: Summary… “feedback, discussion, outlook”
  • 59. Nine Points for Data Workflows, a scorecard [table: rows are the nine points above, plus “can haz PMML”; columns are Spark, Oryx, Summingbird, Cascalog, Cascading, KNIME, Py Data, R Markdown, and MBrace, with checkmarks and question marks per cell]
  • 60. The General Case: Theory, Eight Decades Ago Haskell Curry, known for seminal work on combinatory logic (1927), haskell.org; Alonzo Church, known for the lambda calculus (1936) and much more, wikipedia.org. Both sought formal answers to the question, “What can be computed?”
  • 61. The General Case: Praxis, Four Decades Ago Leveraging the lambda calculus, combinators, etc., to increase the parallelism of apps as applicative systems. John Backus, “Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs”, ACM Turing Award (1977) stanford.edu/class/cs242/readings/backus.pdf; David Turner, “A new implementation technique for applicative languages”, Softw: Pract. Exper., 9: 31–49 (1979), doi: 10.1002/spe.4380090105
  • 62. Notebooks, cloud-based and otherwise… For now, the best practices appear to be heading toward a world of notebooks… • IPython => Jupyter • can be containerized to run in a cloud • great for team collaboration • Don Knuth sez: try literate programming • great for dashboards, as end-use data products • also good as endpoints for service-oriented architectures (which is what ML use cases should be thinking about)
  • 63. Ali Ghodsi, Databricks Cloud youtu.be/lO7LhVZrNwA?t=23m27s Notebooks, cloud-based and otherwise…
  • 64. follow-ups
  • 65. monthly newsletter for updates, events, conf summaries, etc.: liber118.com/pxn/ Enterprise Data Workflows with Cascading, O’Reilly, 2013 shop.oreilly.com/product/0636920028536.do Just Enough Math, O’Reilly, 2014 oreilly.com/go/enough_math/ preview: youtu.be/TQ58cWgdCpA
  • 66. calendar: Scala by the Bay, SF, Aug 8, scalabythebay.org; #MesosCon, Chicago, Aug 21, events.linuxfoundation.org/events/mesoscon; Cassandra Summit, SF, Sep 10, cvent.com/events/cassandra-summit-2014; Strata NYC + Hadoop World, NYC, Oct 15, strataconf.com/stratany2014; Strata EU, Barcelona, Nov 20, strataconf.com/strataeu2014; Data Day Texas, Austin, Jan 10, datadaytexas.com