This document discusses scaling machine learning using Apache Spark. It covers several key topics:
1) Parallelizing machine learning algorithms and neural networks to distribute computation across clusters. This includes data, model, and parameter server parallelism.
2) Apache Spark's Resilient Distributed Datasets (RDDs) programming model which allows distributing data and computation across a cluster in a fault-tolerant manner.
3) Examples of very large neural networks trained on clusters, such as a Google face detection model trained on 1,000 servers and an IBM brain-inspired chip simulation using 262,144 CPUs.
3. Scaling computation
● Analytics tools with poor scalability and integration
● Manual processes
● Slow iterations
● Not suitable for large amounts of data
● We want fast iteration, reliability, integration
● Serial implementation
● Parallel
● GPUs
● Distributed
6. Artificial neural network
● Network training
○ Many “optimal” solutions
○ Optimization and training techniques - LBFGS, Backpropagation, batch and online gradient descent, Downpour SGD, Sandblaster LBFGS, … (a minimal gradient descent sketch follows this list)
○ Vanishing gradient, amplifying parameters, ...
○ New methods for large networks - deep learning
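To make the batch/online gradient descent bullet above concrete, here is a minimal sketch in plain Scala; the toy data, learning rate and iteration count are invented for the example, and it trains only a single logistic neuron rather than a deep network.

object OnlineSgdSketch {
  // Toy binary classification data: (features, label). Made up for the example.
  val data: Seq[(Array[Double], Double)] = Seq(
    (Array(0.0, 0.0), 0.0),
    (Array(0.0, 1.0), 0.0),
    (Array(1.0, 0.0), 1.0),
    (Array(1.0, 1.0), 1.0))

  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  def main(args: Array[String]): Unit = {
    var weights = Array(0.0, 0.0)
    var bias = 0.0
    val learningRate = 0.5

    // Online gradient descent: update the parameters after every single example.
    for (_ <- 1 to 100; (x, y) <- data) {
      val prediction = sigmoid(weights.zip(x).map { case (w, xi) => w * xi }.sum + bias)
      val error = prediction - y   // gradient of the log loss w.r.t. the pre-activation
      weights = weights.zip(x).map { case (w, xi) => w - learningRate * error * xi }
      bias -= learningRate * error
    }

    data.foreach { case (x, y) =>
      val p = sigmoid(weights.zip(x).map { case (w, xi) => w * xi }.sum + bias)
      println(f"features=${x.mkString(",")} label=$y%.0f predicted=$p%.2f")
    }
  }
}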
8. Scaling computation
● Different programming models, Different languages,
Different levels
● Sequential
○ R, Matlab, Python, Scala
● Parallel
○ Theano, Torch, Caffe, TensorFlow, Deeplearning4j
Chart: elapsed times for 20 PageRank iterations [3, 4]
9. Machine learning
● Linear algebra
● Vectors, matrices, vector spaces, matrix transformations, eigenvectors/eigenvalues (a small linear algebra sketch follows this slide)
● Many machine learning algorithms are optimization problems
● The goal is to solve them in reasonable (bounded) time
● The goal is not always to find the best possible model (data size and feature engineering vs. algorithm/model complexity)
● The goal is to solve them reliably, at scale, supporting application needs and continuous improvement
[5]
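A small sketch of the linear algebra building blocks using the Breeze library (Breeze is not mentioned in the slides and is only one convenient Scala option; Spark MLlib has its own vector and matrix types): it applies a matrix transformation to a vector and computes eigenvalues/eigenvectors.

import breeze.linalg.{DenseMatrix, DenseVector, eig}

object LinearAlgebraSketch {
  def main(args: Array[String]): Unit = {
    // A symmetric 2x2 matrix and a vector (toy values for the example).
    val a = DenseMatrix((2.0, 1.0), (1.0, 2.0))
    val x = DenseVector(1.0, 0.0)

    // Matrix transformation of a vector.
    val ax = a * x
    println(s"A * x = $ax")

    // Eigen decomposition: the eigenvalues of this matrix are 1 and 3.
    val decomposition = eig(a)
    println(s"eigenvalues  = ${decomposition.eigenvalues}")
    println(s"eigenvectors =\n${decomposition.eigenvectors}")
  }
}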
11. Consistency, time and order in DS
● A sequential program always has one total order of operations
● A distributed system gives no such ordering guarantees
● At-most-once: messages may be lost but are never duplicated
● At-least-once: messages may be duplicated but are not lost
● Exactly-once: every message is delivered exactly one time (hard to achieve in practice)
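As a tiny illustration of why at-least-once delivery still needs care at the receiver, here is a sketch (all names invented for the example) of a consumer that remembers processed message ids and drops duplicates, giving effectively-once processing of the side effect.

object AtLeastOnceConsumerSketch {
  final case class Message(id: Long, payload: String)

  // Ids of messages whose side effect has already been applied.
  private var processedIds = Set.empty[Long]

  def handle(message: Message): Unit = {
    if (processedIds.contains(message.id)) {
      println(s"duplicate ${message.id}, ignoring")
    } else {
      processedIds += message.id
      println(s"processing ${message.id}: ${message.payload}")
    }
  }

  def main(args: Array[String]): Unit = {
    // The transport redelivers message 1, as an at-least-once channel may do.
    Seq(Message(1, "a"), Message(2, "b"), Message(1, "a")).foreach(handle)
  }
}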
12. Failure in distributed system
● Node failures, network partitions, message loss, split brains,
inconsistencies
● Microsoft's data centers see an average of 5.2 device failures and 40.8 link failures per day, with a median time to repair of approximately five minutes (and a maximum of one week).
● A new Google cluster over one year: five rack issues with 40-80 machines seeing 50 percent packet loss; eight network maintenance events (four of which might cause ~30-minute random connectivity losses); three router failures (resulting in the need to pull traffic immediately for an hour).
● Over 500 isolating network partitions were observed in the CENIC network, with median durations of 2.7 minutes for software problems and 32 minutes for hardware problems, and 95th percentiles of 19.9 minutes and 3.7 days, respectively.
[6]
13. Failure in distributed system
● A network partition separated a MongoDB primary from its two secondaries; two hours later the old primary rejoined and rolled back everything on the new primary.
● A network partition isolated the Redis primary from all secondaries. Every API call caused the billing system to recharge customer credit cards automatically, overbilling 1.1 percent of customers over a period of 40 minutes.
● A partition left a MySQL database inconsistent. Because foreign key relationships were no longer consistent, GitHub showed private repositories on the wrong users' dashboards and incorrectly routed some newly created repositories.
● For several seconds, Elasticsearch is happy to believe two nodes in the same cluster are both primaries, will accept writes on both of those nodes, and will later discard the writes to one side.
● Under network partitions, RabbitMQ lost ~35% of acknowledged writes.
● Redis threw away 56% of the writes it told us had succeeded.
● In Riak, last-write-wins resulted in dropping 30-70% of writes, even with the strongest consistency settings.
● MongoDB “strictly consistent” reads can see stale versions of documents, and can also return garbage data from writes that never should have occurred.
[6]
20. Parameter server
● Model and data parallelism
● Failures and slow machines
● Additional stochasticity due to asynchrony (relaxed consistency, not-up-to-date parameters, ordering not guaranteed, …) - see the sketch below
[11]
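A minimal single-process sketch of the parameter server idea (all names are invented, and a real system such as Downpour SGD shards the parameters over many server machines): workers pull a possibly stale snapshot of the parameters, compute a gradient on their data shard, and push updates asynchronously, which is exactly the extra stochasticity noted above.

import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration
import ExecutionContext.Implicits.global

object ParameterServerSketch {
  val dimensions = 4
  // The "server": one shared parameter vector; only each single update is synchronized,
  // so workers routinely read parameters that are already out of date.
  val parameters = Array.fill(dimensions)(0.0)

  def pull(): Array[Double] = parameters.synchronized(parameters.clone())

  def push(gradient: Array[Double], learningRate: Double): Unit =
    parameters.synchronized {
      for (i <- 0 until dimensions) parameters(i) -= learningRate * gradient(i)
    }

  // A toy gradient; a real worker would compute it from its shard of the training data.
  def gradientOn(shard: Int, params: Array[Double]): Array[Double] =
    params.map(p => p - shard.toDouble)

  def main(args: Array[String]): Unit = {
    val workers = (1 to 3).map { shard =>
      Future {
        for (_ <- 1 to 100) {
          val snapshot = pull()                                    // possibly stale
          push(gradientOn(shard, snapshot), learningRate = 0.01)   // asynchronous update
        }
      }
    }
    Await.result(Future.sequence(workers), Duration.Inf)
    println(pull().mkString(", "))
  }
}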
21. Examples
“Their network for face detection from YouTube comprised millions of neurons and 1 billion connection weights. They trained it on a dataset of 10 million 200x200 pixel RGB images to learn 20,000 object categories. The training simulation ran for three days on a cluster of 1,000 servers totaling 16,000 CPU cores. Each instantiation of the network spanned 170 servers.”
Google.
“We demonstrate near-perfect weak scaling on a 16 rack IBM Blue Gene/Q (262144 CPUs, 256 TB memory), achieving an unprecedented scale of 256 million neurosynaptic cores containing 65 billion neurons and 16 trillion synapses.”
TrueNorth, part of the IBM SyNAPSE project.
[11, 12]
25. Data processing pipeline
● Whole lifecycle of data
● Data processing
● Data stores
● Integration
● Distributed computing primitives
● Cluster managers and task schedulers
● Deployment, configuration management and DevOps
● Data analytics and machine learning
29. Apache Spark
● In-memory dataflow distributed data processing framework for both streaming and batch
● Distributes computation using a higher level API
● Load balancing
● Moves computation to data
● Fault tolerant
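A minimal example of the higher-level RDD API (the input path is a placeholder): a word count in which Spark splits the file into partitions, runs the per-partition work where the data lives, and keeps the cached result in memory for reuse.

import org.apache.spark.{SparkConf, SparkContext}

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCount").setMaster("local[4]"))

    // The file is split into partitions and processed where the data lives.
    val lines = sc.textFile("hdfs:///path/to/input.txt")   // placeholder path

    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1L))
      .reduceByKey(_ + _)   // partial aggregation per partition, then shuffle
      .cache()              // keep the result in memory for reuse across actions

    counts.take(10).foreach(println)
    sc.stop()
  }
}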
47. Muvr
● Classify finished and in-progress exercises
● Gather data for improved classification
● Predict next exercises
● Predict weights, intensity
● Design a schedule of exercises and improvements (personal trainer)
● Monitor exercise quality
48. Scaling model training
import org.apache.spark.SparkContext
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

val sc = new SparkContext("local[4]", "NN")
val data = ...
// Feed-forward network: input layer, two hidden layers (250 and 50 units), output layer.
val layers = Array[Int](inputSize, 250, 50, outputSize)
val trainer = new MultilayerPerceptronClassifier()
  .setLayers(layers)
  .setBlockSize(128)
  .setSeed(1234L)
  .setMaxIter(100)
val model = trainer.fit(data)
val result = model.transform(data)
result.select("prediction").collect().foreach(println)
val predictionAndLabels = result.select("prediction", "label")
val evaluator = new MulticlassClassificationEvaluator()
  .setMetricName("precision")
println("Precision: " + evaluator.evaluate(predictionAndLabels))
49. Scaling model training
● Deeplearning4j, Neon, TensorFlow on Spark
(Diagram: Model 1, Model 2 and Model 3 are trained in parallel and the best model is selected.)
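One way to realize this train-several-models-and-keep-the-best pattern with the Spark ML tuning API, sketched here under the assumption that the MultilayerPerceptronClassifier setup and training DataFrame (data, inputSize, outputSize) from the previous slide are in scope: a parameter grid plus CrossValidator fits one candidate model per parameter combination and returns the one with the best evaluator metric.

import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val mlp = new MultilayerPerceptronClassifier()
  .setSeed(1234L)
  .setMaxIter(100)

// One candidate model per parameter combination; here three network shapes.
val paramGrid = new ParamGridBuilder()
  .addGrid(mlp.layers, Array(
    Array(inputSize, 250, 50, outputSize),
    Array(inputSize, 100, outputSize),
    Array(inputSize, 500, 100, outputSize)))
  .build()

val cv = new CrossValidator()
  .setEstimator(mlp)
  .setEvaluator(new MulticlassClassificationEvaluator())
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)

// Trains all candidate models and keeps the best one by the evaluator's metric.
val bestModel = cv.fit(data).bestModel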
54. val events = sc.eventTable().cache().toDF()
val lr = new LinearRegression()
val pipeline = new Pipeline().setStages(Array(
  new UserFilter(), new ZScoreNormalizer(),
  new IntensityFeatureExtractor(), lr))
val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.1, 0.01))
  .addGrid(lr.fitIntercept, Array(true, false))
  .build()
getEligibleUsers(events, sessionEndedBefore)
  .map { user =>
    val trainValidationSplit = new TrainValidationSplit()
      .setEstimator(pipeline)
      .setEvaluator(new RegressionEvaluator)
      .setEstimatorParamMaps(paramGrid)
    // Fit only on the given user's events.
    val model = trainValidationSplit.fit(
      events,
      ParamMap(ParamPair(userIdParam, user)))
    val testData = ... // Prepare test data.
    val predictions = model.transform(testData)
    submitResult(user, predictions, config)
  }
55. Queries and analytics
val events: RDD[(JournalKey, Any)] = sc.eventTable().cache()
val exerciseDeviations = events
  .filterClass[EntireResistanceExerciseSession]
  .flatMap(_.deviations)
// DataFrame view of the deviations, registered so the SQL query below can see it.
import sqlContext.implicits._
val exerciseDeviationsDF = exerciseDeviations.toDF()
exerciseDeviationsDF.registerTempTable("exerciseDeviations")
val deviationsFrequency = sqlContext.sql(
  """SELECT planned.exercise, hour(time), COUNT(1)
     FROM exerciseDeviations
     WHERE planned.exercise = 'bench press'
     GROUP BY planned.exercise, hour(time)""")
val deviationsFrequency2 = exerciseDeviationsDF
  .where(exerciseDeviationsDF("planned.exercise") === "bench press")
  .groupBy(
    exerciseDeviationsDF("planned.exercise"),
    exerciseDeviationsDF("time"))
  .count()
val deviationsFrequency3 = exerciseDeviations
  .filter(_.planned.exercise == "bench press")
  .groupBy(d => (d.planned.exercise, d.time.getHours))
  .map(d => (d._1, d._2.size))
56. Clustering
def toVector(user: User): mllib.linalg.Vector =
  Vectors.dense(
    user.frequency,
    user.performanceIndex,
    user.improvementIndex)
val events: RDD[(JournalKey, Any)] = sc.eventTable().cache()
val users: RDD[User] = events.filterClass[User]
val kmeans = new KMeans()
  .setK(5)
  .set...
val clusters = kmeans.run(users.map(toVector))
57. Recommendations
val events: RDD[(JournalKey, Any)] = sc.eventTable().cache()
val ratings = events
  .filterClass[EntireResistanceExerciseSession]
  .flatMap(session =>
    session.sets.flatMap(set =>
      set.sets.map(
        exercise => (session.id.id, exercise.exercise))))
  .groupBy(e => e)
  .map(g =>
    Rating(normalize(g._1._1), normalize(g._1._2),
      normalize(g._2.size)))
val model = new ALS().run(ratings)
val predictions = model.predict(recommend)
(Table: sparse user × exercise rating matrix over bench press, bicep curl and dead lift, with example ratings for users 1-4.)
58. Graph analysis
val events: RDD[(JournalKey, Any)] = sc.eventTable().cache()
val connections = events.filterClass[Connections]
val vertices: RDD[(VertexId, Long)] =
  connections.map(c => (c.id, 1L))
val edges: RDD[Edge[Long]] = connections
  .flatMap(c => c.connections
    .map(Edge(c.id, _, 1L)))
val graph = Graph(vertices, edges)
val ranks = graph.pageRank(0.0001).vertices