2. What is Luigi
Luigi is a workflow engine
If you run 10,000+ Hadoop jobs every day, you need one
If you play around with batch processing just for fun, you want one
It doesn’t help you with the code – that’s what Scalding, Pig, and other tools are good for
It helps you with the plumbing of connecting lots of tasks into complicated pipelines, especially if those tasks run on Hadoop
3. What do we use it for?
Music recommendations
A/B testing
Top lists
Ad targeting
Label reporting
Dashboards
… and a million other things!
4. Currently running 10,000+ Hadoop jobs every day
On average, a Hadoop job is launched every 10 seconds
There are 2,000+ Luigi tasks in production
11. Ability to resume matters
When you are developing something interactively, you will try and fail a lot
Failures will happen, and you want to resume once you’ve fixed the problem
You want the system to figure out exactly what it has to re-run, and nothing else
Atomic file operations are crucial for the ability to resume
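Luigi’s file system targets get this property by staging writes and moving them into place in one step. A rough stdlib-only sketch of the idea (not Luigi’s actual implementation; `atomic_write` is a made-up helper name):

```python
import os
import tempfile

def atomic_write(path, data):
    """Stage the output in a temp file, then rename it into place.

    os.replace() is atomic on POSIX filesystems, so a crashed run
    leaves either the complete old file or no file at the final path,
    never a half-written one -- which is what makes resuming safe.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```

A scheduler can then use “does the output file exist?” as the completeness check: if the file is there, the task ran to completion and can be skipped on re-run.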
14. Generalization matters
You should be able to re-run your entire pipeline with a new value for a parameter
Command line integration means you can run interactive experiments
15. … now we’re getting something

$ python run_everything.py --date-first 2014-01-01 --date-last 2014-01-31 --n-trees 200
18. A lot of real-world data pipelines are a lot more complex
The ideal framework should make it trivial to build up big data pipelines where dependencies are non-trivial (e.g. depend on date algebra)
19. So I started thinking
Wanted to build something like GNU Make
20. What is Make and why is it pretty cool?
Build reusable rules
Specify what you want to build, then backtrack to find out what you need in order to get there
Reproducible runs
# the compiler: gcc for C programs, define as g++ for C++
CC = gcc

# compiler flags:
#  -g     adds debugging information to the executable file
#  -Wall  turns on most, but not all, compiler warnings
CFLAGS = -g -Wall

# the build target executable:
TARGET = myprog

all: $(TARGET)

$(TARGET): $(TARGET).c
	$(CC) $(CFLAGS) -o $(TARGET) $(TARGET).c

clean:
	$(RM) $(TARGET)
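The “backtrack from the goal” behavior is the part worth stealing. A toy sketch of it in Python (the `build` function and rule table are hypothetical, mirroring the Makefile above):

```python
def build(target, rules, done=None, order=None):
    """Make-style resolution: to build `target`, first recursively
    build everything it depends on, visiting each node only once."""
    if done is None:
        done, order = set(), []
    if target in done:
        return order
    deps, action = rules.get(target, ([], None))
    for dep in deps:
        build(dep, rules, done, order)
    if action is not None:
        action()  # run the rule's command once its deps are ready
    done.add(target)
    order.append(target)
    return order

# Hypothetical rule table mirroring the Makefile above:
# target -> (dependencies, command)
rules = {
    "all":    (["myprog"], None),
    "myprog": (["myprog.c"],
               lambda: print("gcc -g -Wall -o myprog myprog.c")),
}
```

Asking for "all" backtracks through "myprog" to "myprog.c", so the build order falls out of the dependency declarations rather than being scripted by hand. (A real Make also skips targets whose outputs are newer than their inputs; that check is omitted here.)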
21. We want something that works for a wide range of systems
We need to support lots of systems
“80% of data science is data munging”
22. Data processing needs to interact with lots of systems
Need to support practically any type of task:
Hadoop jobs
Database dumps
Ingest into Cassandra
Send email
SCP file somewhere else
23. My first attempt: builder
Use XML config to build up the dependency graph!
25. Dependencies need code
Pipelines deployed in production often define dependencies between tasks in nontrivial ways:

Recursion (and date algebra):
  BloomFilter(date=2014-05-01)
  BloomFilter(date=2014-04-30)
  Log(date=2014-04-30)
  Log(date=2014-04-29)
  ...

Date algebra:
  Toplist(date_interval=2014-01)
  Log(date=2014-01-01)
  Log(date=2014-01-02)
  ...
  Log(date=2014-01-31)

Enum types:
  IdMap(type=artist)  IdMap(type=track)
  IdToIdMap(from_type=artist, to_type=track)

… and many other cases
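These patterns are easy to express once dependencies are plain code. A stdlib-only sketch of the two date patterns above (task names rendered as strings; the helper names are made up for illustration):

```python
import datetime

def bloom_filter_requires(date):
    """Recursion: each day's BloomFilter depends on the previous
    day's BloomFilter plus the previous day's Log."""
    prev = date - datetime.timedelta(days=1)
    return ["BloomFilter(date=%s)" % prev, "Log(date=%s)" % prev]

def toplist_requires(first_day, days):
    """Date algebra: a monthly Toplist fans out to one Log task
    per day in the interval."""
    return ["Log(date=%s)" % (first_day + datetime.timedelta(days=i))
            for i in range(days)]
```

In Luigi this is what a task’s requires() method computes, except it returns task objects instead of strings.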
26. Don’t ever invent your own DSL
“It’s better to write domain-specific code in a general-purpose language than to write general-purpose code in a domain-specific language” – unknown author

Oozie is a good example of how messy it gets
27. 2009: builder2
Solved all the things I just mentioned
- Dependency graph specified in Python
- Support for arbitrary tasks
- Error emails
- Support for lots of common data plumbing stuff: Hadoop jobs, Postgres, etc
- Lots of other things :)
31. What were the good bits?
Build up dependency graphs and visualize them
Going from development to deployment is a non-event
Built-in HDFS integration, but decoupled from the core library

What went wrong?

Still too much boilerplate
Pretty bad command line integration
34. Luigi – History at Spotify
Late 2011: Elias Freider and I built it, released it into the wild at Spotify, and people started using it
“The Python era”

Late 2012: Open sourced it
Early 2013: First known company using it outside of Spotify: Foursquare
35. Luigi is your friendly plumber
Simple dependency definitions
Emphasis on Hadoop/HDFS integration
Atomic file operations
Data flow visualization
Command line integration
37. Luigi Task – breakdown
The business logic of the task
Where it writes output
What other tasks it depends on
Parameters for this task
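Put together, the four pieces look roughly like the classic example from the Luigi documentation (a sketch, not production code; the names and paths match the command-line transcript on slide 38):

```python
import luigi

class SomeOtherTask(luigi.Task):
    # Parameters for this task
    param = luigi.IntParameter(default=42)

    # Where it writes output
    def output(self):
        return luigi.LocalTarget('/tmp/foo/baz-%d.txt' % self.param)

    # The business logic of the task
    def run(self):
        with self.output().open('w') as f:
            f.write('hello, world\n')

class MyTask(luigi.Task):
    # Parameters for this task
    param = luigi.IntParameter(default=42)

    # What other tasks it depends on
    def requires(self):
        return SomeOtherTask(param=self.param)

    # Where it writes output
    def output(self):
        return luigi.LocalTarget('/tmp/foo/bar-%d.txt' % self.param)

    # The business logic of the task
    def run(self):
        with self.output().open('w') as f:
            f.write('hello, world\n')

if __name__ == '__main__':
    luigi.run()
```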
38. Easy command line integration
So easy that you want to use Luigi for it
$ python my_task.py MyTask --param 43
INFO: Scheduled MyTask(param=43)
INFO: Scheduled SomeOtherTask(param=43)
INFO: Done scheduling tasks
INFO: [pid 20235] Running SomeOtherTask(param=43)
INFO: [pid 20235] Done SomeOtherTask(param=43)
INFO: [pid 20235] Running MyTask(param=43)
INFO: [pid 20235] Done MyTask(param=43)
INFO: Done
INFO: There are no more tasks to run at this time
INFO: Worker was stopped. Shutting down Keep-Alive thread
$ cat /tmp/foo/bar-43.txt
hello, world
$
39. Let’s go back to the example
Log d, Log d+1, ..., Log d+k-1
  → Subsample and extract features → Subsampled features
  → Train classifier → Classifier
  → Look at the output
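The diagram maps onto Luigi tasks fairly directly. A hedged sketch (class names, paths, and parameters are invented for illustration, and the actual processing is elided):

```python
import datetime
import luigi

class Log(luigi.ExternalTask):
    """One day of logs, produced outside this pipeline."""
    date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget('data/log-%s.tsv' % self.date)

class SubsampledFeatures(luigi.Task):
    date_first = luigi.DateParameter()
    k = luigi.IntParameter(default=7)  # number of days of logs

    def requires(self):
        # One Log task per day: date_first .. date_first + k - 1
        return [Log(date=self.date_first + datetime.timedelta(days=i))
                for i in range(self.k)]

    def output(self):
        return luigi.LocalTarget(
            'data/features-%s-%d.tsv' % (self.date_first, self.k))

    def run(self):
        with self.output().open('w') as out:
            for log in self.input():
                with log.open('r') as f:
                    ...  # subsample and extract features

class TrainClassifier(luigi.Task):
    date_first = luigi.DateParameter()

    def requires(self):
        return SubsampledFeatures(date_first=self.date_first)

    def output(self):
        return luigi.LocalTarget('data/classifier-%s.bin' % self.date_first)

    def run(self):
        ...  # fit a model on self.input(), write it to self.output()
```

Asking for TrainClassifier(date_first=d) pulls in the feature extraction and the k days of logs automatically, Make-style.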
45. Let’s make it more complicated – cross validation
Log d, Log d+1, ..., Log d+k-1
  → Subsample and extract features → Subsampled features
  → Train classifier → Classifier
Log e, Log e+1, ..., Log e+k-1
  → Subsample and extract features → Subsampled features
Both branches feed into: Cross validation
57. Process synchronization
[Diagram: two Luigi workers, each running its own tasks (A, B, C, F), all coordinated through the Luigi central planner]
The central planner prevents the same task from being run simultaneously by two workers, but all execution is done by the workers.
58. Luigi is a way of coordinating lots of different tasks
… but you still have to figure out how to implement and scale them!
60. Built-in support for HDFS & Hadoop
At Spotify we’re abandoning Python for batch processing tasks, replacing it with Crunch and Scalding. Luigi is a great glue!

Our team, the Lambda team: 15 engineers, running 1,000+ Hadoop jobs daily, with 400+ Luigi tasks in production.

Our recommendation pipeline is a good example: Python M/R jobs, ML algorithms in C++, Java M/R jobs, Scalding, ML stuff in Python using scikit-learn, imports into Cassandra, imports into Postgres, email reports, etc.
61. The one time we accidentally deleted 50TB of data
We didn’t have to write a single line of code to fix it – Luigi rescheduled thousands of tasks and ran for 3 days
63. The missing parts
Execution is tied to scheduling – you can’t schedule something to run “in the cloud”
Visualization could be a lot more useful
There’s no built-in scheduling – you have to rely on crontab
These are all things we have in the backlog
67. Luigi implements some core beliefs
The #1 focus is on removing all boilerplate
The #2 focus is to be as general as possible
The #3 focus is to make it easy to go from test to production