Companies like Buffer, SeatGeek, and Asana aren’t just talking about the value of data; they’re building data infrastructure that can actually deliver it. Join this 45-minute webinar to learn why these companies are investing in data and what you need to know to keep up.
Good afternoon, everyone! Thanks so much for joining us today. I’m going to introduce you to my co-host in just a second, but first, let me run through just a few housekeeping details.
We have a lot on the agenda for today. The core of our presentation is going to focus on how companies like yours are solving their data infrastructure challenges. We’re going to cover the challenges engineers should expect around data integration, why Amazon Redshift is quickly becoming the data warehouse of choice, cultural barriers to building a data-driven company, and a lot more.
The first thing we’re going to cover is data infrastructure, or the actual architecture of legacy and modern data pipelines.
For the last 30 years or so, really since the inception of modern databases, data warehousing has been the standard model to aggregate data and provide business-directed analytics. Data is extracted from various sources (databases, third-party applications, flat files, etc.), transformed into a predefined model, then loaded into the data warehouse. This ETL process results in data cubes and data silos, where analytics are separated by key groupings for various departments, such as marketing, product, and sales.

That creates a few issues that are fundamentally prohibitive to creating a data-driven organization. First, it’s very resource-intensive (and expensive) to manage all of the transformations and data loading. Second, it introduces latency into the analytics process. End users only have access to pre-defined metrics, which are typically too broad or inflexible to guide nimble decision making. This means that end users aren’t really getting any actionable insights from these metrics; they’re just looking at high-level analysis. Third, it restricts drilling. If an end user finds an interesting piece of information (say sales accelerated drastically for a certain user age group) and wants to know why, that end user needs to make another data request of the ETL or IT team, who will then take some time to return the request. This latency constrains end users from making data-driven decisions. These were commonly recognized problems. So nowadays, as Shaun was mentioning, modern tech companies have reworked this process.
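The ETL flow just described can be sketched in a few lines. This is a toy illustration, not any particular tool: the source rows, the transformation, and the "daily_sales_by_region" cube are all hypothetical.

```python
# Minimal ETL sketch: extract from sources, transform into a
# predefined model, then load into the warehouse. All names and
# data here are illustrative.

def extract():
    # In practice: databases, third-party APIs, flat files.
    return [
        {"order_id": 1, "region": "east", "amount": 120.0},
        {"order_id": 2, "region": "west", "amount": 80.0},
        {"order_id": 3, "region": "east", "amount": 50.0},
    ]

def transform(rows):
    # Aggregate into the predefined model *before* loading --
    # end users will only ever see this pre-computed cube.
    cube = {}
    for row in rows:
        cube[row["region"]] = cube.get(row["region"], 0.0) + row["amount"]
    return cube

def load(cube, warehouse):
    warehouse["daily_sales_by_region"] = cube

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse["daily_sales_by_region"])  # {'east': 170.0, 'west': 80.0}
```

Note that the row-level detail is gone by the time the data reaches the warehouse, which is exactly why drill-down requests have to go back to the ETL team.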
Nowadays, companies are collecting more data than ever before. Additionally, database technology has witnessed significant advances in the last several years: databases themselves are now capable of performing sophisticated analysis very quickly. This removes the need for data silos and data cubes; all analytics can be performed directly on the central database. What this means is that it now makes sense to shift the burden of complex transformations to the end of the pipeline, to the BI tool, where transformations can be performed on the fly, at query time.
There are several benefits to this approach, some of which I mentioned a minute ago but are worth repeating. First, you no longer require a huge, resource-intensive engineering or ETL team to move all of your data, so it’s much cheaper on the resource side. Second, technical users can pull data in a language they’re used to, SQL, and if you have a modeling layer, like Looker provides, then users can actually query the data directly from the UI, without any technical knowledge. Transformations aren’t being done by engineers on the backend; they’re being performed as the user pulls the data, so they’re much easier to repeat and easier to understand. Lastly, this allows you to audit transformations, so your users understand the components behind an analysis; they’ll understand how a metric is defined. And Shaun has a few examples of this in practice.
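The transform-at-query-time idea is easy to demonstrate. Below is a sketch using an in-memory SQLite database as a stand-in for a warehouse like Redshift; the orders schema and data are hypothetical.

```python
import sqlite3

# ELT sketch: load raw rows untransformed, then let the
# transformation happen at query time. The "orders" schema is
# illustrative, standing in for a warehouse like Redshift.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id INT, region TEXT, amount REAL)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 120.0), (2, "west", 80.0), (3, "east", 50.0)],
)

# The aggregation that ETL would have pre-computed is now an
# on-the-fly query -- easy to change, repeat, and audit, and the
# raw rows stay available for drill-down.
rows = db.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 170.0), ('west', 80.0)]
```

Because the raw rows are still in the warehouse, a follow-up question is just another query rather than a ticket to the ETL team.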
Data engineering has gone from being a clumsy, multi-year project to something with geek cred. Over the past year we’ve watched as one company after the next shared their “how we built our data infrastructure” blog posts. Yes, even Looker. We were really interested in the details behind all these projects, so we did a “meta-analysis” where we looked at how these companies solved core data engineering challenges.
We looked at Zulily,
SeatGeek, Buffer, Asana, and many more.
Some of these companies (like Netflix and Spotify) are building data products, such as recommendation engines. That stack can look slightly different. For this event, we’re going to focus on companies who are building data infrastructure for analytics. And for these companies, what we saw is that the process looks very much like what Dillon was just describing. First, they extract data from a variety of sources. Then they load it into the data warehouse. Then they do transformations on top of that.
Let’s start at the first part of the conversation. Extract & Load, or more simply, data integration.
And just to clarify, the reason this step is so important is because all future insights depend on it. Here are some of the use cases that the Asana team laid out.
“It’s difficult work – but an absolute requirement of great intelligence.”
Here are the most common data sources that we saw companies connecting to. Our analysis of how companies built their data infrastructure was based largely on blog posts (and some conversations) on the topic. One limitation there is that engineers tend to write these pieces fairly soon after completion of the project and there’s often the understanding that more data sources will be added on later. Asana built data connections to the most sources, but there’s an enormous amount of data that can be derived just from connecting ad spend to purchase history living in your production databases.
Now, for some audience participation, could you grab your mouse and fill in this poll? What top five data sources are a top priority for you to integrate and keep integrated?
While you’re filling in your answers, let me just say that data consolidation comes with its own special challenges. When Asana first started building their data infrastructure, they did it using Python scripts and MySQL. And if you’re just starting out, this can work for you too, but you will outgrow it eventually. I’m going to say more on that in a second, but first let’s take a look at the results.
So, in the Asana team’s own words, here are some of the challenges they faced during consolidation: doubts about data integrity due to a lack of monitoring and logging, difficulty distinguishing insights from bugs, and urgent fires when systems went down.
And this one is from MetaMarkets. Braintree’s team, meanwhile, said that deletes are nearly impossible to keep track of, you have to keep track of data that changed, and batch updates are slow and it’s difficult to know how long they’ll take.
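Braintree’s delete problem comes down to the fact that an incremental pull (for example, one keyed on an updated_at column) never sees a row that has vanished from the source. One common workaround is diffing primary-key snapshots; here is a simplified sketch with hypothetical data.

```python
# Sketch of why deletes are hard to track in batch replication:
# a deleted row simply never appears in an incremental pull, so
# a separate pass compares key snapshots between the source and
# the warehouse. All ids here are hypothetical.

def diff_keys(warehouse_ids, source_ids):
    """Return ids present in the warehouse but gone from the source."""
    return sorted(set(warehouse_ids) - set(source_ids))

warehouse_ids = [1, 2, 3, 4]
source_ids = [1, 3, 4]  # row 2 was deleted upstream

deleted = diff_keys(warehouse_ids, source_ids)
print(deleted)  # [2] -- these rows need a delete pass in the warehouse
```

Even this sketch hints at the batch-update pain in the quote: pulling a full key snapshot from a large source table is itself a slow operation whose duration is hard to predict.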
A big part of my job involves talking to people every day about their data infrastructure. These posts touch on some of the problems you can expect, but keep in mind -- these people are the successful ones. I’ve been on calls with many a frustrated engineer throwing in the towel on their data infrastructure projects after 1 year at the task. Data consolidation is hard. Here are 7 of the core challenges.
Early last month we released a SaaS product designed to solve this problem, called Pipeline. It takes data from any number of integrations, and that data flows into a data warehouse with super low latency. We’re aggressively releasing new integrations each month, so if you need an integration you don’t see here today, let us know!
If you want to learn more about this, stick around at the end for a demo.
The next step in the process is data warehousing. Hands down the top pick for warehousing was Redshift.
Among the companies that we looked at, Redshift was the most popular choice for an analytics warehouse.
The most common reason? Speed. People are seeing dramatic improvements in query time using Redshift.
Asana said that queries that were taking hours now take a few seconds. Similarly, SeatGeek had a critical query that took 20 minutes and now takes half a minute in Redshift.
Here are the results of Airbnb’s tests, which show performance in both query time and cost.
Here’s some research from Periscope comparing Redshift vs. Postgres that shows similar performance gains.
And here is research from DiamondStream showing how much better their internal dashboards performed when built on Redshift vs. MS SQL. I think it’s this final reason why Looker is such a big fan of Redshift and recommends it to their clients.
Right, thanks Shaun... So earlier I talked a bit about the structural differences between old data architecture and modern data architecture. Now I’m going to elaborate a bit on how that architecture impacts business intelligence and analytics workflows.
This slide shows workflows with the legacy architecture I described earlier. As a reminder, with legacy architecture, each department is working in silos, all serviced by a central IT or analyst team. This is fundamentally prohibitive to a data-driven culture for a few reasons.

First, it’s extremely resource-intensive for the central data team to service the needs of their business users. Second, it creates a bottleneck in the analytics process. You’ll see that the arrows are flowing away from the central data team, and that’s for a specific reason: the data team will provide pre-determined metrics for various departments, then rerun and distribute those metrics periodically. These metrics are typically overly broad and not actionable. And users often do have further questions about the analysis; how do you know what questions to ask about the data unless you’ve seen the data already? If a user has a further question, they need to submit a request to the data team, who may take a few days to turn it around. This latency restricts end users from making quick, informed business decisions based on their data. Plus, in most companies there is typically a hierarchy to who receives data. The executive team can get all the data they want, while requests from sales reps, marketing managers, etc. are pushed to the back of the line. These groups rarely have the ability to make strategic decisions based on the analysis they request. Lastly, this model results in disparate reporting. If 5 different departments request the same metric from 5 different database analysts, it’s highly likely that those analysts will have differing ideas about the appropriate way to calculate the metric, especially when you get into the more sophisticated stuff, things like affinity analysis (if I buy X, what is the likelihood I buy Y?), where there are a few statistically defensible ways to calculate the metric.
In practice, it’s very common for large organizations to have non-unified definitions, which leads to headaches, data chaos, and an inability to make decisions based on data
One of the factors that contributes to these workflow issues, which is the last point I touched on, is the difficulty of consistently defining metrics across a company. Part of this is because of the nature of SQL, the de facto language for querying databases. SQL can be easy to write, but difficult to read and audit. If you give 10 analysts the same metric, you’ll very likely get 10 different queries, some of which may yield the same results and some of which may not. In practice, this often results in data analysts recycling and slightly modifying old queries without ever really understanding the inner workings of the query. This then jeopardizes the integrity of the data, which makes it difficult to consistently interpret results.
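That “10 analysts, 10 different queries” problem is easy to reproduce. The two queries below compute the same revenue metric in structurally different ways, and reading them side by side it isn’t obvious they agree (an SQLite sketch with a hypothetical schema).

```python
import sqlite3

# Two structurally different queries for the "same" metric --
# total revenue from completed orders. Both are valid SQL; only by
# running them (or auditing them carefully) do you learn whether
# they agree. Schema and data are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INT, status TEXT, amount REAL)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "complete", 100.0), (2, "refunded", 40.0), (3, "complete", 60.0)],
)

# Analyst 1 filters in the WHERE clause...
q1 = "SELECT SUM(amount) FROM orders WHERE status = 'complete'"
# ...analyst 2 filters inside the aggregate.
q2 = """SELECT SUM(CASE WHEN status = 'complete' THEN amount ELSE 0 END)
        FROM orders"""

print(db.execute(q1).fetchone()[0], db.execute(q2).fetchone()[0])  # 160.0 160.0
```

Here the two happen to match; a third analyst who decides refunds should be netted out would produce a third, different number for “revenue”, and nothing in the SQL itself flags the disagreement.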
How do we solve this issue of one-off queries and siloed reporting? We create a data model as an intermediary. All definitions of metrics, and all data transformations, are defined in one place, where all users can access and understand them. Now you don’t need those 10 analysts; you only need 1-2 who monitor the modeling layer, and you can be confident all users are working off of the same definitions and interpretations of the results. You can also link together data from different sources, so you can tie Salesforce, Marketo, and Zendesk data together to get a comprehensive view of your customer. This allows us to maintain “data governance”, which is a term you probably hear a lot lately. So, how does this modeling layer impact workflows?
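Before that, the core of the modeling-layer idea can be sketched as metric definitions stored once and compiled into SQL on demand. This toy code is not LookML or any real product’s syntax, just an illustration of the concept; every name in it is hypothetical.

```python
# Toy modeling layer: every metric is defined exactly once, and
# queries are generated from the shared definition -- so every
# analyst gets the same SQL for the same metric. Illustrative
# only; not any real tool's syntax.

METRICS = {
    "total_revenue": {"sql": "SUM(amount)", "table": "orders"},
    "order_count":   {"sql": "COUNT(*)",    "table": "orders"},
}

def compile_query(metric, group_by=None):
    m = METRICS[metric]
    select = f"{m['sql']} AS {metric}"
    if group_by:
        return (f"SELECT {group_by}, {select} "
                f"FROM {m['table']} GROUP BY {group_by}")
    return f"SELECT {select} FROM {m['table']}"

print(compile_query("total_revenue", group_by="region"))
# SELECT region, SUM(amount) AS total_revenue FROM orders GROUP BY region
```

Because the definition lives in one place, changing what “total_revenue” means changes it for every dashboard and every user at once, which is the auditability point from a moment ago.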
This slide depicts BI and analysis workflows with modern architecture, which creates a truly data-driven environment. All users have equal access to the data through a UI; they don’t need to know SQL. So now sales, marketing, finance, and customer success, teams that previously could not directly access data, have the ability to explore their database in full detail. Since everyone is looking at the same numbers and reports, business users can collaborate and facilitate meaningful conversations based on shared insights. Business users can make informed strategic decisions on the fly, which results in tangible, significant competitive advantages. So, how do you set up this kind of architecture?
I think a good example of this is one of our customers, Infectious Media, who offer digital advertising for a myriad of Fortune companies. With Looker, their sales optimization team has the ability to see, in real time, how various advertising campaigns are performing across every website and publisher. If a certain type of website is driving the most clicks or conversions, the optimization team can immediately determine why, then redirect future campaign efforts towards those specific websites or publishers, and perhaps new, similar ones. In a world where advertisements sometimes last only a week or two, the ability to constantly iterate on and refine campaign strategy results in tangible differences in top-line sales. This represents the most significant competitive advantage a company in this space can possess... This model is required for a company to survive.
Now that we understand the benefits, I’ll explain how the setup of these modern infrastructures is easier than ever, and I’ll illustrate this with an example using RJ Pipeline.
Say you’re a company that collects data from a number of sources, such as third-party applications. Rather than needing to perform complex transformations (as with legacy architecture), you can dump all of your data directly into a centralized location using a middleware tool such as RJ Pipeline. This completely centralizes all of your data, and prepares it for analytics, with a few clicks; no need for heavy engineering resources and workloads. Once the data is centralized, you can quickly add a tool with a modeling layer to help distribute data to all of your end users (again, the modeling layer is key here). Working with a tool like Looker, for example, we have an offering called Looker Blocks, which is essentially pre-templated code for your modeling layer for all sorts of third-party applications and types of analysis. These Blocks can be copied into your data model, so now even most of the actual data model development is initially taken care of for you. The result is going from having siloed data in several disparate applications with unequal access for users, to having data centralized in a modern database, with a full analytics suite on top, that can be accessed by any user. What would have taken, quite literally, months of intensive engineering effort is now accomplished in 1, 2, or 3 weeks, which is pretty astounding. That time-to-value from your data is something we’ve never really seen before in the data space.
How to Build a Data-Driven Company: From Infrastructure to Insights
What you’re going to learn
● How top engineering organizations are building their data infrastructure
● The 7 core challenges of data integration
● Why companies like Asana, Buffer, and SeatGeek choose Redshift for their analytics warehouse
● ...and much more!
Then and Now
The traditional approach: ETL (Dillon)
[Diagram: END USER → BI TEAM → ETL TEAM → EDW TEAM; heavy transformation, restricted Q&A, OLAP / silos]
How companies are doing it today: ELT
[Diagram stages: Transform at Query, Viz & Exploration]
Benefits of this approach
1. Redshift is performant enough to handle most
2. Users prefer performing transformations in a language they already use (SQL) or with UI
3. Transformations are much simpler, more transparent
4. Performing transformations alongside raw data is great
Data infrastructure has geek cred (Shaun)
What the stack looks like (Shaun)
Quick poll (Shaun)
What top five data sources are a top priority for you to integrate and keep integrated?
● production databases
● error logs
● email marketing
● a/b testing
“A year ago, we were facing a lot of stability problems with our data processing. When there was a major shift in a graph, people immediately questioned the data integrity. It was hard to distinguish interesting insights from bugs. Data science is already an art so you need the infrastructure to give you trustworthy answers to the questions you ask. 99% correctness is not good enough. And on the data infrastructure team, we were spending a lot of time churning on fighting urgent fires, and that prevented us from making much long-term progress. It was painful.”
- Marco Gallotta, Asana, How to Build Stable, Accessible Data Infrastructure at a Startup
“Our story would end here if real-time processing were perfect. But it’s not: some events can come in days late, some time ranges need to be re-processed after initial ingestion due to code changes or data revisions, various components of the real-time pipeline can fail, and so on.”
- Gian Merlino, MetaMarkets, Building a Data Pipeline That Handles Billions of Events in Real-Time
7 core challenges of data integration
● Connections: Every API is a unique and special snowflake
● Accuracy: Ordering data on a
● Latency: Large object data stores (Amazon S3, Redshift) are optimized for batches, not streams
● Scale: Data will grow exponentially as your company grows
● Flexibility: You’re interacting with systems you don’t control
● Monitoring: Notifications for expired credentials, errors, notifications of disruptions
● Investment in ongoing
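The flexibility and monitoring challenges above often reduce to handling transient failures from systems you don’t control. A minimal sketch of retrying with exponential backoff; the fetch function and its failure mode are hypothetical.

```python
import time

# Retry a flaky upstream call with exponential backoff. This is
# one common answer to "you're interacting with systems you don't
# control" -- transient errors are retried with growing delays.

def with_backoff(fetch, max_tries=4, base_delay=0.01, sleep=time.sleep):
    for attempt in range(max_tries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_tries - 1:
                raise  # out of retries: surface the error for monitoring
            sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Hypothetical flaky API: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream error")
    return {"rows": 42}

print(with_backoff(flaky_fetch))  # {'rows': 42}
```

In a real integration the final `raise` is where alerting hooks in, since an exhausted retry budget usually means expired credentials or a genuine outage rather than a transient blip.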
Or... try Pipeline (Shaun)
Ad Platforms / Customer Support / Web Data
Test 1 (3 billion rows of data): 28 minutes vs. <6 minutes
Test 2 (two joins with millions of rows): 182 seconds vs. 8 seconds
Cost: $1.29/hour/node vs. $0.85/hour/node
A broken model (Dillon)
● Feedback loop is broken
● Disparate reporting
● Non-unified decision
● Reusability is lost
Constraints of SQL (Dillon)
● SQL is versatile, but shares the same flavor as write-only languages such as Perl
● Can write but not read
● Promotes one-off, piecemeal analysis
The critical multiplier: modeling (Dillon)
Any SQL Data Warehouse
● What’s our most
● How does our Q4
● Who are our healthiest / happiest
● Data access
● Uniform definitions
● A Shared View
● Analytical Speed