3. Agenda
❖ A brief introduction to Qubole
❖ Apache Airflow
❖ Operational Challenges in managing an ETL
❖ Alerts and Monitoring
❖ Quality Assurance in ETLs
4. About Qubole Data Service
❖ A self-service platform for big data analytics.
❖ Delivers best-in-class Apache tools such as Hadoop, Hive, Spark, etc., integrated into an enterprise-feature-rich platform optimized to run in the cloud.
❖ Enables users to focus on their data rather than the platform.
5. Data Team @ Qubole
❖ Data Warehouse for Qubole
❖ Provides Insights and Recommendations to users
❖ Just Another Qubole Account
❖ Enabling data-driven features within QDS
6. Multi-Tenant Nature of the Data Team
[Diagram: Qubole runs multiple distributions, e.g. Distribution 1 (api.qubole.com) and Distribution 2 (azure.qubole.com), each feeding its own Data Warehouse.]
7. Apache Airflow For ETL
❖ Developer Friendly
❖ A rich collection of Operators, CLI utilities and UI to author and manage your Data Pipeline.
❖ Horizontally Scalable.
❖ Tight Integration With Qubole
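As a rough illustration of that integration, here is a minimal sketch of a daily DAG running a Hive query through the contrib QuboleOperator (Airflow 1.x import paths, from the era of this talk); the table names, cluster label, and connection id are placeholders, not taken from the deck:

```python
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.qubole_operator import QuboleOperator

dag = DAG(
    dag_id="example_qubole_etl",
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
)

load_events = QuboleOperator(
    task_id="load_events",
    command_type="hivecmd",  # execute a Hive command on the Qubole platform
    query="INSERT OVERWRITE TABLE dw.events SELECT * FROM staging.events",
    cluster_label="default",
    qubole_conn_id="qubole_default",
    dag=dag,
)
```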
9. Operational Challenges In The ETL World
❖ How to achieve continuous integration and deployment for ETLs?
❖ How to effectively manage configuration for ETLs in a multi-tenant environment?
❖ How do we make ETLs aware of Data Warehouse migrations?
12. Airflow Variables for ETL Configuration
❖ Stores the information as a key-value pair in Airflow.
❖ Extensive support like CLI, UI and API to manage the variables.
❖ Can be used from within the Airflow script as Variable.get("variable_name")
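For example, a minimal sketch of reading ETL configuration from Airflow Variables (the variable keys here are hypothetical):

```python
from airflow.models import Variable

# A plain string value
warehouse = Variable.get("warehouse_name")

# A JSON value deserialized into a dict, convenient for per-tenant ETL config
config = Variable.get("etl_config", deserialize_json=True)
target_table = config["target_table"]
```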
13. Warehouse Management
❖ A leaf out of Ruby on Rails: Active Record Migrations.
❖ Each migration is tagged and committed as a single commit to version control along with the ETL changes.
14. The PROCESS IS EASY
1. Fetch the current migration number from Airflow Variables.
2. Check out the target tag from version control.
3. Run any new relevant migrations.
4. Update the migration number.
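One way this loop could look in code, sketched under assumptions: the Variable key ("dw_migration_number") and the per-migration callables are hypothetical, not the deck's implementation:

```python
from airflow.models import Variable

def run_pending_migrations(migrations):
    """Apply migrations newer than the number recorded in Airflow Variables.

    `migrations` is a sorted list of (number, apply_fn) pairs checked out
    from version control at the target tag; `apply_fn` is a hypothetical
    callable that runs one migration against the warehouse.
    """
    current = int(Variable.get("dw_migration_number", default_var=0))
    for number, apply_fn in migrations:
        if number > current:
            apply_fn()
            # Record progress so a failed run can resume at the right point.
            Variable.set("dw_migration_number", number)
```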
15. Deployment
❖ Traditional deployment is too messy when multiple users are handling Airflow.
❖ Data Apps for ETL deployment.
❖ Provides a CLI option like <ETL_NAME> deploy -r <version_tag> -d <start_date>
Deployment flow (see the sketch after these steps):
1. Check out the Airflow template file from version control.
2. Read config values from Airflow and translate the config values.
3. Copy the final script file to the Airflow directory.
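A minimal sketch of those three steps, assuming the checkout has already happened and using jinja2 for the translation; the config key, the `<etl>.py.j2` template naming convention, and the DAGs path are assumptions for illustration:

```python
from pathlib import Path

from airflow.models import Variable
from jinja2 import Template

def deploy_etl(template_path, dags_dir="/usr/lib/airflow/dags"):
    # 1. Template file has already been checked out at the target tag.
    source = Path(template_path).read_text()

    # 2. Read config values from Airflow and translate them into the script.
    config = Variable.get("etl_config", deserialize_json=True)
    rendered = Template(source).render(**config)

    # 3. Copy the final script into the Airflow DAGs directory.
    #    .stem drops the trailing ".j2", e.g. "my_etl.py.j2" -> "my_etl.py".
    (Path(dags_dir) / Path(template_path).stem).write_text(rendered)
```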
19. IMPORTANCE OF DATA VALIDATION
❖ An application's correctness depends on the correctness of its data.
❖ Increase confidence in data by quantifying data quality.
❖ Correcting existing data can be expensive - prevention is better than cure!
❖ Stopping critical downstream tasks if the data is invalid.
20. TREND MONITORING
❖ Monitor dips, peaks, anomalies.
❖ Hard problem!
❖ Not real time.
❖ One size doesn’t fit all - Different ETLs manipulate data in different ways.
❖ Difficult to maintain.
22. Using Apache Airflow Check Operators
Approach:
❖ Extend the open-source Airflow check operator for queries running on the Qubole platform.
❖ Run data validation queries.
❖ Fail the operator if the validation fails.
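Conceptually the extension is small, because CheckOperator already handles the failing part. A simplified sketch (not the upstream implementation) of pointing the check at a different execution backend:

```python
from airflow.operators.check_operator import CheckOperator

class QuboleSqlCheckOperator(CheckOperator):
    """Simplified sketch: CheckOperator runs self.get_db_hook().get_first(sql)
    and fails the task if any value in the returned row is falsy. Supplying a
    hook that executes the query on Qubole is enough to run validation there.
    """

    def __init__(self, qubole_hook, **kwargs):
        super().__init__(**kwargs)
        # Any object exposing get_first(sql); a stand-in for the real
        # Qubole-backed hook used upstream.
        self._hook = qubole_hook

    def get_db_hook(self):
        return self._hook
```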
25. Enhancement 1: Compare Data Across Engines
Problem: Airflow check operators required pass_value to be defined before the ETL starts.
Use case: Validating data import logic.
Solution: Make pass_value an Airflow template field. This way it can be configured at run time; the pass value can be injected through multiple mechanisms once it is a template field.
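A sketch of what this enables, assuming a build that includes the templated pass_value (AIRFLOW-2228); the SQL, connection id, and the upstream XCom-producing task are illustrative:

```python
from airflow.operators.check_operator import ValueCheckOperator

row_count_check = ValueCheckOperator(
    task_id="compare_engines",
    sql="SELECT COUNT(*) FROM dw.events WHERE ds = '{{ ds }}'",
    # Templated: resolved when the task runs, not when the DAG is authored.
    pass_value="{{ task_instance.xcom_pull(task_ids='count_on_presto') }}",
    tolerance=0.05,  # tolerate small drift between engines
    conn_id="hive_default",
    # attach to a DAG as usual
)
```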
27. Enhancement 2: Validate Multiline Results
Problem: Apache Airflow check operators consider a single row for comparison.
Use case: Run group-by queries and compare each of the values against the pass_value.
Solution: QuboleCheckOperator adds a `results_parser_callable` parameter. The function it points to holds the logic to return the list of records on which the checks are performed.
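A sketch of how such a callable might look; the exact shape of the raw results handed to it is an assumption here (tab-separated lines), and the table and ids are placeholders:

```python
from airflow.contrib.operators.qubole_check_operator import QuboleCheckOperator

def parse_group_by_results(results):
    # Turn the raw command output into one record per group so the check
    # applies to every account's count rather than a single row.
    records = []
    for line in results.splitlines():
        account_id, row_count = line.split("\t")
        records.append(int(row_count))
    return records

per_account_check = QuboleCheckOperator(
    task_id="per_account_counts",
    command_type="hivecmd",
    query="SELECT account_id, COUNT(*) FROM dw.usage GROUP BY account_id",
    results_parser_callable=parse_group_by_results,
)
```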
30. ETL #1: Data Ingestion
Imports data from RDS tables into the Data Warehouse for analysis purposes.
Historical issues: mismatch with source data
1. Data duplication
2. Data missing for certain durations
Checks employed:
- Count comparison across the two data stores: source and destination.
How checks have helped us:
- Verify and rectify the upsert logic (which is not a plain copy of RDS).
PS: Runtime fetching of expected values!
31. ETL #2: Data Transformation
Repartitions a day's worth of data into hourly partitions.
Historical issues:
1. Data ending up in a single partition field (default Hive partition).
2. Wrong ordering of values in fields.
Checks employed:
1. The number of partitions created is 24 (one for every hour).
2. Check the value of the critical field, "source".
How checks have helped us: verify and rectify the repartitioning logic.
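The first of those checks could look roughly like this with the contributed operator; the table name and task id are placeholders:

```python
from airflow.contrib.operators.qubole_check_operator import QuboleValueCheckOperator

hourly_partitions = QuboleValueCheckOperator(
    task_id="check_hourly_partitions",
    command_type="hivecmd",
    query=(
        "SELECT COUNT(DISTINCT hour) FROM dw.events_hourly "
        "WHERE ds = '{{ ds }}'"
    ),
    pass_value=24,  # one partition per hour of the day
)
```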
32. ETL #3: Cost Computation
Computes Qubole Compute Unit Hours (QCUH).
Situation: We are narrowing down the granularity of cost computation from daily to hourly.
How checks have helped: monitoring the new data and alerting in case of a mismatch between the trends of old and new data.
33. ETL #4: Data Transformation
Parses customer queries and outputs table-usage information.
Historical issues:
1. Data missing for a customer account.
2. Data loss due to different syntaxes across engines.
3. Data loss due to query syntax changes across different versions of data engines.
Checks employed:
1. Group by account ids; if any of them is 0, raise an alert.
2. Group by engine type and account ids; if the error % is high, raise an alert.
How checks have helped us:
- Insights into the amount of data loss.
- Provides feedback that helped us make syntax checking more robust.
34. FEATURES
❖ Ability to plug in different alerting mechanisms.
❖ Dependency management and failure handling.
❖ Ability to parse the output of the assert query in a user-defined manner.
❖ Run-time fetching of the pass_value against which the comparison is made.
❖ Ability to generate failure/success reports.
35. LESSONS LEARNT
❖ One size doesn't fit all: estimation of data trends is a difficult problem.
❖ Delegate the validation task to the ETL itself.
36. Source code has been contributed to Apache Airflow
❖ AIRFLOW-2228: Enhancements in Check operator
❖ AIRFLOW-2213: Adding Qubole Check Operator
37. In data we trust!
THANKS!
Any questions?
You can find us at:
sakshib@qubole.com
sreenathk@qubole.com