Multi Source Data Analysis Using Apache Spark and Tellius
This document discusses analyzing data from multiple sources using Apache Spark and the Tellius platform. It covers loading data from different sources like databases and files into Spark DataFrames, defining a data model by joining the sources, and performing analysis like calculating revenues by department across sources. It also discusses challenges like double counting values when directly querying the joined data. The Tellius platform addresses this by implementing a custom query layer on top of Spark SQL to enable accurate multi-source analysis.
Multi Source Data Analysis using Spark and Tellius
1. Multi Source Data Analysis Using Apache Spark and Tellius
https://github.com/phatak-dev/spark2.0-examples
2. ● Madhukara Phatak
● Director of Engineering, Tellius
● Works on Hadoop, Spark, ML and Scala
● www.madhukaraphatak.com
3. Agenda
● Multi Source Data
● Challenges with Multi Source
● Traditional and Data Lake Approach
● Spark Approach
● Data Source and Data Frame API
● Tellius Platform
● Multi Source analysis in Tellius
5. Multi Source Data
● In the era of cloud computing and big data, data for analysis can come from various sources
● In every organization, it has become very common to have multiple storage systems holding a wide variety of data
● The nature of the data varies from source to source
● Data can be structured, semi-structured or fully unstructured
6. Multi Source Example in Ecommerce
● Relational databases are used to hold product details and customer transactions
● Big data warehousing tools like Hadoop/Hive/Impala are used to store historical transactions and ratings for analytics
● Google Analytics stores the website analytics data
● Log data lives in S3 / Azure Blob
● Every storage system is optimized to store a specific type of data
8. Need of Multi Source Analysis
● If the analysis of the data is restricted to only one source, we may lose sight of interesting patterns in our business
● A complete, 360 degree view of the business is not possible unless we consider all the data available to us
● Advanced analytics like ML or AI is more useful when there is more variety in the data
9. Traditional Approach
● The traditional way of doing multi source analysis required all data to be moved to a single data source
● This approach made sense when the number of sources was small and the data was well structured
● With an increasing number of sources, the time spent on ETL grows
● Normalizing the data to a common schema becomes challenging for semi-structured sources
● Traditional databases also cannot hold data at this volume
10. Data Lake Approach
● Move the data from different sources into a big data enabled repository
● This solves the problem of volume, but challenges remain
● The rich schema information in the sources may not translate well to the data lake repository
● ETL time is still significant
● We cannot use the underlying sources' processing capabilities
● Not good for exploratory analysis
12. Requirements
● Ability to load the data uniformly from different sources irrespective of their type
● Ability to represent the data in a single format irrespective of its source
● Ability to combine the data from the sources naturally
● Ability to query the data across the sources naturally
● Ability to use the underlying source's processing whenever possible
13. Apache Spark Approach
● The Data Source API of Spark SQL allows users to load data uniformly from a wide variety of sources
● The DataFrame / Dataset API of Spark allows users to represent data from all sources uniformly
● Spark SQL has the ability to join data from different sources
● Spark SQL pushes down filters and prunes columns if the underlying source supports it
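A minimal sketch of that last point, assuming a reachable MySQL instance and a hypothetical customers table (the url, table and credentials are placeholders): explain() shows the selected columns and the filter handed down to the JDBC source.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("PushdownSketch")
  .master("local[*]")
  .getOrCreate()

// Hypothetical MySQL table; connection details are placeholders.
val customers = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/retail")
  .option("dbtable", "customers")
  .option("user", "user")
  .option("password", "password")
  .load()

// The pruned columns and the filter appear in the physical plan
// (ReadSchema / PushedFilters), i.e. they are evaluated by MySQL.
customers.select("customerid", "city")
  .filter(customers("city") === "Bangalore")
  .explain()
```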
15. Customer 360
● Four different datasets from two different sources
● We will be using flat file and MySQL data sources
● Demographics - Customer information like age, gender, location etc. (MySQL)
● Transactions - Cost of product, purchase date, store id, store type, brands, retail department, retail cost (MySQL)
● Credit Information - Reward member, redemption method
● Marketing Information - Ad source, promotional code
16. Loading Data
● We are going to use the csv and jdbc connectors for Spark to load the data
● Thanks to automatic schema inference, we get all the needed schema in the data frames
● After that we preview the data using the show method
● Ex : MultiSourceLoad
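A minimal sketch of this step (the deck's actual MultiSourceLoad example lives in the linked repository; file paths, table names and credentials here are assumptions):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("MultiSourceLoadSketch")
  .master("local[*]")
  .getOrCreate()

// CSV source with automatic schema inference (assumed path and header row).
val marketing = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("data/marketing.csv")

// MySQL source over JDBC; table and connection details are placeholders.
val demographics = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/retail")
  .option("dbtable", "demographics")
  .option("user", "user")
  .option("password", "password")
  .load()

// Preview the loaded data and the inferred schemas.
marketing.show(5)
demographics.printSchema()
```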
17. Multi Source Data Model
● We can define a data model using Spark's join
● Here we will be joining the 4 datasets on customerid as the common key
● After an inner join, we get a data model which combines all the sources
● Ex : MultiSourceDataModel
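A minimal sketch of the data model, assuming the four DataFrames (demographics, transactions, credit, marketing) were loaded as above and all carry a customerid column:

```scala
// Inner join (Spark's default) on the shared key; passing the key as a
// Seq avoids duplicating the customerid column in the result.
val dataModel = demographics
  .join(transactions, Seq("customerid"))
  .join(credit, Seq("customerid"))
  .join(marketing, Seq("customerid"))

dataModel.printSchema()
```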
18. Multi Source Analysis
● Show us the sales by different sources
● Average cost and sum of revenue by city and department
● Revenue by campaign
● Ex : MultiSourceDataAnalysis
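A minimal sketch of the second analysis on the joined data model (column names such as city, department, cost and revenue are assumptions):

```scala
import org.apache.spark.sql.functions.{avg, sum}

// Average cost and total revenue per city and department,
// computed across columns that originate from different sources.
dataModel.groupBy("city", "department")
  .agg(
    avg("cost").alias("avg_cost"),
    sum("revenue").alias("sum_revenue"))
  .show()
```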
20. About Tellius
Search and AI-powered analytics platform, enabling anyone to get answers from their business data using an intuitive search-driven interface and automatically uncover hidden insights with machine learning
22. So much business data, but very few insights
● Low analytics adoption: takes days/weeks to get answers to ad-hoc questions
● Analysis process not scalable: time consuming manual process of analyzing millions of combinations and charts
● Trust with AI for business outcomes: no easy way for business users and analysts to understand, trust and leverage ML/AI techniques
23. Tellius is disrupting data analytics with AI
Combining a modern search driven user experience with AI-driven automation to find hidden answers
24. Tellius Modern Analytics Experience
● Get instant answers and start exploring: reduce your analysis time from hours to minutes
● Explainable AI for business analysts
● From time consuming, canned reports and dashboards to an on-demand, personalized experience
● Self-service data prep
● Scalable in-memory data platform
● Search-driven conversational analytics
● Automated discovery of insights
● Automated machine learning
25. Only AI platform that enables collaboration between roles
● Data Management: visual data prep with SQL/Python support
● Visual Analysis: voice enabled search driven interface for asking questions
● Discovery of Insights: augmented discovery of insights with natural language narrative
● Machine Learning: AutoML and deployment of ML models with explainable AI
● For every role: business user, data analyst, data engineer, data science practitioner
26. Why Tellius?
● Intuitive UX: Google-like search driven conversational interface
● AI-Driven Automation: reveals hidden relevant insights, saving 1000's of hours
● Unified Analytics Experience: eliminates friction between self service data prep, ad-hoc analysis and explainable ML models
● Scalable Architecture: in-memory architecture capable of handling billions of records
The only company providing an instant natural language search experience, surfacing AI-driven relevant insights across billions of records across data sources at scale, and enabling users to easily create and explain ML/AI models
27. Business Value Proposition
● Ease of Use: get instant answers with a conversational, search driven approach
● Uncover Hidden Insights: automate discovery of relevant hidden insights in your data
● Save Time: augment the manual discovery process with automation powered by machine learning
30. Loading Data
● Tellius exposes various kinds of data sources to connect to, built on the Spark data source API
● In this use case, we will use the MySQL and csv connectors to load the data into the system
● Tellius collects metadata about the data as part of loading
● Some of the connectors, like Salesforce and Google Analytics, are homegrown using the same data source API
31. Defining Data Model
● Tellius calls data models business views
● A business view allows the user to create a data model across datasets seamlessly
● Internally, all datasets in Tellius are represented as Spark DataFrames
● Defining a business view in Tellius is like defining a join in Spark SQL
32. Multi Source Analysis using NLP
● Which top 6 sources by avg revenue
● Hey Tellius, what's my revenue broken down by department
● show revenue by city
● show revenue by department for InstagramAds
● These ultimately run as Spark queries and produce the results
● We can also use voice
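Tellius's query translation layer is internal, but as a hedged illustration, a question like "show revenue by department for InstagramAds" might ultimately execute as a Spark SQL query of roughly this shape (the view, column names and filter value are assumptions):

```scala
// Register the joined data model so it can be queried with SQL.
dataModel.createOrReplaceTempView("business_view")

spark.sql("""
  SELECT department, SUM(revenue) AS revenue
  FROM business_view
  WHERE ad_source = 'InstagramAds'
  GROUP BY department
""").show()
```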
33. Multi Source Analysis using Assistant
● Show total revenue
● By city
● What about cost
● for InstagramAds
● Use Voice
● Try out Google Home
35. Spark Data Model
● A Spark join creates a flat data model, which is different from a typical data warehouse data model
● This flat data model works well when there is no duplication of primary keys, i.e. a star model
● But if there is duplication, we end up double counting values when we run queries directly
● Example : DoubleCounting
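A minimal sketch of the problem with toy data (all values are assumptions): a one-to-many join fans out a customer's single revenue row, so a naive sum counts it twice.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

val spark = SparkSession.builder()
  .appName("DoubleCountingSketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// One revenue row for customer c1...
val revenue = Seq(("c1", 100.0)).toDF("customerid", "revenue")
// ...but two marketing touch points for the same customer.
val marketing = Seq(("c1", "InstagramAds"), ("c1", "Email"))
  .toDF("customerid", "ad_source")

// The join duplicates the revenue row, so the sum reports 200.0
// instead of the true 100.0.
revenue.join(marketing, Seq("customerid"))
  .agg(sum("revenue"))
  .show()
```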
36. Handling Double Counting in Tellius
● Tellius has implemented its own query language on top of the Spark SQL layer, applying data warehouse like strategies to avoid this double counting
● This layer allows Tellius to provide multi source analysis on top of Spark with the accuracy of a data warehouse system
● Ex : show point_redemeption_method
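Tellius's query layer itself is proprietary, but one standard warehouse-style remedy for this fan trap is to aggregate each measure at its own grain before joining. A minimal sketch, reusing the toy DataFrames above:

```scala
import org.apache.spark.sql.functions.{countDistinct, sum}

// Aggregate revenue at customer grain before the join,
// so the marketing fan-out can no longer inflate it.
val revenueByCustomer = revenue.groupBy("customerid")
  .agg(sum("revenue").alias("revenue"))

val campaignsByCustomer = marketing.groupBy("customerid")
  .agg(countDistinct("ad_source").alias("campaigns"))

// Revenue stays 100.0 per customer after the join.
revenueByCustomer.join(campaignsByCustomer, Seq("customerid")).show()
```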
37. References
● Dataset API - https://www.youtube.com/watch?v=hHFuKeeQujc
● Structured Data Analysis - https://www.youtube.com/watch?v=0jd3EWmKQfo
● Anatomy of Spark SQL - https://www.youtube.com/watch?v=TCWOJ6EJprY