Hadoop meets Mature BI:
Where the rubber meets the road for
Data Scientists
Michael Hiskey
Futurist & Product Evangelist
VP, Marketing & Business Development
Kognitio
The Data Scientist
Sexiest job of the 21st Century?
Key Concept: Graduation
Projects will need to Graduate from the Data Science Lab and become part of Business as Usual.
Demand for the Data Scientist
Organizational appetite for tens, not hundreds
Don’t be a Railroad Stoker!
Highly skilled engineering was required… but the world innovated around them.
Business Intelligence
- Numbers, tables, charts, indicators
- Time: history, lag
- Access: to view (portal), to data, to depth, control/secure
- Consumption: digestion …with ease and simplicity
Straddle IT and Business: faster, lower latency, more granularity, richer data model, self-service
What has changed?
More connected-users?
More-connected users?
According to one estimate, mankind created 150 exabytes (billion gigabytes) of data in 2005. In 2010 this was 1,200 exabytes.
Data flow
Data Variety
Respondents were asked to choose up to two descriptions about how their organizations view big data from the choices above. Choices have been abbreviated, and selections have been normalized to equal 100%. n=1144
Source: IBM Institute for Business Value/Said Business School Survey
What?
New value comes from your existing data
Hadoop ticks many but not all the boxes
• No need to pre-process
• No need to align to schema
• No need to triage
Null storage concerns
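What "no need to align to schema" means in practice is schema-on-read. As a minimal sketch (assuming Hive on the Hadoop cluster; the table name and path are hypothetical), an external table simply overlays column definitions on raw files at query time – nothing is transformed, triaged or rejected on load:

create external table raw_clickstream (
  event_time string,
  user_id    string,
  url        string
)
row format delimited fields terminated by '\t'
location '/data/raw/clickstream';  -- raw files stay exactly as they landed

Contrast this with a warehouse load, where every row must pass ETL and fit the model before it can be stored at all.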
The drive for deeper understanding
[Chart: analytical complexity vs. technology/automation, plotting the climb from Reporting & BPM and Campaign Management through Fraud detection, Clustering, Behaviour modelling and Statistical Analysis up to Dynamic Simulation, Dynamic Interaction and Machine learning algorithms]
Hadoop: just too slow for interactive BI!
…loss of train-of-thought
"while hadoop shines as a processing platform, it is painfully slow as…"
Analytics needs low latency, no I/O wait
High-speed in-memory processing
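The pattern, sketched below in Kognitio-flavoured SQL, is to pin the working set into RAM once and let every subsequent query scan memory instead of disk. (The view-image statement reflects Kognitio's memory-image mechanism, but its exact form here is an assumption, and sales_fact and the date filter are illustrative.)

create view recent_sales as
select * from sales_fact
where period >= date '2013-01-01';

create view image recent_sales;  -- assumed DDL: build the RAM-resident image once

select dept, sum(sales)          -- interactive queries now run with no I/O wait
from recent_sales
group by dept;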
Analytical Platform: Reference Architecture
Application & Client Layer: all BI tools, all OLAP clients, Excel, reporting
Analytical Platform Layer (with optional near-line storage)
Persistence Layer: Hadoop clusters, enterprise data warehouses, legacy systems, cloud storage, …
The Future
Big Data, Advanced Analytics, In-memory, Logical Data Warehouse, Predictive Analytics, Data Scientists
connect
www.kognitio.com
twitter.com/kognitio linkedin.com/companies/kognitio
tinyurl.com/kognitio youtube.com/kognitio
NA: +1 855 KOGNITIO
EMEA: +44 1344 300 770
Hadoop meets Mature BI:
Where the rubber meets the road for
Data Scientists
• The key challenge for Data Scientists is not the proliferation of their roles, but the ability to "graduate" key Big Data projects from the Data Science Lab and productionize them across their broader organizations.
• Over the next 18 months, "Big Data" will become just "Data"; this means everyone (even business users) will need a way to use it – without reinventing the way they interact with their current reporting and analysis.
• Doing this requires interactive analysis with existing tools and massively parallel code execution, tightly integrated with Hadoop.
• Your Data Warehouse is dying; Hadoop will drive a material shift away from price-per-TB in persistent data storage.
The new bounty hunters: Drill, Impala, Pivotal, Stinger
The NoSQL Posse
Wanted, Dead or Alive: SQL
It’s all about getting work done
It used to be a simple fetch of a value
Tasks are evolving:
Then it was calculating dynamic aggregates
Now, complex algorithms!
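In SQL terms the progression looks roughly like this (the account table and its columns are illustrative):

-- Then: a simple fetch of a single stored value
select balance from account where account_id = 12345;

-- Later: aggregates calculated dynamically at query time
select region, sum(balance) from account group by region;

-- Now: complex algorithms pushed into the platform,
-- as in the external R script that follows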
create external script LM_PRODUCT_FORECAST environment rsint
receives ( SALEDATE DATE, DOW INTEGER, ROW_ID INTEGER, PRODNO INTEGER, DAILYSALES INTEGER ) -- trailing type cut off on the slide; INTEGER assumed
partition by PRODNO order by PRODNO, ROW_ID
sends ( R_OUTPUT varchar )
isolate partitions
script S'endofr( # Simple R script to run a linear fit on daily sales
# read one partition (one product) from stdin; row.names=1 assumed so SALEDATE becomes the row names
prod1<-read.csv(file=file("stdin"), header=FALSE, row.names=1)
colnames(prod1)<-c("DOW","ID","PRODNO","DAILYSALES")
dim1<-dim(prod1)
# mean sales per day-of-week, normalised into weekday weighting factors (FUN cut off; mean assumed)
daily1<-aggregate(prod1$DAILYSALES, list(DOW = prod1$DOW), mean)
daily1[,2]<-daily1[,2]/sum(daily1[,2])
# strip the day-of-week effect to leave deseasonalised base sales
basesales<-array(0,c(dim1[1],2))
basesales[,1]<-prod1$ID
basesales[,2]<-(prod1$DAILYSALES/daily1[prod1$DOW+1,2])
colnames(basesales)<-c("ID","BASESALES")
# fit a linear trend to base sales over time
fit1=lm(BASESALES ~ ID,as.data.frame(basesales))
# the remainder was cut off on the slide; a script like this would typically
# write its fitted trend back to stdout, e.g. write.csv(coef(fit1), stdout())
)endofr';
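Once defined, a script like this is driven from ordinary SQL. In Kognitio the usual pattern is an external script reference in the from clause; the invocation below is a sketch of that pattern, with daily_product_sales as a hypothetical feeding table:

select R_OUTPUT
from (external script LM_PRODUCT_FORECAST
      from (select SALEDATE, DOW, ROW_ID, PRODNO, DAILYSALES
            from daily_product_sales));

Each PRODNO partition is shipped to its own R process and run in parallel – this is the "massively parallel code execution" the summary slide calls for.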
select Trans_Year, Num_Trans,
       count(distinct Account_ID) Num_Accts,
       sum(count(distinct Account_ID)) over (partition by Trans_Year) Year_Accts, -- closing paren and alias cut off; assumed
       cast(sum(total_spend)/1000 as int) Total_Spend,
       cast(sum(total_spend)/1000 as int) / count(distinct Account_ID) Spend_Per_Acct,
       rank() over (partition by Trans_Year order by count(distinct Account_ID) desc) Num_Accts_Rank,
       rank() over (partition by Trans_Year order by sum(total_spend) desc) Spend_Rank
from ( select Account_ID,
              Extract(Year from Effective_Date) Trans_Year,
              count(Transaction_ID) Num_Trans,
              sum(Amount) total_spend  -- the slide cut off here; this line onward is an assumed reconstruction
       from transactions               -- hypothetical source table
       group by Account_ID, Extract(Year from Effective_Date)
     ) t
group by Trans_Year, Num_Trans;
select dept, sum(sales)
from sales_fact
where period between date '2006-05-01' and date '2006-05-31'
group by dept
having sum(sales) > 50000;

select sum(sales)
from sales_history
where year = 2006 and month = 5 and region = 1;

select total_sales
from summary
where year = 2006 and month = 5 and region = 1;
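The three statements above answer the same question with progressively less work per query: scanning the fact table, scanning a pre-filtered history, or fetching a single pre-computed value. The last form only works because someone maintains the summary, along these (illustrative) lines:

create table summary as
select year, month, region, sum(sales) as total_sales
from sales_history
group by year, month, region;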
Behind the
numbers
For once, technology is on our side
For the first time we have the full triumvirate of:
– Excellent computing power
– Unlimited storage
– Fast networks
…now that RAM is cheap!
Hadoop is… inherently disk-oriented
Typically a low ratio of CPU to disk: lots of disks, not so many CPUs
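A rough, illustrative calculation (numbers assumed, not from the deck) shows why that ratio hurts: a node with 12 disks streaming ~100 MB/s each delivers ~1.2 GB/s, so a full scan of 10 TB takes roughly 8,300 seconds – well over two hours – however clever the query. The same 10 TB held in RAM with ~100 GB/s of aggregate bandwidth scans in on the order of 100 seconds.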

Editor's Notes

  • #2 Language – one word changes and the whole meaning shifts; one word like Hadoop is creating seismic shifts in the world of data and its use. Core themes: History – a cycle of discover-construct-discover-construct. Humans building the EDW has had its time, but it will represent learning and a museum for the future. Kognitio is the new bridge between business need and the information stores of the EDW and Hadoop.
  • #3 http://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century/ – the "data scientist": a high-ranking professional with the training and curiosity to make discoveries in the world of big data. The title has been around for only a few years. (It was coined in 2008 by one of us, D.J. Patil, and Jeff Hammerbacher, then the respective leads of data and analytics efforts at LinkedIn and Facebook.) But thousands of data scientists are already working at both start-ups and well-established companies. Their sudden appearance on the business scene reflects the fact that companies are now wrestling with information that comes in varieties and volumes never encountered before. If your organization stores multiple petabytes of data, if the information most critical to your business resides in forms other than rows and columns of numbers, or if answering your biggest question would involve a "mashup" of several analytical efforts, you've got a big data opportunity. http://www.guardian.co.uk/news/datablog/2012/mar/02/data-scientist#zoomed-picture
  • #5 Same note as #3.
  • #6 Same note as #3.
  • #7 Very crudely
  • #8 Can play the hyphen game: more connected-users vs. more-connected users. Information consumers – your users, or you the user? Creates more pressure on the infrastructure – more queries.
  • #9 Mobile access is coming along; the application space is broadening (BYOD). It can supply access to BI, but it also furiously generates data for BI. Access to dynamic information – but every access generates data and possible inferences. Self-service access.
  • #10 IBM Institute for Business Value/Said Business School Survey. Defining big data: Much of the confusion about big data begins with the definition itself. To understand our study respondents' definition of the term, we asked each to select up to two characteristics of big data. Rather than any single characteristic clearly dominating among the choices, respondents were divided in their views on whether big data is best described by today's greater volume of data, the new types of data and analysis, or the emerging requirements for more real-time information analysis (see Figure 1).
  • #11 EXPERIMENTING? So is this you, not quite sure? One CTO told me he invented a Hadoop project just to keep developers happy – it's important for their own techie development that they have the experience! That may sound DAFT… but think of Hadoop in 2013 like the Internet in 1996: if you are not far down that road, you're behind and need to catch up. The community is growing and building; they will address many of the limitations we see today… SOMEDAY.
  • #14 Hadoop is not a "universal solution"! Way too much hype and hyperbole – great for innovators and start-ups, not so good for plain old business.
  • #15 The DW demanded ETL to map data into the model and ensure logical consistency – an upfront prerequisite. Hadoop is making people lazy – it cuts out thought but leaves future decisions wide open: no lock-in, and it cuts the risk of bad decisions. It simplifies decisions about what to keep – keep it all. BUT hey, BI needs structure and discipline!
  • #16 Bottlenecks are caused by platforms and tools unable to cope with the demands of complexity, disparity and volume. Complex analytics: machine learning – fraud detection/gaming; web analytics – dynamic content/bid management; modelling – traditional clustering/behavioural for marketing, product development and resource optimisation; investigative reporting (dashboards and reports with granular data access); data model.
  • #17 Hadoop – the false dawn. There, I have dared to say it! It does not accelerate BI in quite the same way as the EDW. Users have had a decade of being sold train-of-thought analysis – icubes and Visual Insight. Hadoop: not hands-on, not desktop, not agile.
  • #18 Lots of access to data – iterations. Analytics is about work done, and more work needs to be done – so don't hold CPUs back! In-memory is not cache! Memory is underplayed in Hadoop – it's cheap, use it! Processors and RAM are the true measure of the work that can be done – disks just fetch. Keep data in memory! Don't swap, don't wait on disk, don't pick through indexes and then data; just access what is needed. The economics of RAM have changed: much lower cost, large volumes readily available.
  • #23 Ah yes, plugging into Hadoop – so much for the NoSQL revolution. Universal integration is needed to protect the BI investment. They lost the gunfight: like all revolutions, the upstarts died down and got absorbed (subsumed). Business and BI investment demand SQL! First Hive; now we have Drill, Impala, Pivotal, Stinger. A tough game – yes, it's SQL access, but not low latency.
  • #24 What the business cares about is getting work done. They really don't care about how or where it is stored! It's not about raw individual speed, it's about throughput. Address the bottlenecks – too many vendors play games that just shift the bottleneck.
  • #25 BI mostly focuses (sells) on presentation – graphics, pictures, visualisation. BUT behind the scenes a lot of heavy lifting has to be done. This workload has changed over time from the simple to the complex.
  • #26 No need for single platforms like the traditional DW that both store and analyse. This is why data science rises. We did not get this in the rise of data mining in the 90s. We'll come onto RAM shortly.
  • #27 BATCH: Hadoop is disk-centric – storage – just like the EDW. More parallelism, yes, lots more, but still batch, disk-I/O-centric. Its schedulers were not designed for rapid response. It is essentially a batch queue – and BI applications and business users have evolved significantly beyond batch reporting.