James Serra
Big Data Evangelist
Microsoft
JamesSerra3@gmail.com
About Me
 Microsoft, Big Data Evangelist
 In IT for 30 years, worked on many BI and DW projects
 Worked as desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, PDW/APS developer
 Been perm employee, contractor, consultant, business owner
 Presenter at PASS Business Analytics Conference, PASS Summit, Enterprise Data World conference
 Certifications: MCSE: Data Platform, Business Intelligence; MS: Architecting Microsoft Azure Solutions, Design and Implement Big Data Analytics Solutions, Design and Implement Cloud Data Platform Solutions
 Blog at JamesSerra.com
 Former SQL Server MVP
 Author of book “Reporting with Microsoft SQL Server 2012”
Why Deploy To the Cloud?
Microsoft’s Solution
How Do I Get Started?
What if you could handle big data?
Introducing Apache Hadoop
Data volume
Data variety
Data velocity
Hadoop is a platform with portfolio of projects
A Hadoop distribution is a package of projects
Business applications of Hadoop
New analytic applications from new data
Main differences vs RDBMS/NoSQL
Pros
• Not a type of database, but rather an open-source software ecosystem that allows for massively parallel computing
• No inherent structure (no conversion to relational or JSON needed)
• Good for batch processing, large files, high-volume writes, parallel scans, sequential access
• Great for large, distributed data processing tasks where time isn’t a constraint (e.g., end-of-day reports, scanning months of historical data)
• Tradeoff: to make deep connections between many data points, the technology sacrifices speed
• Some NoSQL databases, such as HBase, are built on top of HDFS
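The batch, massively parallel style described above is easiest to see in a MapReduce word count. The sketch below simulates the map, shuffle, and reduce phases in plain Python on a single machine; on a real cluster the same mapper and reducer would run in parallel across data nodes (e.g., via Hadoop Streaming). It is a conceptual illustration, not a Hadoop API.

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in an input split
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    # Reduce phase: sum all counts shuffled to this key
    return word, sum(counts)

def map_reduce(lines):
    # Shuffle phase: group intermediate pairs by key
    groups = defaultdict(list)
    for line in lines:
        for word, count in mapper(line):
            groups[word].append(count)
    # Each key's group can be reduced independently, hence in parallel
    return dict(reducer(w, c) for w, c in groups.items())

if __name__ == "__main__":
    data = ["big data big compute", "big files batch jobs"]
    print(map_reduce(data))  # e.g. {'big': 3, 'data': 1, ...}
```

Because every mapper call and every per-key reduce is independent, the framework can scan huge files sequentially and in parallel, which is exactly the batch workload Hadoop is good at.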
Main differences vs RDBMS/NoSQL
Cons
• File system, not a database
• Not good for millions of users, random access, or fast individual record lookups and updates (OLTP)
• Not so great for real-time analytics
• Lacks indexing, a metadata layer, a query optimizer, and memory management
• Same cons as non-relational stores: no ACID support, weaker data integrity, limited indexing, weak SQL, etc.
• Security limitations
What Is Hadoop?
Microsoft’s Solution
How Do I Get Started?
Challenges with implementing Hadoop
Why Cloud + Big Data?
Speed, scale, and economics:
• Always up, always on
• Open and flexible
• Time to value
• Data of all volume, variety, velocity
• Massive compute and storage
• Deployment expertise
Why Hadoop in the Cloud?
Scenarios For Deploying Hadoop As Hybrid
What Is Hadoop?
Why Deploy To the Cloud?
Microsoft’s Solution
How Do I Get Started?
Introducing Azure HDInsight
Microsoft contributions to Hadoop
Microsoft + Hortonworks
HDInsight Built for Windows or Linux
HDInsight Supports Hive
HDInsight Supports HBase
(Diagram: HDFS layer with a Name Node and Job Tracker over Data Nodes and Task Trackers; HBase layer with an HMaster and coordination service over Region Servers)
HDInsight Supports Mahout
HDInsight Supports Storm
(Diagram: stream processing — events ingested via RabbitMQ/ActiveMQ feed search and query, data analytics in Excel, web/thick-client dashboards, and devices that take action)
Spark for Azure HDInsight
In Memory Processing on Multiple Workloads
• Single execution model for multiple tasks
• Processing up to 100x faster performance
• Developer friendly (Java, Python, Scala)
• BI tool of choice (Power BI, Tableau, Qlik, SAP)
• Notebook experience (Jupyter/iPython, Zeppelin)
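Spark's "single execution model" means transformations are declared lazily and only run, in memory, when a result is requested. The toy class below mimics that idea in plain Python; it is a conceptual sketch of lazy chained transformations, not the actual PySpark API.

```python
class ToyRDD:
    """A toy, single-machine stand-in for Spark's lazy, chained transformations."""

    def __init__(self, data, ops=None):
        self._data = data
        self._ops = ops or []  # transformations are only recorded, not executed

    def map(self, fn):
        return ToyRDD(self._data, self._ops + [("map", fn)])

    def filter(self, pred):
        return ToyRDD(self._data, self._ops + [("filter", pred)])

    def collect(self):
        # Nothing runs until an action like collect() is called;
        # Spark uses this to plan one optimized job for the whole chain.
        out = self._data
        for kind, fn in self._ops:
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out

if __name__ == "__main__":
    pipeline = ToyRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
    print(pipeline.collect())  # [0, 4, 16, 36, 64]
```

In real Spark the intermediate results can also be cached in memory and reused across jobs, which is where the "up to 100x faster" claim for iterative workloads comes from.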
HDInsight Allows You To Add Hadoop Projects
Easy for Developers
Easy for Data Scientists
Easy for Business Analysts
R Server for HDInsight
• Familiarity of R (most popular language for data scientists)
• Scalability of Hadoop and Spark
• Up to 7x faster using the Spark engine
• Train and run ML models on datasets of any size
• Cloud-managed solution (easy setup, elastic, SLA)
Introducing Azure HDInsight
Hyperscale infrastructure is the enabler
32 Regions Worldwide, 24 Generally Available…
 100+ datacenters
 Top 3 networks in the world
 2.5x AWS, 7x Google DC regions
 G Series – largest VM in the world: 32 cores, 448 GB RAM, SSD…
(Map: Azure regions worldwide; the original slide color-coded each region as Operational or Announced/Not Operational)
Central US (Iowa), West US (California), East US (Virginia), East US 2 (Virginia), US Gov Virginia, US Gov Iowa, North Central US (Illinois), South Central US (Texas), Brazil South (Sao Paulo State), North Europe (Ireland), West Europe (Netherlands), China North* (Beijing), China South* (Shanghai), Japan East (Tokyo, Saitama), Japan West (Osaka), East Asia (Hong Kong), SE Asia (Singapore), Australia South East (Victoria), Australia East (New South Wales), India South (Chennai), India Central (Pune), India West (Mumbai), Canada East (Quebec City), Canada Central (Toronto), Germany North East** (Magdeburg), Germany Central** (Frankfurt), United Kingdom regions, Korea (2, Seoul), US DoD East (TBD), US DoD West (TBD)
* Operated by 21Vianet  ** Data stewardship by Deutsche Telekom
Why Microsoft Azure?
Azure Storage
Azure Blob Storage
Azure Data Lake Store
No hardware challenges
Deployed in minutes
Mission Critical, Enterprise Ready
Maintenance done for you
Low Cost
$£€¥
*IDC study “The Business Value and TCO Advantage of Apache Hadoop in the Cloud with Microsoft Azure HDInsight”
Introducing Azure HDInsight
Bringing Hadoop to a billion people
Making advanced analytics accessible to Hadoop
HDInsight vs HDP on Azure VM
• Service model: HDInsight is PaaS (setup, scale, manage, patch, etc.); HDP on Azure VM is IaaS
• Management: HDInsight is managed by Microsoft; HDP on a VM is managed by the customer
• Storage: HDInsight keeps storage separate (Blob or ADLS); HDP on a VM stores data on local disk, but can also use Azure Blob or ADLS
• Data lifecycle: deleting an HDInsight cluster keeps the data; deleting an HDP VM deletes the data (unless it is external)
• Versioning: HDInsight can be up to 30 days behind the latest HDP version; a VM can run the latest HDP version
• Projects: HDInsight supports a limited set of Hadoop projects; a VM supports unlimited Hadoop projects
• Support: for HDInsight, Microsoft supports both the VM and Hadoop; for HDP on a VM, Microsoft supports the VM and Hortonworks supports Hadoop
• On-premises: HDInsight has no on-prem version; HDP has an on-prem version
Azure Data Lake Analytics
• Distributed, parallel analytics framework
• U-SQL (based on C# and SQL)
• Dial for scale
• Hides infrastructure complexity
• Visual Studio integration
• Instant scale on demand
• Reduced learning curve
Azure Services for big data analytics
(Diagram: clickstream, sensor, video, social, web, device, relational, and application data flow into the Store (ADLS/HDFS); the Analytics Service and HDInsight run U-SQL and YARN workloads on top, with partner solutions alongside)
What Is Hadoop?
Why Deploy To the Cloud?
Microsoft’s Solution
Get Started
http://azure.microsoft.com/en-us/documentation/services/hdinsight/
http://azure.microsoft.com/en-us/documentation/articles/hdinsight-learn-map/
http://www.microsoftvirtualacademy.com/training-courses/getting-started-with-microsoft-big-data
http://channel9.msdn.com/Shows/Data-Exposed
http://azure.microsoft.com/en-us/pricing/free-trial/
Azure getting started
• Free Azure account, $200 in credit, https://azure.microsoft.com/en-us/free/
• Startups: BizSpark, $750/month free Azure; BizSpark Plus, $120k/year free Azure, https://www.microsoft.com/bizspark/
• MSDN subscription, Data Platform MVP, $150/month free Azure, https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits/
• Microsoft Educator Grant Program: faculty, $250/month free Azure for a year; students, $100/month free Azure for 6 months, https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits/
• Microsoft Azure for Research Grant, http://research.microsoft.com/en-us/projects/azure/default.aspx
• DreamSpark for students, https://www.dreamspark.com/Student/Default.aspx
• DreamSpark for academic institutions: https://www.dreamspark.com/Institution/Subscription.aspx
• Various Microsoft funds so you can learn the technologies or build a client solution
Pricing for HDInsight
CAPABILITIES (Standard vs. Premium Preview tiers)
Big Data Workloads:
• Standard Hadoop and open source projects (Core Hadoop & YARN, Hive & HCatalog, Tez, Pig, Sqoop, Oozie, Zookeeper, Phoenix)
• Columnar NoSQL (HBase)
• Stream processing (Storm)
• Interactive processing, real-time stream processing & ML (Spark)
• Big Data statistics, predictive modeling, and machine learning with R Server
Enterprise Readiness:
• Administration – manage, monitor & troubleshoot clusters
• Hadoop version upgrades and patching – automatic patching and upgrades
• Encryption of data at rest
Price: Standard – standard price per node; Premium Preview – HDInsight Standard price + $0.02/core-hour for each core used in the cluster during preview (75% discount)
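The Premium Preview surcharge scales with core-hours. A quick sketch of the arithmetic, using a hypothetical cluster size (the real Standard per-node price varies by node type and region, so only the $0.02/core-hour surcharge from the slide is modeled here):

```python
def premium_surcharge(cores, hours, rate_per_core_hour=0.02):
    """Premium preview surcharge: $0.02 per core-hour on top of the
    Standard price, rounded to cents."""
    return round(cores * hours * rate_per_core_hour, 2)

# Hypothetical example: a 16-node cluster of 8-core nodes running for 24 hours
cores = 16 * 8
print(premium_surcharge(cores, 24))  # 61.44
```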
Resources
 What is HDInsight? http://bit.ly/1WpS0at
 Hadoop and Microsoft http://bit.ly/20Cg2hA
 Introduction to Hadoop http://bit.ly/1WpTstq
Q & A ?
James Serra, Big Data Evangelist
Email me at: JamesSerra3@gmail.com
Follow me at: @JamesSerra
Link to me at: www.linkedin.com/in/JamesSerra
Visit my blog at: JamesSerra.com (where this slide deck is posted via the
“Presentations” link on the top menu)

Introduction to Microsoft’s Hadoop solution (HDInsight)


Editor's Notes

  • #2 Introduction to Microsoft’s Hadoop solution (HDInsight) Did you know Microsoft provides a Hadoop Platform-as-a-Service (PaaS)? It’s called Azure HDInsight and it deploys and provisions managed Apache Hadoop clusters in the cloud, providing a software framework designed to process, analyze, and report on big data with high reliability and availability. HDInsight uses the Hortonworks Data Platform (HDP) Hadoop distribution that includes many Hadoop components such as HBase, Spark, Storm, Pig, Hive, and Mahout. Join me in this presentation as I talk about what Hadoop is, why deploy to the cloud, and Microsoft’s solution.
  • #3 Fluff, but point is I bring real work experience to the session
  • #5  http://gigaom.com/2013/03/04/the-history-of-hadoop-from-4-nodes-to-the-future-of-data/ Key goal of slide: To convey what every IT person knows: the data warehouse and what it’s for. Then we set up the Gartner quote to say that there is a tipping point. End the slide with a question: Why is it at a tipping point?   Slide talk track: What is the “traditional” data warehouse? IT professionals know this well. A data warehouse or an enterprise data warehouse is a database that was designed specifically for data analysis. It is the single source of truth or the central repository for all data in the company. This means disparate data in the company coming from your transactional systems, your ERP, CRM or Line of Business applications would all be extracted, transformed, and cleansed and put into the warehouse. It was built so that the people who are accessing the warehouse using BI tools will be accessing data that has been provisioned by IT and represents accurate data sanctioned by the company. However, this traditional data warehouse is reaching an inflection point. Gartner, in their analysis of the state of data warehousing, noted that it is reaching the most significant tipping point since its inception. The question is why? What is going on?
  • #17 http://hadoop.apache.org/who.html
  • #20 http://www.jamesserra.com/archive/2014/05/hadoop-and-data-warehouses/ Hadoop Common – Contains libraries and utilities needed by other Hadoop modules Hadoop Distributed File System (HDFS) – A distributed file-system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster Hadoop MapReduce – A programming model for large scale data processing.  It is designed for batch processing.  Although the Hadoop framework is implemented in Java, MapReduce applications can be written in other programming languages (R, Python, C# etc).  But Java is the most popular Hadoop YARN – YARN is a resource manager introduced in Hadoop 2 that was created by separating the processing engine and resource management capabilities of MapReduce as it was implemented in Hadoop 1 (see Hadoop 1.0 vs Hadoop 2.0).  YARN is often called the operating system of Hadoop because it is responsible for managing and monitoring workloads, maintaining a multi-tenant environment, implementing security controls, and managing high availability features of Hadoop
  • #23 Hardware acquisition (Capex up front) Scale constrained to on-premise procurement (resource and capacity planning) Skilled Hadoop Expertise Tuning + Maintenance
  • #24 Gartner has increased its sizing and forecast for cloud compute services, reflecting greater interest among our client base than had been expected. We expect the 2013 market to be worth $8 billion (up from $6.8 billion forecast last year) and the 2014 market to be worth $10 billion.
  • #25  Why Hadoop in the cloud? You can deploy Hadoop in a traditional on-site datacenter. Some companies–including Microsoft–also offer Hadoop as a cloud-based service. One obvious question is: why use Hadoop in the cloud? Here's why a growing number of organizations are choosing this option. The cloud saves time and money. Open source doesn't mean free. Deploying Hadoop on-premises still requires servers and skilled Hadoop experts to set up, tune, and maintain them. A cloud service lets you spin up a Hadoop cluster in minutes without up-front costs. See how Virginia Tech is using Microsoft's cloud instead of spending millions of dollars to establish their own supercomputing center. The cloud is flexible and scales fast. In the Microsoft Azure cloud, you pay only for the compute and storage you use, when you use it. Spin up a Hadoop cluster, analyze your data, then shut it down to stop the meter. "We quickly spun up the Azure HDInsight cluster and processed six years' worth of data in just a few hours, and then we shut it down… processing the data in the cloud made it very affordable." –Paul Henderson, National Health Service (U.K.) The cloud makes you nimble. Create a Hadoop cluster in minutes–and add nodes on-demand. The cloud offers organizations immediate time to value. "It was simply so much faster to do this in the cloud with Windows Azure. We were able to implement the solution and start working with data in less than a week." –Morten Meldgaard, Chr. Hansen
  • #30 Joint engineering teams – joint people on each other’s teams. They report to Hortonworks and Hortonworks report to us. We share resources and have a joint engineering team. This is not just a re-licensing of a distro. We are one engineering team. Milestone cadence – we do a 12-month calendar year roadmap with each other. That roadmap is how each company lock to. They revisit it every 3 months. When Cloudera released Impala, Microsoft and Hortonworks changed their resources to start working on Stinger project. In CY13, this became a joint commitment to build Tez (vectorized query executor) and Stinger Joint planning and execution – Joint signoffs (Linux, Windows Azure) – Microsoft and Hortonworks signedoff on Linux, on Windows, and on Azure. Aligned offerings – HDP on Windows HDI Appliance, HDI Service Joint roadmap CY planning/ roadmap development Quarterly adjustments Hortonworks is making Hadoop an enterprise viable platform, by the end of 2015 more than half the worlds data will be processed by Hadoop <look at partner page on Hortonworks site> Engineering alignment Microsoft worked with Hortonworks to develop Hortonworks Data Platform for Windows HDInsight on Windows Azure and HDInsight on PDW is based on HDP Microsoft and Hortonworks are working on various Hadoop sub-projects together (Stinger, Tez) contributed over 6k engineering hours, 25k+ lines of code Corporate alignment Microsoft is one of Hortonworks strategic partners (Q for Audrey) Joint Marketing alignment (webinars, events strategy, Press, etc.) Joint Support alignment (Microsoft provides Lvl1 support and pays Hortownworks for Lvl2 and Lvl3) Field Alignment Hortonworks field reps will get quota compensation or relief if HDP for Windows OR if PDW, Azure is sold Microsoft field reps work with Hortonworks reps to get credibility from Hadoop story
  • #34 https://mahout.apache.org/users/basics/algorithms.html
  • #36 Slide 1 – it’s supposed to represent a zoom in of the right most box with certain new details. Slide 2 and 3 are the same slide. I received feedback from my original slide 2 below and I made slide 3.  They still didn’t like it.  Here is the feedback: Original feedback:   Interactive experience conceptual picture Make the top and bottom be proportional Drop the boxes and headings (OSS notebooks, 3rd party BI tools) Power BI should stand out but the current image is not adding new value, can we just use Power BI logo and also some additional image to have it stand out   Secondary feedback about slide 3   In the image that shows Power BI, we do still want to make Power BI pop above the others and have it featured, but Ranga was a little unsure about having the images of the devices included. Let’s get a version that shows it both ways, (with the devices, and making the Power BI section stand out in another way, ie size, but not including the devices,) and then let him decide which one he prefers.   Sldie 4 General cleanup
  • #45 http://www.zdnet.com/article/microsoft-claims-azure-now-used-by-half-of-the-fortune-500/
  • #46 Azure Blob (WASB, SAS keys, manually manage storage expansion) vs Azure data lake store (webHDFS, AD, auto shared) http://www.jamesserra.com/archive/2014/02/what-is-hdinsight/
  • #47 There does not have to be multiple ADLS Accounts. You can use one ADLS account since there is the promise of infinite scale. You could have multiple accounts if they are used for wider access control, different owners, billing/chargeback, management/backup strategy differences, come from different subscriptions, etc. You could also have an HDI cluster with one or more WASBs and one or more ADLS accounts. And each of those WASBs and each ADLS account can be accessed by multiple HDIs.
  • #58 Case Study: http://www.microsoft.com/casestudies/Windows-Azure/Virginia-Polytechnic-Institute-and-State-University/University-Transforms-Life-Sciences-Research-with-Big-Data-Solution-in-the-Cloud/710000003381 Video: http://www.youtube.com/watch?v=4StwPG0qWT0 Virginia Tech is one of the country’s leading research institutions. The university manages a research portfolio of US$454 million. Virginia Tech previously used a network of supercomputers to locate undetected genes in a massive genome database. This and related work by other institutions has the potential to lead to exciting medical breakthroughs, including new cancer therapies and antibiotics used to combat the emergence of drug-resistant bugs. However, as the size of genome databases grows, this approach was no longer tenable. The estimated 2,000 DNA sequencers worldwide are generating 15 petabytes of genome data every year, and the existing computational and storage resources required to work with data sets of this size weren’t keeping up. Rather than getting a grant for millions of dollars to establish their own supercomputing center, Virginia Tech chose Azure HDInsight, paying only for the compute they use. Benefits include: significant cost savings, going from a multi-million-dollar supercomputer center to paying for only the compute you need in the cloud; the ability to access the cloud from anywhere and on any device (even outside the laboratory); and Azure’s ability to elastically scale and keep up with the amount of data being generated. Ultimate benefit: someday being able to find a treatment for cancer.
  • #59  BlackBall is a tea and dessert company based in Taipei, Taiwan. Established in 2006, BlackBall has expanded from 20 stores in Taiwan to 40 more locations throughout Malaysia. To support regional growth, BlackBall wanted better insight into business operations. Each store sent point-of-sale (POS) data to the corporate headquarters, where managers manually entered the information into spreadsheets. They also monitored other sources such as social media, but it was difficult to make connections between the disparate sources of information. “Our main challenge involved reporting,” says Andrew Cheong, Senior Manager at BlackBall. “There were a lot of questions that we were just unable to ask about customer behavior. Without insight into regional demand, it’s very difficult to grow our business.” In addition to wanting to more targeted promotions, the company wanted to improve product distribution. BlackBall had built its reputation on the quality of its highly perishable ingredients, and getting them to the right place at the right time was critical to the company’s success. “We use a lot of fruits in our desserts, and if they don’t taste right, then we lose our competitive advantage,” says Cheong. BlackBall offers more than 70 products, and constantly works on developing new ingredients and combinations. By monitoring customer feedback on social media as well as sales data, the company can plan more strategically. For example, BlackBall is using the information to develop new products and execute more effective promotions. “By capitalizing on the strengths of HDInsight Service, we have a much better understanding of what people want,” says Cheong. “We also found that our marketing can be vastly improved because we can quickly identify top-selling products and push that information out to each store. 
We used to only be able to do that on a weekly basis, but now we can do that in near-real time.” The company is already gaining surprising insights that it can use to improve product distribution and marketing. For example, BlackBall expected to see that the weather affected sales; however, the results were not what it anticipated. “Before, we thought that people would choose cold drinks and desserts in hot weather,” says Cheong. “But contrary to our assumptions, in certain outlets we saw an opposite trend.” As a result, the company can ensure that those outlets are equipped with the staff and supplies needed to meet customer demand.