Philly Code Camp 2013 Mark Kromer Big Data with SQL Server
These are my slides from May 2013 Philly Code Camp at Penn State Abington. I will post the samples, code and scripts on my blog here following the event this Saturday: http://www.kromerbigdata.com

    Presentation Transcript

    • Big Data with SQL Server
      Philly Code Camp 2013.1, May 2013
      http://www.pssug.org
      Mark Kromer
      http://www.kromerbigdata.com
      @kromerbigdata | @mssqldude
      makromer@microsoft.com
    • What we’ll (try) to cover today
      ‣ What is Big Data?
      ‣ The Big Data and Apache Hadoop environment
      ‣ Big Data Analytics
      ‣ SQL Server in the Big Data world
      ‣ Microsoft + Hortonworks (Yahoo!) = HDInsight
    • Big Data 101
      ‣ 3 V’s
        ‣ Volume – terabyte records, transactions, tables, files
        ‣ Velocity – batch, near-time, real-time (analytics), streams
        ‣ Variety – structured, unstructured, semi-structured, and all of the above in a mix
      ‣ Text Processing
        ‣ Techniques for processing and analyzing unstructured (and structured) LARGE files
      ‣ Analytics & Insights
      ‣ Distributed File System & Programming
    • ‣ Batch processing
      ‣ Commodity hardware
      ‣ Data locality, no shared storage
      ‣ Scales linearly
      ‣ Great for large text file processing, not so great on small files
      ‣ Distributed programming paradigm
    • Popular Hadoop Distributions
      Hosted PaaS Hadoop platforms: Amazon EMR, Pivotal, Microsoft Hadoop on Azure
    • Mark’s Big Data Myths
      ‣ Big Data ≠ NoSQL
        ‣ NoSQL has similar Internet-scale Web origins as the Hadoop stack (Yahoo!, Google, Facebook, et al.) but is not the same thing
        ‣ Facebook, for example, uses HBase from the Hadoop stack
      ‣ Big Data ≠ Real Time
        ‣ Big Data is primarily about batch processing huge files in a distributed manner and analyzing data that was otherwise too complex to provide value
        ‣ Use in-memory analytics for real-time insights
      ‣ Big Data ≠ Data Warehouse
        ‣ I still refer to large multi-TB DWs as “VLDB”
        ‣ Big Data is about crunching stats in text files for discovery of new patterns and insights
        ‣ Use the DW to aggregate and store the summaries of those calculations for reporting
    • Big Data Analytics Web Platform - Example
    • MapReduce Framework (Map)

      using Microsoft.Hadoop.MapReduce;
      using System.Text.RegularExpressions;

      public class TotalHitsForPageMap : MapperBase
      {
          // The slide referenced these names without showing their
          // definitions; the values below are illustrative placeholders.
          private const int expected = 10;  // field count of a complete W3C log record
          private const int pagePos = 4;    // index of the page/URL field
          private const string hit = "1";   // each matching line counts as one hit

          public override void Map(string inputLine, MapperContext context)
          {
              context.Log(inputLine);
              var parts = Regex.Split(inputLine, @"\s+");  // split on whitespace
              if (parts.Length != expected)  // only take records with all values
              {
                  return;
              }
              context.EmitKeyValue(parts[pagePos], hit);
          }
      }
    • MapReduce Framework (Reduce & Job)

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using Microsoft.Hadoop.MapReduce;

      public class TotalHitsForPageReducerCombiner : ReducerCombinerBase
      {
          public override void Reduce(string key, IEnumerable<string> values, ReducerCombinerContext context)
          {
              // Sum the per-mapper hit counts for each page key.
              context.EmitKeyValue(key, values.Sum(e => long.Parse(e)).ToString());
          }
      }

      public class TotalHitsJob : HadoopJob<TotalHitsForPageMap, TotalHitsForPageReducerCombiner>
      {
          public override HadoopJobConfiguration Configure(ExecutorContext context)
          {
              var retVal = new HadoopJobConfiguration();
              retVal.InputPath = Environment.GetEnvironmentVariable("W3C_INPUT");
              retVal.OutputFolder = Environment.GetEnvironmentVariable("W3C_OUTPUT");
              retVal.DeleteOutputFolder = true;
              return retVal;
          }
      }
    • Get Data into Hadoop
      ‣ Linux-style shell commands to access data in HDFS
      ‣ Put a file in HDFS:
        hadoop fs -put sales.csv /import/sales.csv
      ‣ List files in HDFS:
        c:\Hadoop> hadoop fs -ls /import
        Found 1 items
        -rw-r--r--  1 makromer supergroup  114  2013-05-07 12:11  /import/sales.csv
      ‣ View a file in HDFS:
        c:\Hadoop> hadoop fs -cat /import/sales.csv
        Kromer,123,5,55
        Smith,567,1,25
        Jones,123,9,99
        James,11,12,1
        Johnson,456,2,2.5
        Singh,456,1,3.25
        Yu,123,1,11
      ‣ Now we can work on the data with MapReduce, Hive, Pig, etc.
    • Use Hive for Data Schema and Analysis

      create external table ext_sales (
          lastname      string,
          productid     int,
          quantity      int,
          sales_amount  float
      )
      row format delimited fields terminated by ','
      stored as textfile
      location '/user/makromer/hiveext/input';

      LOAD DATA INPATH '/user/makromer/import/sales.csv' OVERWRITE INTO TABLE ext_sales;
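      For the analysis half, a minimal HiveQL query over this external table (my illustration, not from the slides) totals sales per customer:

      SELECT lastname, SUM(sales_amount) AS total_sales
      FROM ext_sales
      GROUP BY lastname
      ORDER BY total_sales DESC;

      Hive compiles this into MapReduce jobs over the CSV files in HDFS, so the same query pattern scales from the seven-row sample above to terabytes.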
    • Sqoop: Data Transfer to & from Hadoop & SQL Server

      sqoop import --connect jdbc:sqlserver://localhost --username sqoop --password password --table customers -m 1

      > hadoop fs -cat /user/mark/customers/part-m-00000
      > 5,Bob Smith

      sqoop export --connect jdbc:sqlserver://localhost --username sqoop --password password -m 1 --table customers --export-dir /user/mark/data/employees3

      12/11/11 22:19:24 INFO mapreduce.ExportJobBase: Transferred 201 bytes in 32.6364 seconds (6.1588 bytes/sec)
      12/11/11 22:19:24 INFO mapreduce.ExportJobBase: Exported 4 records.
    • SQL Server Big Data – Data Loading: Amazon HDFS & EMR data loading from an Amazon S3 bucket
    • Role of NoSQL in a Big Data Analytics Solution
      ‣ Use NoSQL to store data quickly, without the overhead of an RDBMS
        ‣ HBase, plain old HDFS, Cassandra, MongoDB, Dynamo, just to name a few
      ‣ Why NoSQL?
        ‣ In the world of “Big Data”
        ‣ “Schema later”
        ‣ Ignore ACID properties
        ‣ Drop data into a key-value store quick & dirty
        ‣ Worry about query & read later
      ‣ Why NOT NoSQL?
        ‣ In the world of Big Data Analytics, you will need support from analytical tools with a SQL, SAS, or MR interface
      ‣ SQL Server and NoSQL
        ‣ Not a natural fit
        ‣ Use HDFS or your favorite NoSQL database
        ‣ Consider turning off SQL Server locking mechanisms
        ‣ Focus on writes, not reads (read uncommitted; see the sketch below)
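      To make that last bullet concrete, here is a minimal T-SQL sketch (mine, not from the slides; the table name is hypothetical) of reading under the read-uncommitted isolation level, so scans take no shared locks and do not block a write-heavy ingest:

      -- dbo.RawEvents is a hypothetical ingest table.
      SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

      SELECT COUNT(*)        -- dirty read: no shared locks taken,
      FROM dbo.RawEvents;    -- so concurrent bulk inserts are not blocked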
    • SQL Server Big Data Environment
      ‣ SQL Server Database
        ‣ SQL 2012 Enterprise Edition
        ‣ Page compression
        ‣ 2012 columnar compression on fact tables (sketch below)
        ‣ Clustered index on all tables
        ‣ Auto-update stats, async
        ‣ Partition fact tables by month and archive data with a sliding-window technique
        ‣ Drop all indexes before nightly ETL load jobs
        ‣ Rebuild all indexes when ETL completes
      ‣ SQL Server Analysis Services
        ‣ SSAS 2012 Enterprise Edition
        ‣ 2008 R2 OLAP cubes partition-aligned with the DW
        ‣ 2012 cubes: in-memory tabular cubes
        ‣ All access through MSMDPUMP or SharePoint
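      A minimal sketch of the two compression settings above, assuming a hypothetical fact table dbo.FactSales with illustrative column names (the slides name no tables):

      -- Page compression (SQL Server 2012 Enterprise Edition).
      ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);

      -- 2012 columnar compression: a nonclustered columnstore index.
      -- In SQL Server 2012 this makes the table read-only, which fits the
      -- drop-before-ETL / rebuild-after-ETL pattern in the list above.
      CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales
          ON dbo.FactSales (DateKey, ProductKey, Quantity, SalesAmount);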
    • SQL Server Big Data Analytics Features
      ‣ Columnstore
      ‣ Sqoop adapter
      ‣ PolyBase
      ‣ Hive
      ‣ In-memory analytics
      ‣ Scale-out MPP
    • Microsoft’s Data Solution – Big Data & PDW
      Sensors, devices, bots, and crawlers plus ERP, CRM, and LOB apps supply unstructured and structured data. Petabytes of unstructured data go to Hadoop on Windows Azure or Hadoop on Windows Server; hundreds of TB of structured data go to Parallel Data Warehouse, with connectors between the two. On top sit the BI platform (SSRS, SSAS), familiar end-user tools (Excel with PowerPivot, embedded BI, predictive analytics), and the Data Market.
    • Microsoft Big Data
      Discover, combine, refine: relational, non-relational, and streaming data – any data, any size, anywhere. Immersive data experiences connecting with the world’s data across self-service, collaboration, corporate apps, and devices. Analytical platform: Parallel Data Warehouse, Microsoft HDInsight Server, HDInsight Service, StreamInsight, PowerPivot, Power View.
    • Microsoft .NET Hadoop APIs
      ‣ WebHDFS
      ‣ LINQ to Hive
      ‣ MapReduce
        ‣ C#
        ‣ Java
      ‣ Hive
      ‣ Pig
      ‣ http://hadoopsdk.codeplex.com/
      ‣ SQL on Hadoop
        ‣ Cloudera Impala
        ‣ Teradata SQL-H
        ‣ Microsoft PolyBase
        ‣ Hadapt
    • Data Movement to the Cloud
      ‣ Use Windows Azure Blob Storage
        • Already stored in 3 copies
        • Hadoop can read from Azure blob storage
        • Allows you to upload while using no Hadoop network or CPU resources
      ‣ Compress files
        • Hadoop can read gzip
        • Uses less network resources than uncompressed
        • Costs less for direct storage
        • Compress directories where source files are created as well
    • Wrap-up
      ‣ What is a Big Data approach to analytics?
        ‣ Massive scale
        ‣ Data discovery & research
        ‣ Self-service
        ‣ Reporting & BI
      ‣ Why do we take this Big Data Analytics approach?
        ‣ TBs of change data in each subject area
        ‣ The source data is variable and unstructured
        ‣ SSIS ETL alone couldn’t keep up or handle the complexity
      ‣ SQL Server 2012 columnstore and tabular SSAS 2012 are key to using SQL Server for Big Data
        ‣ With the configs mentioned previously, SQL Server works great
      ‣ Analytics on Big Data also requires Big Data Analytics tools
        ‣ Aster, Tableau, PowerPivot, SAS, Parallel Data Warehouse