Sears Case Study: Hadoop as an Enterprise Data Hub

http://www.datameer.com Data has never been more critical to organizations, especially as data volumes and varieties continue to grow. The old approach of a centralized data warehouse becomes archaic and slow as legacy data warehouses fragment over time into multiple, siloed data warehouses. The result is an operational nightmare that requires ETL jobs to model data from one schema to another just to maintain data integrity and quality. In this webinar, Sears CTO Phil Shelley and Datameer CEO Stefan Groschupf highlight the use case and technology that enable Sears to solve the problems of the old data warehouse approach. Learn how Sears significantly reduced its data architecture complexity, cutting time to insight by 30-70%. View the full recording at: http://info.datameer.com/Web-Hadoop-Data-Hub-Sears.html

Presentation Transcript

  • Hadoop as a Data Hub: A Sears Case Study
  • About our Speaker: Phil Shelley
    Dr. Shelley is CTO at Sears Holdings Corporation (SHC), where he leads IT Operations and focuses on the modernization of IT across the company. Phil is also CEO of MetaScale, a subsidiary of Sears Holdings. MetaScale is an IT managed services company that makes Big Data easy by designing, delivering and operating Hadoop-based solutions for analytics, mainframe migration and massive-scale processing, integrated into customers' enterprises.
  • About our Speaker: Stefan Groschupf
    Stefan Groschupf is the co-founder and CEO of Datameer and one of the original contributors to Nutch, the open source predecessor of Hadoop. Prior to Datameer, Stefan was the co-founder and CEO of Scale Unlimited, which implemented custom Hadoop analytics solutions for HP, Sun, Deutsche Telekom, Nokia and others. Earlier, Stefan was CEO of 101Tec, a supplier of Hadoop- and Nutch-based search and text classification software to industry-leading companies such as Apple, DHL and EMI Music. Stefan has also served as CTO at multiple companies, including Sproose, a social search engine company.
  • Hadoop as a Data Hub: a new approach to data management. Dr. Phil Shelley, CTO, Sears Holdings; CEO, MetaScale
  • The Challenge: Challenges & Trends
    – Data Volume / Retention, Batch Window Limits, Escalating IT Costs, Scalability, Ever-Evolving Business, ETL Complexity / Costs, Data Latency / Redundancy, Tight IT Budgets
    – Constant pressure to lower costs, deliver faster, migrate to real time and answer more difficult questions…
    – Batch → Real-Time; Proprietary → Open Source; Capital Expense → Cloud; Heavy Iron → Commodity; Linear → Parallel Processing; Copy and Use → Source Once & Re-Use; Costs → Down; Power → Up
  • What is a Data Hub? A single, consolidated, fully populated data archive that gives users unfettered access to analyze and report on data, with appropriate security, as soon as the data is created by the transactional or other source system.
  • Why a Data Hub
    • Most data latency is removed
    • Users and analysts are put in a self-service mode
    • The concept of a “data cube” is unnecessary
    • Analysis at the lowest level – no need to run at the segment level
    • Any question can be asked
    • Business users and analysts have unrestricted ability to explore
    • Correlation of any data set is immediately possible
    • Significant reduction in reporting and analysis times
      – Time to source the data
      – Time for users to gain access to the data
    • Reduction in IT labor – Source Once, Use Many Times
  • The Traditional Approach
    • Data is copied from source systems via ETL
    • Sub-sets of data are captured
      – Too expensive to keep all detail
      – Takes too long to ETL all data fields from sources
    • Each use of data generates more unique ETL jobs
    • Data is segmented to reduce query times
    • Cubes or views are generated to improve analysis speed
    • Disparate data silos require ETL before users have access
    • Data warehouse costs and performance limitations force archiving and data truncation
    • Tends to lead to different versions of “truth”
    • Time lag or latency from data generation to use
  • Benefits – Hadoop as a Data Hub
    • All data is available
      – All history
      – All detail
    • No need to filter, segment or cube before use
    • Data can be consumed almost immediately
    • No need to silo into different databases to accommodate performance limitations
    • Users do not require IT to ETL data before use – they query the hub in place (see the sketch after this list)
    • Security is applied via Datameer profiles
    • User self-service is a reality
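To make “consume in place” concrete, here is a minimal sketch of a Hadoop Streaming job in the spirit of the deck's era: it aggregates raw sales detail directly from the hub, with no upstream ETL, cube or extract step. The tab-separated layout (store_id, sku, amount), the file and path names, and the job itself are illustrative assumptions, not Sears' actual schema or pipeline.

```python
#!/usr/bin/env python
"""Hadoop Streaming sketch: aggregate raw sales detail in place on the hub.

Illustrative only -- the tab-separated layout (store_id, sku, amount) and
the notion of a "sales detail" feed are assumptions, not Sears' schema.
The same file is passed to -mapper and -reducer with a mode argument.
"""
import sys


def mapper():
    # Read raw detail records straight from the hub; emit store_id -> amount.
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:
            continue  # tolerate malformed records instead of failing the job
        try:
            amount = float(fields[2])
        except ValueError:
            continue
        print("%s\t%s" % (fields[0], amount))


def reducer():
    # Streaming sorts by key between map and reduce, so a running total works.
    current, total = None, 0.0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t", 1)
        if key != current:
            if current is not None:
                print("%s\t%.2f" % (current, total))
            current, total = key, 0.0
        total += float(value)
    if current is not None:
        print("%s\t%.2f" % (current, total))


if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

A hypothetical launch might be: hadoop jar /usr/lib/hadoop/hadoop-streaming.jar -input /hub/raw/sales -output /user/analyst/sales_by_store -mapper "sales_rollup.py map" -reducer "sales_rollup.py reduce" -file sales_rollup.py. The point is that the analyst reads the full-detail copy directly; no cube build or ETL job sits in between.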
  • Prerequisites
    • An enterprise data architecture that has a Data Hub as a foundation
    • Data sourcing must be controlled
    • Metadata must be created for data sources
    • A leader with the vision and capability to drive the effort
    • Willing business users to pilot and coach others
    • A sustained strategy for enterprise data architecture and governance
    • A carefully designed Hadoop data layer architecture
  • Key Concepts
    • A Data Hub is now a reality
    • Drives lower costs and reduces delays
    • Time to value for data is reduced
    • Business users and analysts are empowered
    • The most important:
      – Source Once – Re-use Many Times
      – Source everything – Retain everything
  • Key Learning
    • ETL complexity is no longer needed – DATA HUB
      – Source Once – Re-Use Many Times
      – ETL is transformed to ELTTTTTT (extract and load once, then transform as many times as needed) with lower data latency – see the sketch after this list
      – Consume data in place with Datameer
    • ETL-induced data latency is largely eliminated
      – Analysis is routinely possible within minutes of data creation
    • Long-running overnight workloads on legacy systems
      – Can be eliminated and executed at any time
      – Run times are a fraction of the original clock time
    • Batch processing on mainframes or other conventional batch platforms
      – Moved to Hadoop
      – Runs 10, 50, even 100 times faster
    • Intelligent Archive
      – Put your archive/tape data on Hadoop and make it intelligent
      – An archive with the ability to run analytics or join it with other data
    • Modernize Legacy
      – Mainframe MIPS reduction has a very attractive ROI
      – Move data warehouse workload – reduce cost – go faster
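As a companion to the ELTTTTTT point above, this is a minimal sketch of the load-first, transform-later ordering, reusing the hypothetical sales_rollup.py streaming job from the earlier sketch. All paths and file names are assumptions; this is not the actual Sears/MetaScale pipeline.

```python
#!/usr/bin/env python
"""ELT sketch: land the raw extract on the hub first, transform in place later.

All paths and names are illustrative assumptions. The key ordering is
E+L up front (cheap, fast, no schema mapping), then any number of T's
run in place against the same raw copy.
"""
import subprocess

RAW = "/hub/raw/sales/2012-10-01"      # hypothetical landing zone
OUT = "/user/analyst/sales_by_store"   # hypothetical derived output

# Extract + Load: copy the source extract onto the hub unchanged.
# Skipping schema mapping at this stage is what keeps data latency low.
subprocess.check_call(["hdfs", "dfs", "-mkdir", "-p", RAW])
subprocess.check_call(["hdfs", "dfs", "-put", "sales_extract.tsv", RAW])

# Transform (the first of many T's): run a job in place against the raw
# copy. Other consumers run their own transforms against the same data,
# so the source is read once and re-used many times.
subprocess.check_call([
    "hadoop", "jar", "/usr/lib/hadoop/hadoop-streaming.jar",
    "-input", RAW,
    "-output", OUT,
    "-mapper", "sales_rollup.py map",
    "-reducer", "sales_rollup.py reduce",
    "-file", "sales_rollup.py",
])
```

Because the transform is decoupled from the load, a changed or failed transformation is simply re-run against the raw copy on the hub; nothing has to be re-extracted from the source system.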
  • Sample Reports - Datameer
  • Questions and Answers
  • Online Resources
    • Try Datameer: www.datameer.com
    • Visit MetaScale: www.metascale.com
    • Follow us on Twitter: @datameer & @BigDataMadeEasy