Big Data - Insights & Challenges
This document describes the challenges that businesses could face in the Big Data era.


Big Data – Insights & Challenges (2012)
Rupen Momaya
WEschool Part Time Masters Program
8/28/2012
Table of Contents
Executive Summary
Introduction
1.0 Big Data Basics
  1.1 What is Big Data?
  1.2 Big Data Steps, Vendors & Technology Landscape
  1.3 What Business Problems are being targeted?
  1.4 Key Terms
    1.4.1 Data Scientists
    1.4.2 Massively Parallel Processing (MPP)
    1.4.3 In-Memory Analytics
    1.4.4 Structured, Semi-Structured & Unstructured Data
2.0 Big Data Infrastructure
  2.1 Storage
    2.1.1 Why RAID Fails at Scale
    2.1.2 Scale-up vs Scale-out NAS
    2.1.3 Object-Based Storage
  2.2 Apache Hadoop
  2.3 Data Appliances
    2.3.1 HP Vertica
    2.3.2 Teradata Aster
3.0 Domain-Wise Challenges in the Big Data Era
  3.1 Log Management
  3.2 Data Integrity & Reliability in the Big Data Era
  3.3 Backup Management in the Big Data Era
  3.4 Database Management in the Big Data Era
4.0 Big Data Use Cases
  4.1 Potential Use Cases
  4.2 Big Data Actual Use Cases
Bibliography
Table of Figures
Figure 1 - Big Data Statistics Infographic
Figure 2 - Big Data Vendors & Technology Landscape
Figure 3 - HP Vertica Analytics Appliance
Figure 4 - Teradata Unified Big Data Architecture for the Enterprise
Figure 5 - Framework for Choosing the Teradata Aster Solution
Figure 6 - Potential Use Cases for Big Data
Figure 7 - Big Data Analytics Business Model
Figure 8 - Survey Results: Use of Open Source to Manage Big Data
Figure 9 - Big Data Value Potential Index
Executive Summary
The Internet has made vast new sources of data available to business executives. Big data comprises data sets too large to be handled by traditional systems. To remain competitive, business executives need to adopt the new technologies and techniques emerging in the big data era.
Big data includes structured, semi-structured and unstructured data. Structured data are data formatted for use in a database management system. Semi-structured and unstructured data include all types of unformatted data, including multimedia and social media content. Big data are also produced by a myriad of hardware objects, including sensors and actuators embedded in physical objects, collectively termed the Internet of Things.
Data storage techniques used include clustered network-attached storage (NAS) and object-based storage. Clustered NAS deploys storage devices attached to a network; groups of storage devices attached to different networks are then clustered together. Object-based storage systems distribute sets of objects over a distributed storage system.
Hadoop, used to process unstructured and semi-structured big data, uses the map paradigm to locate all relevant data and then select only the data directly answering the query. NoSQL, MongoDB, and Terrastore process structured big data. NoSQL data is characterized by being basically available, soft state (changeable), and eventually consistent. MongoDB and Terrastore are both NoSQL-related products used for document-oriented applications.
The advent of the age of big data poses opportunities and challenges for businesses. Previously unavailable forms of data can now be saved, retrieved, and processed. However, changes to hardware, software, and data processing techniques are necessary to employ this new paradigm.
Introduction
The Internet has grown tremendously in the last decade, from 304 million users in March 2000 to 2,280 million users in March 2012, according to Internet World Stats. Worldwide information is more than doubling every two years, with 1.8 zettabytes (1.8 trillion gigabytes) projected to be created and replicated in 2011, according to a study conducted by the research firm IDC.
"Big Data" is a buzzword, or catch-phrase, used to describe a volume of both structured and unstructured data that is so large it is difficult to process with traditional database and software techniques. An example of Big Data might be petabytes (1,024 terabytes), exabytes (1,024 petabytes) or zettabytes of data consisting of billions to trillions of records of millions of people, all from different sources (e.g. blogs, social media, email, sensors, RFID readers, photographs, videos, microphones, mobile data and so on). The data is typically loosely structured, often incomplete and inaccessible.
When dealing with larger data sets, organizations face difficulties in being able to create, manipulate, and manage Big Data. Scientists regularly encounter this problem in meteorology, genomics, connectomics, complex physics simulations, biological and environmental research, Internet search, finance and business informatics. Big data is particularly a problem in business analytics because standard tools and procedures are not designed to search and analyze massive datasets. While the term may seem to reference the volume of data, that isn't always the case. The term Big Data, especially when used by vendors, may refer to the technology (the tools and processes) that an organization requires to handle the large amounts of data and storage facilities.

1.0 Big Data Basics

1.1 What is Big Data?
The infographic below depicts the expected market size of Big Data and some related statistics.
Figure 1 - Big Data Statistics Infographic

1.2 Big Data Steps, Vendors & Technology Landscape
• Data Acquisition: Data is collected from the data sources and distributed across multiple nodes -- often a grid -- each of which processes a subset of data in parallel. Here we have technology providers like IBM, HP etc., data providers like Reuters, Salesforce etc., and social network websites like Facebook, Google+, LinkedIn etc.
• Marshalling: In this domain we have Very Large Data Warehousing and BI appliances, with actors like Actian, EMC² (Greenplum), HP (Vertica), IBM (Netezza) etc.
• Analytics: In this phase we have the predictive technologies (such as data mining); vendors include Adobe, EMC², GoodData, Hadoop MapReduce etc.
• Action: Includes all the Data Acquisition providers plus the ERP, CRM and BPM actors, including Adobe, Eloqua, EMC² etc.
In both the Analytics and Action phases, BI tools vendors include GoodData, Google, HP (Autonomy), IBM (Cognos suite) etc.
• Data Governance: An efficient master data management solution. As defined, data governance applies to each of the six preceding stages of Big Data delivery. By establishing processes and guiding principles, it sanctions behaviors around data. In short, data governance means that the application of Big Data is useful and relevant. It's an insurance policy that the right questions are being asked, so we won't be squandering the immense power of new Big Data technologies that make processing, storage and delivery speed more cost-effective and nimble than ever.

Figure 2 - Big Data Vendors & Technology Landscape

1.3 What Business Problems are being targeted?
World-class companies are targeting a new set of business problems that were hard to solve before:
• Modeling true risk
• Customer churn analysis
• Flexible supply chains
• Loyalty pricing
• Recommendation engines
• Ad targeting
• Precision targeting
• PoS transaction analysis
• Threat analysis
• Trade surveillance
• Search quality fine tuning
• Mashups such as location + ad targeting

Data growth curve: Terabytes -> Petabytes -> Exabytes -> Zettabytes -> Yottabytes -> Brontobytes -> Geopbytes. It is getting more interesting.
Analytical infrastructure curve: Databases -> Data marts -> Operational Data Stores (ODS) -> Enterprise Data Warehouses -> Data Appliances -> In-Memory Appliances -> NoSQL Databases -> Hadoop Clusters.

1.4 Key Terms

1.4.1 Data Scientists
A data scientist represents an evolution from the business or data analyst role. Data scientists, also known as data analysts, are professionals with a core statistics or mathematics background coupled with good knowledge of analytics and data software tools. A McKinsey study on Big Data states, "India will need nearly 100,000 data scientists in the next few years."
Data Scientist is a fairly new role, defined by Hilary Mason of bit.ly as someone who can obtain, scrub, explore, model and interpret data, blending hacking, statistics and machine learning to cull information from data. Data scientists take a blend of the hackers' arts, statistics, and machine learning, and apply their expertise in mathematics and in the domain of the data -- where the data originated -- to process the data into useful information. This requires the ability to make creative decisions about the data and the information created, and to maintain a perspective that goes beyond ordinary scientific boundaries.

1.4.2 Massively Parallel Processing (MPP)
MPP is the coordinated processing of a program by multiple processors that work on different parts of the program, with each processor using its own operating system and memory. An MPP system is considered better than a symmetric multiprocessing (SMP) system for applications that allow a number of databases to be searched in parallel. These include decision support system and data warehouse applications.

1.4.3 In-Memory Analytics
The key difference between conventional BI tools and in-memory products is that the former query data on disk while the latter query data in random access memory (RAM). When a user runs a query against a typical data warehouse, the query normally goes to a database that reads the information from multiple tables stored on a server's hard disk. With a server-based in-memory database, all information is initially loaded into memory; users then query and interact with the data loaded into the machine's memory.
Does an in-memory analytics platform replace or augment traditional in-database approaches?
The answer is that it is quite complementary. In-database approaches put a large focus on the data preparation and scoring portions of the analytic process. The value of in-database processing is the ability to handle terabytes or petabytes of data effectively. Much of the processing may not be highly sophisticated, but it is critical.
The new in-memory architectures use a massively parallel platform to enable the multiple terabytes of system memory to be utilized (conceptually) as one big pool of memory. This means that samples can be much larger, or even eliminated, and the number of variables tested can be expanded immensely. In-memory approaches fit best in situations where there is a need for:
• High volume & speed: it is necessary to run many, many models quickly.
• High width & depth: it is desired to test hundreds or thousands of metrics across tens of millions of customers (or other entities).
• High complexity: it is critical to run processing-intensive algorithms on all this data and to allow for many iterations to occur.
There are a number of in-memory analytics tools and technologies with different architectures. Boris Evelson (Forrester Research) defines the following five types of business intelligence in-memory analytics:
• In-memory OLAP: classic MOLAP (Multidimensional Online Analytical Processing) cube loaded entirely into memory.
• In-memory ROLAP: relational OLAP metadata loaded entirely into memory.
• In-memory inverted index: index, with data, loaded into memory.
• In-memory associative index: an array/index with every entity/attribute correlated to every other entity/attribute.
• In-memory spreadsheet: spreadsheet-like array loaded entirely into memory.

1.4.4 Structured, Semi-Structured & Unstructured Data
Structured data is the type that fits neatly into a standard relational database management system (RDBMS) and lends itself to that type of processing. Semi-structured data has some level of commonality but does not fit the structured data type. Unstructured data varies in its content and can change from entry to entry.

Structured Data        Semi-Structured Data    Unstructured Data
Customer records       Web logs                Pictures
Point-of-sale data     Social media            Video editing data
Inventory              E-commerce              Productivity (Office docs)
Financial records                              Geological data

The table above gives examples of each.
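To make the three categories concrete, here is a minimal Python sketch of how each class of data is typically handled in code. The record formats and field names are illustrative only, not taken from the report: structured rows map directly onto a fixed schema, semi-structured records carry their own varying fields, and unstructured text must be interpreted before it yields any fields at all.

```python
import csv
import io
import json

# Structured: a point-of-sale CSV row fits a fixed, known schema.
pos_csv = io.StringIO("txn_id,store,amount\n1001,Store-7,24.99\n")
for row in csv.DictReader(pos_csv):
    print(row["txn_id"], float(row["amount"]))

# Semi-structured: a web-log / social-media event as JSON; fields vary per record.
event = json.loads('{"user": "u42", "action": "view", "tags": ["promo", "mobile"]}')
print(event.get("action"), event.get("tags", []))

# Unstructured: free text has no schema; even simple facts must be extracted.
review = "Loved the checkout speed, but delivery took nine days."
word_count = len(review.split())
mentions_delivery = "delivery" in review.lower()
print(word_count, mentions_delivery)
```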
2.0 Big Data Infrastructure

2.1 Storage

2.1.1 Why RAID Fails at Scale
RAID schemes are based on parity, and at their root, if more than two drives fail simultaneously, data is not recoverable. The statistical likelihood of multiple drive failures has not been an issue in the past. However, as drive capacities continue to grow beyond the terabyte range and storage systems continue to grow to hundreds of terabytes and petabytes, the likelihood of multiple drive failures is now a reality.
Further, drives aren't perfect: typical SATA drives have a published unrecoverable bit error rate of 1 in 10^14, meaning that once every 100,000,000,000,000 bits read, there will be a bit that is unrecoverable. Doesn't seem significant? In today's big data storage systems, it is. The likelihood of having one drive fail and then encountering a bit error while rebuilding from the remaining RAID set is high in real-world scenarios. To put this into perspective, when reading 10 terabytes the probability of hitting an unreadable bit is likely (56%), and when reading 100 terabytes it is nearly certain (99.97%).

2.1.2 Scale-up vs Scale-out NAS
A traditional scale-up system provides a small number of access points, or data servers, that sit in front of a set of disks protected with RAID. As these systems needed to serve more data to more users, the storage administrator would add more disks to the back end, but this only turned the data servers into a choke point. Larger and faster data servers could be built with faster processors and more memory, but the architecture still had significant scalability issues.
Scale-out uses the approach of more of everything: instead of adding drives behind a pair of servers, it adds servers, each with its own processors, memory, network interfaces and storage capacity. To add capacity to a grid -- the scale-out version of an array -- a new node is inserted with all of its resources. This architecture requires a number of things to make it work, from both a technology and a financial aspect. Some of these factors include:
• Clustered architecture – for this model to work, the entire grid needs to operate as a single entity, and each node in the grid must be able to pick up a portion of the function of any other node that may fail.
• Distributed/parallel file system – the file system must allow a file to be accessed from any one node, or any number of nodes, and sent to the requesting system. This requires different mechanisms underlying the file system: distribution of data across multiple nodes for redundancy, a distributed metadata or locking mechanism, and data scrubbing/validation routines.
• Commodity hardware – for these systems to be affordable they must rely on commodity hardware that is inexpensive and easily accessible instead of purpose-built systems.
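Returning for a moment to the rebuild-failure figures quoted in section 2.1.1: the 56% and 99.97% numbers follow directly from the published bit error rate. A minimal sketch of the arithmetic, assuming the 1-in-10^14 rate stated above and independent bit errors:

```python
# Probability of hitting at least one unrecoverable bit error while reading n_bytes,
# assuming independent errors at the published rate of 1 bit in 10^14.
BRE = 1e-14  # unrecoverable errors per bit read

def p_unreadable(n_bytes: float) -> float:
    n_bits = n_bytes * 8
    return 1 - (1 - BRE) ** n_bits

print(f"10 TB rebuild:  {p_unreadable(10e12):.2%}")   # ~55% -- an unreadable bit is likely
print(f"100 TB rebuild: {p_unreadable(100e12):.2%}")  # ~99.97% -- nearly certain
```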
Benefits of scale-out
There are a number of significant benefits to these new scale-out systems that meet the needs of big data challenges:
• Manageability – when data can grow in a single file system namespace, the manageability of the system increases significantly; a single data administrator can now manage a petabyte or more of storage, versus 50 or 100 terabytes on a scale-up system.
• Elimination of stovepipes – since these systems scale linearly and do not have the bottlenecks that scale-up systems create, all data is kept in a single file system in a single grid, eliminating the stovepipes introduced by the multiple arrays and file systems otherwise required.
• Just-in-time scalability – as storage needs grow, an appropriate number of nodes can be added at the time they are needed. With scale-up arrays, administrators had to guess at the final size their data might reach on that array, which often led to purchasing large data servers with only a few disks behind them initially, so the data server would not become a bottleneck as disks were added.
• Increased utilization rates – since the data servers in these scale-out systems can address the entire pool of storage, there is no stranded capacity.
There are five core tenets of scale-out NAS: it should be simple to scale, offer predictable performance, be efficient to operate, be always available, and be proven to work in a large enterprise.

EMC Isilon
EMC Isilon is the scale-out platform that delivers ideal storage for Big Data. Powered by the OneFS operating system, Isilon nodes are clustered to create a high-performing, single pool of storage. In May 2011, EMC Corporation announced the world's largest single file system with the introduction of EMC Isilon's IQ 108NL scale-out NAS hardware product. Leveraging three-terabyte (TB) enterprise-class Hitachi Ultrastar drives in a 4U node, the 108NL scales to more than 15 petabytes (PB) in a single file system and single volume, providing the storage foundation for maximizing the big data opportunity. EMC also announced Isilon's SmartLock data retention software application, delivering immutable protection for big data to ensure the integrity and continuity of big data assets from initial creation to archival.

2.1.3 Object-Based Storage
Object storage is based on a single, flat address space that enables the automatic routing of data to the right storage systems, and the right tier and protection levels within those systems, according to its value and stage in the data life cycle.
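The flat, key-addressed model just described is easiest to see through an S3-style API. A minimal sketch using boto3 against a generic S3-compatible object store; the endpoint, credentials, bucket and key names are placeholders, not anything from the report:

```python
import boto3

# Any S3-compatible object store exposes the same flat namespace: bucket + key.
# Endpoint, bucket and key below are placeholders for illustration only.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Write: placement, replication and tiering happen behind the key.
s3.put_object(Bucket="sensor-archive",
              Key="2012/08/28/turbine-17.json",
              Body=b'{"rpm": 1450, "temp_c": 61.2}')

# Read: no directory tree to traverse -- the key alone locates the object.
obj = s3.get_object(Bucket="sensor-archive", Key="2012/08/28/turbine-17.json")
print(obj["Body"].read())
```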
Better data availability than RAID: In a properly configured object storage system, content is replicated so that a minimum of two replicas assure continuous data availability. If a disk dies, all other disks in the cluster join in to replace the lost replicas while the system still runs at nearly full speed. Recovery takes only minutes, with no interruption of data availability and no noticeable performance degradation.
Provides unlimited capacity and scalability: In object storage systems there is no directory hierarchy (or "tree"), and an object's location does not have to be specified the way a directory path has to be known in order to retrieve a file. This enables object storage systems to scale to petabytes and beyond, without limits on the number of files (objects), file size or file system capacity, such as the 2-terabyte restriction that is common for Windows and Linux file systems.
Backups are eliminated: With a well-designed object storage system, backups are not required. Multiple replicas ensure that content is always available, and an offsite disaster recovery replica can be created automatically if desired.
Automatic load balancing: A well-designed object storage cluster is totally symmetrical, which means that each node is independent, provides an entry point into the cluster and runs the same code.
Companies that provide this are CleverSafe, Compuverde, Amplidata, Caringo, EMC (Atmos), Hitachi Data Systems (Hitachi Content Platform), NetApp (StorageGRID) and Scality.

2.2 Apache Hadoop
Apache Hadoop has been the driving force behind the growth of the big data industry. It is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications with both reliability and data motion.
MapReduce is the core of Hadoop. Created at Google in response to the problem of building web search indexes, the MapReduce framework is the powerhouse behind most of today's big data processing. In addition to Hadoop, you'll find MapReduce inside MPP and NoSQL databases such as Vertica or MongoDB. The important innovation of MapReduce is the ability to take a query over a dataset, divide it, and run it in parallel over multiple nodes. Distributing the computation solves the issue of data too large to fit onto a single machine. Combine this technique with commodity Linux servers and you have a cost-effective alternative to massive computing arrays.
HDFS: MapReduce distributes computation over multiple servers, but for that computation to take place, each server must have access to the data. This is the role of HDFS, the Hadoop Distributed File System. HDFS and MapReduce are robust: servers in a Hadoop cluster can fail without aborting the computation process, because HDFS replicates data with redundancy across the cluster. On completion of a calculation, a node writes its results back into HDFS. There are no restrictions on the data that HDFS stores; data may be unstructured and schemaless. By contrast, relational databases require that data be structured and schemas defined before storing the data. With HDFS, making sense of the data is the responsibility of the developer's code.
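To make the MapReduce model concrete, here is a minimal word-count job written for Hadoop Streaming, which lets the map and reduce steps be ordinary scripts reading stdin and writing stdout. The script and file names, and the invocation shown afterwards, are illustrative sketches rather than anything prescribed by the report:

```python
#!/usr/bin/env python
# wordcount_streaming.py -- run as "mapper" or "reducer" under Hadoop Streaming.
# Hadoop sorts the mapper output by key before the reducer sees it.
import sys
from itertools import groupby

def mapper():
    # Emit (word, 1) for every word on stdin; Hadoop runs one copy per input split.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Input arrives grouped by word because Hadoop sorted it; sum the counts.
    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "mapper" else reducer()
```

A typical invocation (jar location and HDFS paths are placeholders) would be: hadoop jar hadoop-streaming.jar -input /logs -output /counts -mapper "python wordcount_streaming.py mapper" -reducer "python wordcount_streaming.py reducer" -file wordcount_streaming.py.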
Why would a company be interested in Hadoop?
The number one reason is that the company wants to take advantage of unstructured or semi-structured data. This data will not fit well into a relational database, but Hadoop offers a scalable and relatively easy-to-program way to work with it. This category includes emails, web server logs, instrumentation of online stores, images, video and external data sets (such as a list of small businesses organized by geographical area). All this data can contain information that is critical to the business and should reside in your data warehouse, but it needs a lot of pre-processing, and this pre-processing will not happen in an Oracle RDBMS (for example).
The other reason to look into Hadoop is for information that exists in the database but can't be efficiently processed within the database. This is a wide use case, and it is usually labelled "ETL" because the data is going out of an OLTP system and into a data warehouse. You use Hadoop when 99% of the work is in the "T" of ETL: processing the data into useful information.

2.3 Data Appliances
Purpose-built solutions like Teradata, IBM/Netezza, EMC/Greenplum, SAP HANA (High-Performance Analytic Appliance), HP Vertica and Oracle Exadata are forming a new category. Data appliances are one of the fastest-growing categories in Big Data. They integrate database, processing, and storage in an integrated system optimized for analytics, offering:
• Processing close to the data source
• Appliance simplicity (ease of procurement; limited consulting)
• Massively parallel architecture
• Platform for advanced analytics
• Flexible configurations and extreme scalability

2.3.1 HP Vertica
Figure 3 - HP Vertica Analytics Appliance
The Vertica Analytics Platform is purpose-built from the ground up to enable companies to extract value from their data at the speed and scale they need to thrive in today's economy.
Vertica has been designed and built since its inception for today's most demanding analytic workloads, and each Vertica component is able to take full advantage of the others by design.
Key features of the Vertica Analytics Platform:
o Real-Time Query & Loading » Capture the time value of data by continuously loading information, while simultaneously allowing immediate access for rich analytics.
o Advanced In-Database Analytics » An ever-growing library of features and functions to explore and process more data closer to the CPU cores, without the need to extract it.
o Database Designer & Administration Tools » Powerful setup, tuning and control with minimal administration effort; continual improvements can be made while the system remains online.
o Columnar Storage & Execution » Perform queries 50x-1000x faster by eliminating costly disk I/O, without the hassle and overhead of indexes and materialized views.
o Aggressive Data Compression » Accomplish more with less capex, while delivering superior performance with an engine that operates on compressed data.
o Scale-Out MPP Architecture » Vertica scales linearly and limitlessly by just adding industry-standard x86 servers to the grid.
o Automatic High Availability » Runs non-stop with automatic redundancy, failover and recovery, optimized to deliver superior query performance as well.
o Optimizer, Execution Engine & Workload Management » Get maximum performance without worrying about the details of how it gets done; users just ask questions and get answers, fast.
o Native BI, ETL, & Hadoop/MapReduce Integration » Seamless integration with a robust and ever-growing ecosystem of analytics solutions.

2.3.2 Teradata Aster
To gain business insight using MapReduce and Apache Hadoop with SQL-based analytics, below is a summary of a unified big data architecture that blends the best of Hadoop and SQL, allowing users to:
• Capture and refine data from a wide variety of sources
• Perform necessary multi-structured data preprocessing
• Develop rapid analytics
• Process embedded analytics, analyzing both relational and non-relational data
• Produce semi-structured data as output, often with metadata and heuristic analysis
• Solve new analytical workloads with reduced time to insight
• Use massively parallel storage in Hadoop to efficiently store and retain data
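The "Columnar Storage & Execution" feature listed among the Vertica capabilities above is what lets columnar appliances scan only the columns a query touches. A toy illustration of the idea in plain Python (not Vertica's implementation, and the data is made up):

```python
# Row layout: even a single-column aggregate touches every field of every record.
rows = [
    {"txn_id": 1, "store": "A", "amount": 24.99},
    {"txn_id": 2, "store": "B", "amount": 5.50},
    {"txn_id": 3, "store": "A", "amount": 12.00},
]
total_row_layout = sum(r["amount"] for r in rows)

# Column layout: each column is stored (and compressed) separately, so an
# aggregate over "amount" reads only that one column.
columns = {
    "txn_id": [1, 2, 3],
    "store":  ["A", "B", "A"],
    "amount": [24.99, 5.50, 12.00],
}
total_column_layout = sum(columns["amount"])

assert total_row_layout == total_column_layout
```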
Figure 4 - Teradata Unified Big Data Architecture for the Enterprise

When to choose which solution (Teradata, Aster or Hadoop)?
The figure below offers a framework to help enterprise architects most effectively use each part of a unified big data architecture. This framework allows a best-of-breed approach that can be applied to each schema type, helping achieve maximum performance, rapid enterprise adoption, and the lowest TCO.

Figure 5 - Framework for Choosing the Teradata Aster Solution
3.0 Domain-Wise Challenges in the Big Data Era

3.1 Log Management
Log data does not fall into the convenient schemas required by relational databases. Log data is, at its core, unstructured, or in fact semi-structured, which leads to a deafening cacophony of formats; the sheer variety in which logs are being generated presents a major problem in how they are analyzed. The emergence of Big Data has been driven not only by the increasing amount of unstructured data to be processed in near real time, but also by the availability of new toolsets to deal with these challenges.
There are two things that don't receive enough attention in the log management space. The first is real scalability, which means thinking beyond what data centers can do. That inevitably leads to ambient cloud models for log management. Splunk has done an amazing job of pioneering an ambient cloud model with the way it created an eventual consistency model that allows you to make a query and get a "good enough" answer quickly, or a perfect answer in more time.
The second thing is security. Log data is next to useless if it is not non-repudiable. Basically, all the log data in the world is not useful as evidence unless you can prove that nobody changed it.
Sumo Data, Loggly and Splunk are the primary companies that currently have products around log management.

3.2 Data Integrity & Reliability in the Big Data Era
Consider standard business practices and how nearly all physical forms of documentation and transactions have evolved into digitized versions; with them come the inherent challenges of validating not just the authenticity of their contents but also the impact of acting upon an invalid data set – something which is highly possible in today's high-velocity, big data business environment. With this view, we can begin to identify the scale of the challenge. With cybercrime and insider threats clearly emerging as a much more profitable (and safer) business for the criminal element, the need to validate and verify is going to become critical to all business documentation and related transactions, even within existing supply chains.
Keyless signature technology is a relatively new concept in the market and will require a different set of perspectives when put under consideration. A keyless signature provides an alternative to key-based technologies by providing proof and non-repudiation of electronic data using only hash functions for verification. Keyless signatures are implemented via a globally distributed machine, taking hash values of data as inputs and returning keyless signatures that prove the time, integrity, and origin (machine, organization, individual) of the input data.
A primary goal of keyless signature technology is to provide mass-scale, non-expiring data validation while eliminating the need for secrets or other forms of trust, thereby reducing or even eliminating the need for more complex certificate-based solutions, which are ripe with certificate management issues, including expiration and revocation.
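Keyless signatures build on ordinary cryptographic hashes. The underlying point -- that log or business data is only useful as evidence if any change to it is detectable -- can be illustrated with a minimal hash-verification sketch using Python's hashlib. A real keyless-signature service additionally anchors these hashes in a distributed, timestamped structure, which is not shown here; the file name is a placeholder.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream a (possibly large) file and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At collection time: record the digest alongside (or away from) the log.
recorded = sha256_of_file("app-2012-08-28.log")   # placeholder path

# At audit time: a change to even one byte yields a different digest.
if sha256_of_file("app-2012-08-28.log") != recorded:
    raise RuntimeError("log file has been altered since the digest was recorded")
```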
As more organizations are affected by the Big Data phenomenon, the clear implication is that many businesses will potentially be making business decisions based on massive amounts of internal and third-party data. Consequently, the demand for novel and trusted approaches to validating data will grow. Extend this concept to the ability to validate a virtual machine, switch logs or indeed the security logs, and then multiply by the clear advantages that cloud computing (public or private) has over the traditional datacenter design, and we begin to understand why keyless data integrity technology that can ensure self-validating data is likely to experience swift adoption.
The ability to move away from reliance on a third-party certification authority will be welcomed by many, although this move away from the traditionally accepted approach to verifying data integrity needs to be more fully broadcast and understood for mass-market adoption and acceptance.
Another solution for monitoring the stability, performance and security of a big data environment comes from a company called Gazzang. Enterprises and SaaS solution providers have new needs that are driven by the new infrastructures and opportunities of cloud computing. For example, business intelligence analysis uses big data stores such as MongoDB, Hadoop and Cassandra. The data is spread across hundreds or thousands of servers in order to optimize processing time and return business insight to the user. Leveraging its extensive experience with cloud architectures and big data platforms, Gazzang delivers a SaaS solution for the capture, management and analysis of massive volumes of IT data. Gazzang zOps is purpose-built for monitoring big data platforms and multiple cloud environments. Its engine collects and correlates vast amounts of data from numerous sources in a variety of forms.

3.3 Backup Management in the Big Data Era
For protection against user or application error, Ashar Baig, a senior analyst and consultant with the Taneja Group, said snapshots can help with big data backups. Baig also recommends a local disk-based system for quick and simple first-level data-recovery problems. "Look for a solution that provides you an option for local copies of data so that you can do local restores, which are much faster," he said. "Having a local copy, and having an image-based technology to do fast, image-based snaps and replications, does speed it up and takes care of the performance concern."

Faster scanning needed
One of the issues big data backup systems face is scanning each time the backup and archiving solutions start their jobs. Legacy data protection systems scan the file system each time a backup job is run, and each time an archiving job is run. For file systems in big data environments, this can be time-consuming.
Commvault's solution to the scanning issue in its Simpana data protection software is its OnePass feature. According to Commvault, OnePass is an object-level converged process for collecting backup, archiving and reporting data. The data is collected and moved off the primary system to a ContentStore virtual repository for completing the data protection operations. Once a complete scan has been accomplished, the Commvault software places an agent on the file system to report on incremental backups, making the process even more efficient.
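The scanning problem described above -- and the gain from change-only passes such as Commvault's OnePass -- can be sketched in a few lines: after one full walk of the file system, later passes pick up only files whose modification time is newer than the previous run. This is illustrative logic only, not Commvault's implementation, and the directory path is a placeholder.

```python
import os
import time

def changed_since(root: str, last_run: float):
    """Yield files under root whose mtime is newer than the previous pass."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_run:
                yield path

root = "/data"                                   # placeholder path
full_pass = list(changed_since(root, 0.0))       # first pass: everything qualifies
checkpoint = time.time()
print(f"full pass: {len(full_pass)} files")

# Later passes only scan for and move the delta, not the whole file system.
incremental = list(changed_since(root, checkpoint))
print(f"incremental pass: {len(incremental)} changed files")
```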
Casino doesn't want to gamble on backups
Pechanga Resort & Casino in Temecula, Calif., went live with a cluster of 50 EMC Isilon X200 nodes in February to back up data from its surveillance cameras. The casino has 1.4 PB of usable Isilon storage to keep the data, which is critical to operations because the casino must shut down all gaming operations if its surveillance system is interrupted.
"In gaming, we're mandated to have surveillance coverage," said Michael Grimsley, director of systems for Pechanga Technology Solutions Group. "If surveillance is down, all gaming has to stop."
If a security incident occurs, the IT team pulls footage from the X200 nodes, moves it to WORM-compliant storage and backs it up with NetWorker software to EMC Data Domain DD860 deduplication target appliances. The casino doesn't need tape for WORM capability because WORM is part of Isilon's SmartLock software.
"It's mandatory that part of our storage includes a WORM-compliant section," Grimsley said. "Any time an incident happens, we put that footage in the vault. We have policies in place so it's not deleted." The casino keeps 21 days' worth of video on Isilon before recording over the video.
Grimsley said he is looking to expand the backup for the surveillance camera data. He's considering adding a bigger Data Domain device to do day-to-day backup of the data. "We have no requirements for day-to-day backup, but it's something we would like to do," he said. Another possibility is adding replication to a DR site so the casino can recover quickly if the surveillance system goes down.

Scale-out systems
Another option for solving the performance and capacity issues is a scale-out backup system, similar to scale-out NAS but built for data protection. You add nodes with additional performance and capacity resources as the amount of protected data grows.
"Any backup architecture, especially for the big data world, has to balance the performance and the capacity properly," said Jeff Tofano, Sepaton Inc.'s chief technology officer. "Otherwise, at the end of the day, it's not a good solution for the customer and is a more expensive solution than it should be."
Sepaton's S2100-ES2 modular virtual tape library (VTL) was built for data-intensive large enterprises. According to the company, its 64-bit processor nodes back up data at up to 43.2 TB per hour, regardless of the data type, and can store up to 1.6 PB. You can add up to eight performance nodes per cluster as your needs require, and add disk shelves to add capacity.
3.4 Database Management in the Big Data Era
There are currently three trends in the industry:
• the NoSQL databases, designed to meet the scalability requirements of distributed architectures and/or schemaless data management requirements;
• the NewSQL databases, designed to meet the requirements of distributed architectures or to improve performance such that horizontal scalability is no longer needed;
• the data grid/cache products, designed to store data in memory to increase application and database performance.
The comparison below assesses the drivers behind the development and adoption of NoSQL and NewSQL databases, as well as data grid/caching technologies.

NoSQL
o New breed of non-relational database products
o Rejection of fixed table schemas and join operations
o Designed to meet the scalability requirements of distributed architectures, and/or schemaless data management requirements
o Big tables – data mapped by row key, column key and time stamp
o Key-value stores – store keys and associated values
o Document stores – store all data as a single document
o Graph databases – use nodes and edges to store data and the relationships between entries

NewSQL
o New breed of relational database products
o Retain SQL and ACID
o Designed to meet the scalability requirements of distributed architectures, or to improve performance so that horizontal scalability is no longer a necessity
o MySQL storage engines -> scale-up and scale-out
o Transparent sharding – reduces the manual effort required to scale
o Appliances – take advantage of improved hardware performance and solid-state drives
o New databases – designed specifically for scale-out

... and beyond: data grid/cache
o In-memory data grid/cache products
o Potential primary platform for distributed data management
o A spectrum of data management capabilities, from non-persistent data caching to persistent caching, replication, and distributed data and compute grids
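Among the NoSQL families in the comparison above, document stores are the easiest to show in a few lines. A minimal sketch with pymongo against MongoDB -- the connection string, database, collection and field names are illustrative placeholders -- showing that records of different shapes live in the same collection with no declared schema:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
events = client["bigdata_demo"]["events"]

# Schemaless: documents in one collection need not share the same fields.
events.insert_one({"user": "u42", "action": "view", "item": "sku-991"})
events.insert_one({"user": "u7", "action": "purchase", "item": "sku-991",
                   "amount": 24.99, "coupon": "SUMMER"})

# Query by example; no table schema or join was ever declared.
for doc in events.find({"item": "sku-991"}):
    print(doc["user"], doc["action"])
```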
ComputerWorld's Tam Harbert explored the skills and needs organizations are searching for in the quest to manage the Big Data challenge, and identified five job titles emerging in the Big Data world. Along with Harbert's findings, here are seven new types of jobs being created by Big Data:
1. Data scientists: This emerging role is taking the lead in processing raw data and determining what types of analysis would deliver the best results.
2. Data architects: Organizations managing Big Data need professionals who can build a data model and plan out a roadmap of how and when various data sources and analytical tools will come online, and how they will all fit together.
3. Data visualizers: These days, a lot of decision-makers rely on information that is presented to them in a highly visual format — either on dashboards with colorful alerts and dials, or in quick-to-understand charts and graphs. Organizations need professionals who can harness the data and put it in context, in layman's language, exploring what the data means and how it will impact the company.
4. Data change agents: Every forward-thinking organization needs change agents — usually an informal role — who can evangelize and marshal the necessary resources for new innovation and ways of doing business. Harbert predicts that data change agent may become a more formal job title in the years to come, driving changes in internal operations and processes based on data analytics. They need to be good communicators, and a Six Sigma background — meaning they know how to apply statistics to improve quality on a continuous basis — also helps.
5. Data engineers/operators: These are the people who make the Big Data infrastructure hum on a day-to-day basis. They develop the architecture that helps analyze and supply data in the way the business needs, and make sure systems are performing smoothly, says Harbert.
6. Data stewards: Not mentioned in Harbert's list, but essential to any analytics-driven organization, is the emerging role of data steward. Every bit and byte of data across the enterprise should be owned by someone — ideally, a line of business. Data stewards ensure that data sources are properly accounted for, and may also maintain a centralized repository as part of a Master Data Management approach, in which there is one gold copy of enterprise data to be referenced.
7. Data virtualization/cloud specialists: Databases themselves are no longer as unique as they used to be. What matters now is the ability to build and maintain a virtualized data service layer that can draw data from any source and make it available across organizations in a consistent, easy-to-access manner. Sometimes this is called Database-as-a-Service. No matter what it's called, organizations need professionals who can build and support these virtualized layers or clouds.
These insights help visualize what future world-class organizations will need in order to manage their data.
4.0 Big Data Use Cases

4.1 Potential Use Cases
The key to exploiting Big Data analytics is focusing on a compelling business opportunity as defined by a use case — WHAT exactly are we trying to do, and WHAT value is there in proving a hypothesis? Use cases are emerging in a variety of industries that illustrate different core competencies around analytics. The figure below illustrates some use cases along two dimensions: data velocity and variety. A use case provides a context for a value chain: how to move from Raw Data -> Aggregated Data -> Intelligence -> Insights -> Decisions -> Operational Impact -> Financial Outcomes -> Value Creation.
Source: SAS & IDC

Figure 6 - Potential Use Cases for Big Data

Insurance — Individualize auto-insurance policies based on newly captured vehicle telemetry data. The insurer gains insight into customers' driving habits, delivering: (1) more accurate assessments of risk; (2) individualized pricing based on actual individual customer driving habits; (3) the ability to influence and motivate individual customers to improve their driving habits.
Travel — Optimize the buying experience through web log and social media data analysis: (1) the travel site gains insight into customer preferences and desires; (2) up-sell products by correlating current sales with subsequent browsing behavior, and increase browse-to-buy conversions via customized offers and packages; (3) deliver personalized travel recommendations based on social media data.

Gaming — Collect gaming data to optimize spend within and across games: (1) the games company gains insight into the likes, dislikes and relationships of its users; (2) enhance games to drive customer spend within games; (3) recommend other content based on analysis of player connections and similar likes, and create special offers or packages based on browsing and (non-)buying behaviour.

Figure 7 - Big Data Analytics Business Model

E-tailing / E-Commerce / Online Retailing
• Recommendation engines — increase average order size by recommending complementary products based on predictive analysis for cross-selling.
• Cross-channel analytics — sales attribution, average order value, lifetime value (e.g., how many in-store purchases resulted from a particular recommendation, advertisement or promotion).
• Event analytics — what series of steps (the golden path) led to a desired outcome (e.g., purchase, registration).

Retail / Consumer Products
• Merchandizing and market basket analysis.
• Campaign management and customer loyalty programs — marketing departments across industries have long used technology to monitor and determine the effectiveness of marketing campaigns. Big Data allows marketing teams to incorporate higher volumes of increasingly granular data, like clickstream data and call detail records, to increase the accuracy of analysis.
• Supply-chain management and analytics.
• Event- and behavior-based targeting.
• Market and consumer segmentations.

Financial Services
• Compliance and regulatory reporting.
• Risk modelling and management — financial firms, banks and others use Hadoop and next-generation data warehouses to analyze large volumes of transactional data to determine the risk and exposure of financial assets, to prepare for potential what-if scenarios based on simulated market behavior, and to score potential clients for risk.
• Fraud detection and security analytics — credit card companies, for example, use Big Data technologies to identify transactional behavior that indicates a high likelihood of a stolen card.
• CRM and customer loyalty programs.
• Credit risk, scoring and analysis.
• High-speed arbitrage trading.
• Trade surveillance.
• Abnormal trading pattern analysis.

Web & Digital Media Services
• Large-scale clickstream analytics.
• Ad targeting, analysis, forecasting and optimization.
• Abuse and click-fraud prevention.
• Social graph analysis and profile segmentation — in conjunction with Hadoop and often next-generation data warehousing, social networking data is mined to determine which customers pose the most influence over others inside social networks. This helps enterprises determine their most important customers, who are not always those that buy the most products or spend the most, but those that tend to influence the buying behavior of others the most.
• Campaign management and loyalty programs.

Government
• Fraud detection and cybersecurity.
• Compliance and regulatory analysis.
• Energy consumption and carbon footprint management.

New Applications
• Sentiment analytics — used in conjunction with Hadoop, advanced text analytics tools analyze the unstructured text of social media and social networking posts, including tweets and Facebook posts, to determine the user sentiment related to particular companies, brands or products.
• Mashups — mobile user location + precision targeting.
• Machine-generated data, the exhaust fumes of the Web.

Health & Life Sciences
• Health insurance fraud detection.
• Campaign and sales program optimization.
• Brand management.
• Patient care quality and program analysis.
• Supply-chain management.
• Drug discovery and development analysis.

Telecommunications
• Revenue assurance and price optimization.
• Customer churn analysis — enterprises use Hadoop and Big Data technologies to analyze customer behavior data and identify patterns that indicate which customers are most likely to leave for a competing vendor or service.
• Campaign management and customer loyalty.
• Call Detail Record (CDR) analysis.
• Network performance and optimization.
• Mobile user location analysis.

Smart meters in the utilities industry — the rollout of smart meters as part of SmartGrid adoption by utilities everywhere has resulted in a deluge of data flowing at unprecedented levels. Most utilities are ill-prepared to analyze the data once the meters are turned on.

4.2 Big Data Actual Use Cases
The graphic below shows the results of a survey undertaken by InformationWeek indicating the percentage of respondents who would opt for open source solutions to manage Big Data.
Figure 8 - Survey Results: Use of Open Source to Manage Big Data

• Interesting use case — Amazon will pay shoppers $5 to walk out of stores empty-handed
An interesting use of consumer data entry to power next-generation retail price competition: Amazon is offering consumers up to $5 off on purchases if they compare prices using its mobile phone application in a store. The promotion serves as a way for Amazon to increase usage of its bar-code-scanning application, while also collecting intelligence on prices in the stores.
Amazon's Price Check app, which is available for iPhone and Android, allows shoppers to scan a bar code, take a picture of an item or conduct a text search to find the lowest prices. Amazon is also asking consumers to submit the prices of items with the app, so Amazon knows if it is still offering the best prices — a great way to feed data into its learning engine from brick-and-mortar retailers.
This is a trend that should terrify brick-and-mortar retailers. While real-time Everyday Low Price information empowers consumers, it leaves retailers increasingly feeling like showrooms — shoppers come in to check out the merchandise but ultimately decide to walk out and buy online instead.

• Smart meters
a. Because of smart meters, electricity providers can read the meter once every 15 minutes rather than once a month. This not only eliminates the need to send someone out for meter reading, but since the meter is read every fifteen minutes, electricity can be priced differently for peak and off-peak hours. Pricing can be used to shape the demand curve during peak hours, eliminating the need to create additional generating capacity just to meet peak demand, and saving electricity providers millions of dollars of investment in generating capacity and plant maintenance costs.
b. Consider a smart electric meter in a residence in Texas: one of the electricity providers in the area (TXU Energy) is using smart meter technology to shape the demand curve by offering "Free Night-time Energy Charges — All Night. Every Night. All Year Long." In fact, they promote the service as "Do your laundry or run the dishwasher at night, and pay nothing for your energy charges." What TXU Energy is trying to do here is re-shape energy demand using pricing so as to manage peak-time demand, resulting in savings for both TXU and its customers. This wouldn't have been possible without smart electric meters.

• T-Mobile USA has integrated Big Data across multiple IT systems to combine customer transaction and interaction data in order to better predict customer defections. By leveraging social media data (Big Data) along with transaction data from CRM and billing systems, T-Mobile USA has been able to cut customer defections in half in a single quarter.

• US Xpress, provider of a wide variety of transportation solutions, collects about a thousand data elements ranging from fuel usage to tire condition to truck engine operations to GPS information, and uses this data for optimal fleet management and to drive productivity, saving millions of dollars in operating costs.

• McLaren's Formula One racing team uses real-time car sensor data during races, identifies issues with its racing cars using predictive analytics and takes corrective action proactively, before it's too late. (For more on the T-Mobile USA, US Xpress and McLaren F1 case studies, refer to this article on FT.com.)

• How Morgan Stanley uses Hadoop
Gary Bhattacharjee, executive director of enterprise information management at the firm, had worked with Hadoop as early as 2008 and thought it might provide a solution, so the IT department hooked up some old servers. At the Fountainhead conference on Hadoop in Finance in New York, Bhattacharjee said the investment bank started by stringing together 15 end-of-life boxes: "It allowed us to bring really cheap infrastructure into a framework and install Hadoop and let it run."
One area that Bhattacharjee would talk about was IT and log analysis. A typical approach would be to look at web logs and database logs to see problems, but one log wouldn't show whether a web delay was caused by a database problem. "We dumped every log we could get, including web and all the different database logs, put them into Hadoop and ran time-based correlations." Now they can see market events and how they correlate with web issues and database read-write problems.
• Big Data at Ford
With analytics now embedded in the culture of Ford, the rise of Big Data analytics has created a whole host of new possibilities for the automaker. "We recognize that the volumes of data we generate internally -- from our business operations and also from our vehicle research activities, as well as the universe of data that our customers live in and that exists on the Internet -- all of those things are huge opportunities for us that will likely require some new specialized techniques or platforms to manage," said Ginder. "Our research organization is experimenting with Hadoop and we're trying to combine all of these various data sources that we have access to. We think the sky is the limit. We recognize that we're just kind of scraping the tip of the iceberg here."
The other major asset that Ford has going for it when it comes to Big Data is that the company is tracking enormous amounts of useful data in both the product development process and the products themselves. Ginder noted, "Our manufacturing sites are all very well instrumented. Our vehicles are very well instrumented. They're closed-loop control systems. There are many, many sensors in each vehicle… Until now, most of that information was [just] in the vehicle, but we think there's an opportunity to grab that data and understand better how the car operates and how consumers use the vehicles, and feed that information back into our design process and help optimize the user's experience in the future as well."
Of course, Big Data is about a lot more than just harnessing all of the runaway data sources that most companies are trying to grapple with. It's about structured data plus unstructured data. Structured data is all the traditional material most companies have in their databases (as well as what Ford is describing with sensors in its vehicles and assembly lines). Unstructured data is what's now freely available across the Internet, from public data being exposed by governments on sites such as data.gov in the U.S. to treasure troves of consumer intelligence such as Twitter. Mixing the two and coming up with new analysis is what Big Data is all about.
"The fundamental assumption of Big Data is the amount of that data is only going to grow, and there's an opportunity for us to combine that external data with our own internal data in new ways," said Ginder. "For better forecasting or for better insights into product design, there are many, many opportunities."
Ford is also digging into the consumer intelligence aspect of unstructured data. Ginder said, "We recognize that the data on the Internet is potentially insightful for understanding what our customers or our potential customers are looking for [and] what their attitudes are, so we do some sentiment analysis around blog posts, comments, and other types of content on the Internet."
That kind of thing is pretty common, and a lot of Fortune 500 companies are doing similar things. However, there's another way that Ford is using unstructured data from the Web that is a little more unique, and it has impacted the way the company predicts future sales of its vehicles.
"We use Google Trends, which measures the popularity of search terms, to help inform our own internal sales forecasts," Ginder explained. "Along with other internal data we have, we use that to build a better forecast. It's one of the inputs for our sales forecast. In the past, it would just be what we sold last week. Now it's what we sold last week plus the popularity of the search terms... Again, I think we're just scratching the surface. There's a lot more I think we'll be doing in the future."
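Ford's idea of folding search popularity into its forecasts can be sketched as a simple regression: last period's sales plus a search-interest index as predictors of next period's sales. The numbers and the scikit-learn model below are purely illustrative, not Ford's actual model or data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative history: [units sold last period, 0-100 search-interest index]
# (the kind of series Google Trends returns), predicting this period's sales.
X = np.array([
    [1200, 55],
    [1260, 58],
    [1180, 50],
    [1400, 67],
    [1350, 63],
])
y = np.array([1260, 1180, 1400, 1350, 1420])

model = LinearRegression().fit(X, y)

# Forecast the next period from the latest observation.
next_period = model.predict(np.array([[1420, 70]]))
print(f"forecast: {next_period[0]:.0f} units")
```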
Figure 9 - Big Data Value Potential Index

The computer and electronic products and information sectors (Cluster A), traded globally, stand out as sectors that have already been experiencing very strong productivity growth and that are poised to gain substantially from the use of big data.
Two services sectors (Cluster B) — finance and insurance, and government — are positioned to benefit very strongly from big data as long as barriers to its use can be overcome.
Several sectors (Cluster C) have experienced negative productivity growth, probably indicating that these sectors face strong systemic barriers to increasing productivity. Among the remaining sectors, globally traded sectors (mostly Cluster D) tend to have experienced higher historical productivity growth, while local services (mainly Cluster E) have experienced lower growth.
While all sectors will have to overcome barriers to capture value from the use of big data, barriers are structurally higher for some than for others (Exhibit 3). For example, the public sector, including education, faces higher hurdles because of a lack of a data-driven mindset and available data. Capturing value in health care faces challenges given the relatively low IT investment made so far. Sectors such as retail, manufacturing, and professional services may have relatively lower barriers to overcome, for precisely the opposite reasons.
Bibliography

John Webster – "Understanding Big Data Analytics", Aug 2011, searchstorage.techtarget.com
Bill Franks – "What's Up With In-Memory Analytics?", May 7, 2012, iianalytics.com
Pankaj Maru – "Data scientist: The new kid on the IT block!", Sep 3, 2012, CIOL.com
Yellowfin white paper – "In-Memory Analytics", www.yellowfin.bi
"Morgan Stanley Takes On Big Data With Hadoop", March 30, 2012, Forbes.com
Ravi Kalakota – "New Tools for New Times – Primer on Big Data, Hadoop and 'In-memory' Data Clouds", May 15, 2011, practicalanalytics.wordpress.com
McKinsey Global Institute – "Big data: The next frontier for innovation, competition, and productivity", June 2011
Harish Kotadia – "4 Excellent Big Data Case Studies", July 2012, hkotadia.com
Jeff Kelly – "Big Data: Hadoop, Business Analytics and Beyond", Aug 27, 2012, Wikibon.org
Joe McKendrick – "7 new types of jobs created by Big Data", Sep 20, 2012, Smartplanet.com
Jean-Jacques Dubray – "NoSQL, NewSQL and Beyond", Apr 19, 2011, InfoQ.com