Lecture 1: Introduction to Big Data
2. Syllabus
Prerequisite: Database Management System (CS-2004)

Introduction to Big Data: Importance of Data, Characteristics of Data, Analysis of Unstructured Data, Combining Structured and Unstructured Sources. Introduction to the Big Data Platform – Challenges of conventional systems – Web data – Evolution of analytic scalability, analytic processes and tools, Analysis vs reporting – Modern data analytic tools, Types of Data, Elements of Big Data, Big Data Analytics, Data Analytics Lifecycle. Exploring the Use of Big Data in a Business Context, Use of Big Data in Social Networking, Business Intelligence, Product Design and Development.

Data analysis: Exploring Basic Features of R, Programming Features, Packages, Exploring RStudio, Handling Basic Expressions in R, Basic Arithmetic in R, Mathematical Operators, Calling Functions in R, Working with Vectors, Creating and Using Objects, Handling Data in the R Workspace, Creating Plots, Using Built-in Datasets in R, Reading Datasets and Exporting Data from R, Manipulating and Processing Data in R (see the short R sketch after this syllabus). Statistical Features – Analysis of time series: linear systems analysis, nonlinear dynamics – Rule induction – Neural networks: learning and generalization, competitive learning, principal component analysis and neural networks.

Big data technology foundations & mining data streams: Exploring the Big Data Stack, Data Sources Layer, Ingestion Layer, Storage Layer, Physical Infrastructure Layer, Platform Management Layer, Security Layer, Monitoring Layer, Analytics Engine, Visualization Layer, Big Data Applications, Virtualization. Introduction to Streams Concepts – Stream data model and architecture – Stream Computing, Sampling data in a stream – Filtering streams, Counting distinct elements in a stream.

Frequent itemsets and clustering: Mining Frequent Itemsets – Market-basket model – Apriori Algorithm – Handling large data sets in main memory – Limited-Pass Algorithms – Counting frequent itemsets in a stream – Clustering Techniques – Hierarchical – K-Means. Analytical Approaches and Tools to Analyze Data: Text Data Analysis, Graphical User Interfaces, Point Solutions.

Frameworks and visualization: Distributed and Parallel Computing for Big Data, MapReduce – Hadoop, HDFS, Hive, MapR – Hadoop-YARN – Pig and Pig Latin, Jaql – ZooKeeper – HBase, Cassandra – Oozie, Lucene – Avro, Mahout. Hadoop Distributed File System. Visualizations – Visual data analysis techniques, interaction techniques; Systems and applications.

Text Books:
1. Big Data, Black Book, DT Editorial Services, Dreamtech Press, 2015
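The data-analysis unit above leans on base R. Here is a minimal sketch of those basics, assuming only a stock R installation (mtcars is one of R's built-in datasets; the CSV file name is arbitrary):

```r
# Vectors, arithmetic, and function calls (base R).
x <- c(12, 7, 3, 4.2, 18)    # create a numeric vector
x * 2                        # vectorized arithmetic
mean(x)                      # calling built-in functions
sd(x)

# Built-in datasets and plots.
data(mtcars)                 # mtcars ships with base R
head(mtcars)                 # inspect the first rows
plot(mtcars$wt, mtcars$mpg,  # a simple scatter plot
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon")

# Exporting data from R and reading it back.
write.csv(mtcars, "mtcars.csv")   # arbitrary file name
df <- read.csv("mtcars.csv")
```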
3. Course Outcome
• At the end of the course, the students will be able to:
– CO1. Identify the need for big data analytics in a domain
– CO2. Perform analysis of data using the R tool
– CO3. Use the Hadoop and MapReduce framework
– CO4. Apply big data to a given problem
– CO5. Suggest areas where big data can be applied to increase business outcomes
– CO6. Contextually integrate and correlate large amounts of information automatically to gain faster insights
4. What's Big Data?
No single definition; here is one from Wikipedia:
Big data is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The challenges include capture, curation, storage, search, sharing, transfer, analysis, and visualization.
The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to "spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions."
6. Harnessing Big Data
• OLTP: Online Transaction Processing (DBMSs)
• OLAP: Online Analytical Processing (Data Warehousing)
• RTAP: Real-Time Analytics Processing (Big Data Architecture & Technology)
7. The Model Has Changed…
• The model of generating and consuming data has changed:
– Old model: a few companies generate data; everyone else consumes it.
– New model: all of us generate data, and all of us consume it.
8. What's Driving Big Data Analytics
• From traditional analytics:
– Ad-hoc querying and reporting
– Data mining techniques
– Structured data from typical sources
– Small to mid-size datasets
• Towards big data analytics:
– Optimizations and predictive analytics
– Complex statistical analysis
– All types of data, from many sources
– Very large datasets
– More real-time processing
9. Structuring Big Data
• In simple terms, structuring is arranging the available data in a format that makes it easy to study, analyze, and derive conclusions from.
• Why is structuring required? In daily life, you may have come across questions like:
– How do I use to my advantage the vast amount of data and information I come across?
– Which news articles should I read of the thousands I come across?
– How do I choose a book of the millions available on my favourite sites or stores?
– How do I keep myself updated about new events, sports, inventions, and discoveries taking place across the globe?
Today, answers to such questions can be found using information processing systems.
10. Types of Data
• Data comes from multiple sources, such as databases, ERP systems, weblogs, chat history, and GPS maps, and so varies in format. Primarily, data is obtained from the following types of sources:
• Internal sources: organisational or enterprise data
– CRM, ERP, OLTP, products and sales data, … (structured data)
• External sources: social data
– Business partners, the Internet, government, data suppliers, … (unstructured or unorganised data)
11. Types of Data (cont..)
• On the basis of the data received from the sources mentioned, big data comprises:
– Structured data
– Unstructured data
– Semi-structured data
BIG DATA = Structured Data + Unstructured Data + Semi-structured Data
12. Structured Data
• Structured data can be defined as data that has a defined repeating pattern.
• This pattern makes it easier for any program to sort, read, and process the data.
• Processing structured data is much faster and easier than processing data without any specific repeating pattern.
13. Structured Data (cont..)
• Is organised data in a prescribed format.
• Is stored in tabular form.
• Is data that resides in fixed fields within a record or file.
• Is formatted data in which entities and their attributes are properly mapped.
• Is used to query and report against predetermined data types.
• Sources: DBMS/RDBMS, flat files, multidimensional databases, legacy databases.
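To make the fixed-field idea concrete, here is a minimal R sketch with hypothetical order records; because every record shares the same fields, sorting and querying need no custom parsing:

```r
# Hypothetical structured records: every row has the same fixed fields.
orders <- data.frame(
  order_id = c(101, 102, 103),
  customer = c("Asha", "Ben", "Chen"),
  amount   = c(250.0, 99.5, 410.0)
)

orders[order(-orders$amount), ]   # sort by amount, descending
subset(orders, amount > 100)      # query against a fixed field
```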
15. Unstructured Data
• Is a set of data that might or might not have any logical or repeating patterns.
• Typically includes metadata, i.e. additional information related to the data.
• Inconsistent data (files, social media websites, satellites, etc.).
• Data in different formats (e-mails, text, audio, video, or images).
• Sources: social media, mobile data, and text both internal and external to an organization.
20. Semi-Structured Data
• Having a schema-less or self-describing structure, semi-structured data refers to a form of structured data that contains tags or markup elements in order to separate elements and generate hierarchies of records and fields in the given data.
• In other words, the data is stored inconsistently in the rows and columns of a database.
• Sources: file systems such as Web data in the form of cookies, data exchange formats, …
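A small illustration of that self-describing structure, using a hypothetical JSON fragment; this assumes the CRAN package jsonlite is installed (install.packages("jsonlite")):

```r
library(jsonlite)

# The tags name the fields, and records need not share the same fields.
txt <- '[
  {"user": "asha", "age": 31, "tags": ["r", "hadoop"]},
  {"user": "ben",  "city": "Pune"}
]'

records <- fromJSON(txt)  # parses into a data frame, filling
records                   # missing fields with NA
```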
24. The Evolution of Business Intelligence
• 1990's – BI Reporting: OLAP & data warehouses (Business Objects, SAS, Informatica, Cognos, and other SQL reporting tools).
• 2000's – Interactive Business Intelligence & in-memory RDBMS (QlikView, Tableau, HANA).
• 2010's – Big Data: batch processing & distributed data stores (Hadoop/Spark; HBase/Cassandra), and real-time, single-view systems (graph databases).
Each generation pushed further along the axes of speed and scale.
27. 3V's of Big Data: Architectural Paradigms
(Figure: IBM Big Data characteristics – the 3 V's; adapted from Zikopoulos and Eaton, 2011.)
28. Volume (Scale)
• Data volume is increasing exponentially:
– a 44x increase from 2009 to 2020
– from 0.8 zettabytes to 35 ZB
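A one-line sanity check of the quoted growth figure:

```r
35 / 0.8   # = 43.75, i.e. roughly the 44x increase from 0.8 ZB to 35 ZB
```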
29. (Figure: the scale of daily data generation.)
• 12+ TB of tweet data every day
• 25+ TB of log data every day
• ? TB of data every day
• 2+ billion people on the Web by end of 2011
• 30 billion RFID tags today (1.3B in 2005)
• 4.6 billion camera phones worldwide
• 100s of millions of GPS-enabled devices sold annually
• 76 million smart meters in 2009, 200M by 2014
31. The EarthScope
• The EarthScope is the world's largest science project. Designed to track North America's geological evolution, this observatory records data over 3.8 million square miles, amassing 67 terabytes of data. It analyzes seismic slips in the San Andreas fault, sure, but also the plume of magma underneath Yellowstone and much, much more.
(http://www.msnbc.msn.com/id/44363598/ns/technology_and_science-future_of_technology/#.TmetOdQ--uI)
32. Variety (Complexity)
• Relational data (tables / transactions / legacy data)
• Text data (Web)
• Semi-structured data (XML)
• Graph data
– social networks, Semantic Web (RDF), …
• Streaming data
– you can only scan the data once
• A single application can be generating/collecting many types of data
• Big public data (online, weather, finance, etc.)
To extract knowledge, all these types of data need to be linked together.
33. A Single View of the Customer
(Figure: a single customer record linked to social media, gaming, entertainment, banking, finance, known history, and purchase data.)
34. Velocity (Speed)
• Data is being generated fast and needs to be processed fast.
• Online data analytics.
• Late decisions mean missed opportunities.
• Examples:
– E-promotions: based on your current location and your purchase history, send promotions right now for the store next to you.
– Healthcare monitoring: sensors monitor your activities and body; any abnormal measurements require immediate reaction.
35. Real-time/Fast Data
• Social media and networks (all of us are generating data)
• Scientific instruments (collecting all sorts of data)
• Mobile devices (tracking all objects all the time)
• Sensor technology and networks (measuring all kinds of data)
Progress and innovation are no longer hindered by the ability to collect data, but by the ability to manage, analyze, summarize, visualize, and discover knowledge from the collected data in a timely manner and in a scalable fashion.
36. Real-Time Analytics/Decision Requirement
(Figure: examples of real-time decisions that influence customer behavior.)
• Product recommendations that are relevant & compelling.
• Friend invitations to join a game or activity that expands the business.
• Preventing fraud as it is occurring, and preventing more proactively.
• Learning why customers switch to competitors and their offers, in time to counter.
• Improving the marketing effectiveness of a promotion while it is still in play.
39. Value
• Value is defined as the usefulness of data for an enterprise.
• The value characteristic is intuitively related to the veracity characteristic in that the higher the data fidelity, the more value it holds for the business.
• Value also depends on how long data processing takes, because analytics results have a shelf-life; for example, a 20-minute-delayed stock quote has little to no value for making a trade compared to a quote that is 20 milliseconds old.
• Data that has high veracity and can be analyzed quickly has more value to a business.
43. 10 V's of Big Data
1. Volume
2. Variety
3. Velocity
4. Veracity
5. Value
6. Variability
7. Visualization
8. Volatility
9. Validity
10. Vulnerability
• Volatility: how old does your data need to be before it is considered irrelevant, historic, or no longer useful? How long does data need to be kept?
• Vulnerability: big data brings new security concerns. After all, a data breach with big data is a big breach.
44. Big Data Analytics
• Big data is more real-time in nature than traditional DW applications.
• Big data analytics has reformed the way business is conducted in many ways; for example, it improves decision making, business process management, etc.
• Business analytics uses data and various other techniques, such as information technology, statistics, quantitative methods, and different models, to provide results.
• Traditional DW architectures (e.g. Exadata, Teradata) are not well-suited for big data apps.
• Shared-nothing, massively parallel processing, scale-out architectures are well-suited for big data apps.
47. Types of Data Analytics
The main goal of big data analytics is to help organizations make smarter decisions for better business outcomes. With data in hand, you can begin doing analytics.
• But where do you begin?
• And which type of analytics is most appropriate for your big data environment?
Looking at all the analytic options can be a daunting task. Luckily, however, these options can be categorized at a high level into three distinct types:
• Descriptive Analytics
• Predictive Analytics
• Prescriptive Analytics
48. Descriptive Analytics - (Insight into the past)
• Descriptive analytics uses data aggregation and data mining to provide insight into the past and answer: "What has happened in the business?"
• Descriptive analysis, or statistics, does exactly what the name implies: it "describes", or summarizes, raw data and makes it something interpretable by humans.
• The past refers to any point in time at which an event occurred, whether one minute ago or one year ago.
• Descriptive analytics is useful because it allows us to learn from past behaviors and understand how they might influence future outcomes.
49. Descriptive Analytics (cont..)
• The main objective of descriptive analytics is to find the reasons behind previous success or failure.
• The vast majority of the statistics we use fall into this category.
• Common examples of descriptive analytics are reports that provide historical insights regarding the company's production, financials, operations, sales, finance, inventory, and customers.
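A minimal R sketch of descriptive analytics on hypothetical historical sales records; everything here summarizes the past, nothing is predicted:

```r
# Hypothetical historical sales records.
sales <- data.frame(
  region  = c("North", "North", "South", "South", "South"),
  month   = c("Jan", "Feb", "Jan", "Feb", "Mar"),
  revenue = c(120, 135, 80, 95, 110)
)

summary(sales$revenue)                    # describe the raw numbers
aggregate(revenue ~ region, sales, mean)  # mean revenue per region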
50. Predictive Analytics - (Understanding the future)
• Predictive analytics uses statistical models and forecasting techniques to understand the future and answer: "What could happen?"
• These analytics are about understanding the future.
• Predictive analytics provides estimates about the likelihood of a future outcome. It is important to remember that no statistical algorithm can "predict" the future with 100% certainty.
• Companies use these statistics to forecast what might happen in the future, because the foundation of predictive analytics is based on probabilities.
• These statistics try to take the data that you have and fill in the missing data with best guesses.
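A minimal predictive sketch in R on hypothetical monthly sales: fit a linear trend to past data and extrapolate; the prediction interval reflects the caveat above that nothing is predicted with 100% certainty:

```r
# Hypothetical past observations.
history <- data.frame(month = 1:6,
                      sales = c(100, 108, 115, 119, 128, 133))

model <- lm(sales ~ month, data = history)   # simple linear trend
predict(model, newdata = data.frame(month = 7:8),
        interval = "prediction")             # forecasts with uncertainty
```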
51. Predictive Analytics (cont..)
Predictive analytics can be further categorized as:
• Predictive modelling – what will happen next, if …?
• Root cause analysis – why did this actually happen?
• Data mining – identifying correlated data.
• Forecasting – what if the existing trends continue?
• Monte Carlo simulation – what could happen?
• Pattern identification and alerts – when should an action be invoked to correct a process?
Sentiment analysis is the most common kind of predictive analytics. The learning model takes input in the form of plain text, and the output of the model is a sentiment score that helps determine whether the sentiment is positive, negative, or neutral.
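For illustration only, a toy keyword-count sentiment scorer in R; the word lists are invented, and a real sentiment model would be learned from labelled text:

```r
# Invented word lists; a real model would learn these from data.
positive <- c("good", "great", "excellent", "love")
negative <- c("bad", "poor", "terrible", "hate")

sentiment <- function(text) {
  words <- tolower(unlist(strsplit(text, "[^a-zA-Z]+")))
  score <- sum(words %in% positive) - sum(words %in% negative)
  if (score > 0) "positive" else if (score < 0) "negative" else "neutral"
}

sentiment("The service was great, I love it")   # "positive"
```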
52. Prescriptive Analytics - (Advise on possible outcomes)
• Prescriptive analytics uses optimization and simulation algorithms to advise on possible outcomes and answer: "What should we do?"
• The relatively new field of prescriptive analytics allows users to "prescribe" a number of different possible actions and guides them towards a solution. In a nutshell, these analytics are all about providing advice.
• Prescriptive analytics is the next step beyond predictive analytics, adding the spice of manipulating the future.
53. Prescriptive Analytics (cont..)
• Prescriptive analytics is an advanced analytics concept based on:
– optimization, which helps achieve the best outcomes;
– stochastic optimization, which helps understand how to achieve the best outcome and identify data uncertainties to make better decisions.
• Prescriptive analytics is a combination of data, mathematical models, and various business rules. The data for prescriptive analytics can be both internal (within the organization) and external (like social media data).
• Prescriptive analytics can be used in healthcare to enhance drug development, find the right patients for clinical trials, etc.
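A minimal prescriptive sketch in R: pick the action (here, a price) that optimizes an assumed objective; the demand curve is entirely hypothetical:

```r
# Hypothetical demand model and the profit it implies.
demand <- function(price) 1000 - 40 * price
profit <- function(price) (price - 5) * demand(price)

# "Prescribe" the price that maximizes expected profit.
best <- optimize(profit, interval = c(5, 25), maximum = TRUE)
best$maximum    # the recommended price
best$objective  # expected profit at that price
```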
56. Cloud Computing
• IT resources provided as a service:
– compute, storage, databases, queues
• Clouds leverage the economies of scale of commodity hardware:
– cheap storage, high-bandwidth networks & multicore processors
– geographically distributed data centers
• Offerings from Microsoft, Amazon, Google, …
58. Benefits
• Cost & management
– economies of scale, "out-sourced" resource management
• Reduced time to deployment
– ease of assembly, works "out of the box"
• Scaling
– on-demand provisioning, co-locating data and compute
• Reliability
– massive, redundant, shared resources
• Sustainability
– hardware not owned
59. Types of Cloud Computing
• Public cloud: computing infrastructure is hosted at the vendor's premises.
• Private cloud: computing architecture is dedicated to the customer and is not shared with other organisations.
• Hybrid cloud: organisations host some critical, secure applications in private clouds, while the less critical applications are hosted in the public cloud.
– Cloud bursting: the organisation uses its own infrastructure for normal usage, but the cloud is used for peak loads.
• Community cloud
60. Classification of Cloud Computing Based on Service Provided
• Infrastructure as a Service (IaaS)
– Offering hardware-related services using the principles of cloud computing. These could include storage services (database or disk storage) or virtual servers.
– Examples: Amazon EC2, Amazon S3, Rackspace Cloud Servers, and Flexiscale.
• Platform as a Service (PaaS)
– Offering a development platform on the cloud.
– Examples: Google's App Engine, Microsoft's Azure, Salesforce.com's Force.com.
• Software as a Service (SaaS)
– A complete software offering on the cloud. Users can access a software application hosted by the cloud vendor on a pay-per-use basis. This is a well-established sector.
– Examples: Salesforce.com's offering in the online Customer Relationship Management (CRM) space, Google's Gmail and Microsoft's Hotmail, Google Docs.
63. Key Ingredients in Cloud Computing
• Service-Oriented Architecture (SOA)
• Utility computing (on demand)
• Virtualization (P2P network)
• SaaS (Software as a Service)
• PaaS (Platform as a Service)
• IaaS (Infrastructure as a Service)
• Web services in the cloud