This slide deck gives a simple, purposeful overview of popular Hadoop platforms.
From a basic definition to the importance of Hadoop in the modern era, the presentation also introduces Hadoop service providers along with Hadoop's core components.
Do go through it and comment below with your feedback. I am sure this deck will help many in presenting the basics of Hadoop for their projects or for business purposes.
This crisp summary was compiled from detailed information available on the internet as well as from research papers.
2. 11/2/2016
Introduction
Hadoop was created by Doug Cutting and Mike Cafarella in 2005 and named after a toy elephant. It was originally developed to support distribution for the Nutch search engine project.
Hadoop is an open-source software framework for storing data and running applications on clusters. It provides immense storage for any kind of data, enormous processing power, and the ability to handle a virtually limitless number of concurrent tasks.
Hadoop is a highly scalable analytics platform that can process multiple petabytes of data spread across hundreds or thousands of physical storage servers or nodes.
It provides:
- Redundant, fault-tolerant data storage
- Parallel computation framework
- Job coordination
Hadoop is a solution for managing Big Data: a framework for running data management applications on a large cluster built from commodity hardware.
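The redundant, fault-tolerant storage listed above can be pictured with a toy Python sketch. This is not Hadoop code; the block size, replication factor, and node names are illustrative assumptions (HDFS itself defaults to 128 MB blocks and a replication factor of 3):

```python
# Toy model of HDFS-style redundant storage: a file is split into fixed-size
# blocks and each block is copied to several nodes, so losing any one node
# loses no data. Sizes and node names here are illustrative only.
from itertools import cycle

BLOCK_SIZE = 8     # HDFS defaults to 128 MB; tiny here for illustration
REPLICATION = 3    # HDFS default replication factor

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Split raw bytes into fixed-size blocks (last block may be shorter)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(blocks, nodes, replication=REPLICATION):
    """Assign each block to `replication` distinct nodes, round-robin."""
    placement = {node: [] for node in nodes}   # what each node stores
    locations = {}                             # NameNode-style block map
    node_ring = cycle(nodes)
    for block_id, block in enumerate(blocks):
        targets = []
        while len(targets) < min(replication, len(nodes)):
            node = next(node_ring)
            if node not in targets:
                targets.append(node)
        for node in targets:
            placement[node].append((block_id, block))
        locations[block_id] = targets
    return placement, locations

data = b"hadoop stores big files as replicated blocks"
blocks = split_into_blocks(data)
placement, locations = place_blocks(blocks, ["node1", "node2", "node3", "node4"])

# Every block lives on 3 distinct nodes, so any single node can fail safely.
assert all(len(set(ns)) == 3 for ns in locations.values())
```

The `locations` map plays the role the NameNode plays in real HDFS: it records where every block lives, while the data itself sits on the worker nodes.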
3. Importance of Hadoop
- Ability to store and process huge amounts of any kind of data, quickly.
- Computing power: Hadoop's distributed computing model processes big data faster.
- Fault tolerance: data and application processing are protected against hardware failure. If a node goes down, jobs are automatically redirected to other nodes so that the distributed computation does not fail.
- Flexibility: both structured and unstructured data can be stored without pre-processing.
- Low cost: the open-source framework is free and uses commodity hardware to store large quantities of data.
- Scalability: nodes can be added as and when needed, and maintenance cost is very low.
http://www.sas.com/content/sascom/en_us/insights/big-data/hadoop/_jcr_content/par/styledcontainer_8bf1/par/styledcontainer_a643/par/textimage_ea05/image.img.png/1468851612191.png
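The fault-tolerance point above can be sketched in a few lines of Python. This is a deliberately naive single-process model, not Hadoop's actual scheduler: the node names and round-robin assignment are assumptions made purely for illustration:

```python
# Minimal sketch of the fault-tolerance idea: if a node fails, its tasks
# are transparently redirected to the remaining healthy nodes, so the
# overall job still completes. Scheduling here is a naive round-robin.
def run_job(tasks, nodes, failed_nodes=frozenset()):
    """Run every task on some healthy node, skipping failed nodes."""
    healthy = [n for n in nodes if n not in failed_nodes]
    if not healthy:
        raise RuntimeError("no healthy nodes left")
    results = {}
    for i, task in enumerate(tasks):
        node = healthy[i % len(healthy)]   # pick the next healthy node
        results[task] = (node, task())     # record where each task ran
    return results

tasks = [lambda x=x: x * x for x in range(6)]
# "node2" goes down; its share of work is redistributed to node1/node3.
results = run_job(tasks, ["node1", "node2", "node3"], failed_nodes={"node2"})
assert all(node != "node2" for node, _ in results.values())
```

Real Hadoop detects failures via heartbeats and re-executes only the lost tasks, but the net effect is the same: the computation survives the loss of a node.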
4. Hadoop Core Components
Hadoop is a system for large-scale data processing. It has two main components:
1. HDFS – Hadoop Distributed File System (Storage)
- Distributed across “nodes”
- Natively redundant
- NameNode tracks block locations
2. MapReduce (Processing)
- Splits a task across processors “near” the data and assembles results
- Self-healing, high bandwidth
- Clustered storage
- JobTracker manages the TaskTrackers
http://cdn.edureka.co/blog/wp-content/uploads/2014/08/hadoop1componenets.png
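The MapReduce flow named above (split the work, group intermediate results, assemble the answer) can be shown with the classic word-count example. This is a single-process Python sketch of the data flow, not distributed Hadoop code; in a real cluster the map and reduce phases run on TaskTrackers near the data:

```python
# Pure-Python sketch of the MapReduce data flow: map emits key/value
# pairs per input split, a shuffle groups them by key, and reduce
# aggregates each group into the final result.
from collections import defaultdict

def map_phase(document):
    """Mapper: emit (word, 1) for every word in one input split."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all intermediate values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reducer: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

splits = ["hadoop stores data", "hadoop processes data in parallel"]
intermediate = [pair for split in splits for pair in map_phase(split)]
word_counts = reduce_phase(shuffle(intermediate))
assert word_counts["hadoop"] == 2 and word_counts["data"] == 2
```

Each input split could be mapped on a different node and each word's group reduced on a different node, which is exactly how the framework parallelises the work.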
5. Top 5 Hadoop Platform Providers
Hadoop, a software framework which provides the necessary tools to carry out Big Data analysis, is widely used across industries. Although it is open-source and designed to be user-friendly, in its “raw” state it still needs considerable specialist knowledge to set up and run.
“Hadoop-as-a-Service” has therefore evolved in recent times: all of the installation takes place within the vendor’s own cloud, with customers paying a subscription to access the services.
The top 5 Hadoop platform providers are:
- IBM
- Amazon Web Services
- Hortonworks
- Cloudera
- MapR
https://media.licdn.com/mpr/mpr/AAEAAQAAAAAAAAclAAAAJDZmZTQwODVlLTAwZGQtNGI3Ny05OTlhLTUzMTEyYTNmMTllMg.jpg
6. 1. IBM
IBM has deep roots in the computing industry. Its BigInsights package adds its proprietary analytics and visualization algorithms to the core Hadoop infrastructure.
IBM Open Platform with Apache Hadoop offers:
- Native support for rolling upgrades of Hadoop services
- Support for long-running applications within YARN for enhanced reliability and security
- Heterogeneous storage in HDFS for in-memory and SSD media in addition to HDD
- The Spark in-memory distributed compute engine, for dramatic performance increases over MapReduce and a simplified developer experience leveraging the Java, Python and Scala languages
Apache Hadoop projects included: HDFS, YARN, MapReduce, Ambari, HBase, Hive, Oozie, Parquet, Parquet Format, Pig, Snappy, Solr, Spark, Sqoop, ZooKeeper, OpenJDK, Knox, Slider
https://www-01.ibm.com/software/in/data/images/bd-platform.jpg
7. 2. Amazon Web Services
Amazon is a frontrunner in offering Hadoop as part of its cloud services package.
Amazon Web Services (AWS) is a hosted solution integrating Hadoop with Amazon’s Elastic Compute Cloud (EC2) and Simple Storage Service (S3) cloud-based data processing and storage services.
AWS offers a broad set of global compute, storage, database, analytics, application, and deployment services that help organizations move faster, lower IT costs, and scale applications.
AWS is trusted by the largest enterprises and the hottest start-ups to power a wide variety of workloads including web and mobile applications, data processing and warehousing, storage, archive, and many others.
Big Data on AWS introduces cloud-based big data solutions such as Amazon Elastic MapReduce (EMR), Amazon Redshift, Amazon Kinesis and the rest of the AWS big data platform.
http://www.strategism.org/wp-content/uploads/2015/06/amazon-800x600.jpg
8. 3. Hortonworks
Hortonworks is one of the few providers that offer 100% open-source Hadoop technology without any proprietary additions.
Hortonworks was also the first to integrate support for Apache HCatalog, which creates “metadata” – data about data – simplifying the process of sharing your data across other layers of service such as Apache Hive or Pig.
HDP (Hortonworks Data Platform) is the enterprise-ready open-source Apache™ Hadoop® distribution based on a centralized architecture (YARN).
HDP addresses the complete needs of data-at-rest, powers real-time customer applications and delivers robust analytics that accelerate decision making and innovation.
Hortonworks is all about data: data-in-motion, data-at-rest, and Modern Data Applications. Its Connected Data Platforms help customers create actionable intelligence to transform their businesses.
http://hortonworks.com/wp-content/uploads/2014/03/11.png
9. 4. Cloudera
Cloudera is the most popular provider and has the largest number of installations running.
Cloudera contributes Impala, which brings real-time massively parallel processing of Big Data to Hadoop.
Cloudera's open-source Apache Hadoop distribution, CDH (Cloudera Distribution Including Apache Hadoop), targets enterprise-class deployments of that technology.
Cloudera says that more than 50% of its engineering output is donated upstream to the various Apache-licensed open-source projects (Apache Hive, Apache Avro, Apache HBase, and so on) that combine to form the Hadoop platform.
Cloudera is a sponsor of the Apache Software Foundation.
http://blog.cloudera.com/wp-content/uploads/2013/06/search.png
10. 5. MapR
MapR uses some differing concepts, such as native support for UNIX file systems rather than HDFS.
MapR Technologies is spearheading development of the Apache Drill project, which provides advanced tools for interactive, real-time querying of big datasets.
The MapR Converged Data Platform is the industry’s only platform to integrate the power of Hadoop and Spark with global event streaming, real-time database capabilities, and enterprise storage.
The MapR Hadoop distribution replaces HDFS with its proprietary file system, MapR-FS, which is designed to provide more efficient data management, reliability and ease of use.
The MapR Converged Data Platform supports big data storage and processing through the Apache collection of Hadoop products, as well as its added-value components.
http://2s7gjr373w3x22jf92z99mgm5w-wpengine.netdna-ssl.com/wp-content/uploads/2016/03/Mapr_Zeta_4-1.png