This document provides an overview of distributed databases and the Yahoo! Cloud Serving Benchmark (YCSB). It discusses NoSQL databases Cassandra and HBase and how YCSB can be used to benchmark their performance. Experiments were conducted on Amazon EC2 using YCSB to load data and run workloads on Cassandra and HBase clusters. The results showed Cassandra had lower latency and higher throughput than HBase. YCSB provides a way to compare the performance of different databases.
HBase can be an intimidating beast for someone considering its adoption. For what kinds of workloads is it well suited? How does it integrate into the rest of my application infrastructure? What are the data semantics upon which applications can be built? What are the deployment and operational concerns? In this talk, I'll address each of these questions in turn. As supporting evidence, both high-level application architecture and internal details will be discussed. This is an interactive talk: bring your questions and your use-cases!
HBaseCon 2012 | HBase, the Use Case in eBay Cassini (Cloudera, Inc.)
The eBay marketplace has been working hard on its next-generation search infrastructure and software system, code-named Cassini. The new search engine processes over 250 million search queries and serves more than 2 billion page views each day. Its indexing platform is based on Apache Hadoop and Apache HBase. Apache HBase is a distributed persistence layer built on Hadoop to support billions of updates per day. Its easy sharding, fast writes and table scans, very fast bulk data loading, and natural integration with Hadoop provide the cornerstones for successful continuous index builds. We will share the technical details, as well as the difficulties and challenges we have gone through and are still facing in the process.
HBaseCon 2015: Apache Phoenix - The Evolution of a Relational Database Layer... (HBaseCon)
Phoenix has evolved to become a full-fledged relational database layer over HBase data. We'll discuss the fundamental principles of how Phoenix pushes computation to the server and why this yields performance that enables direct support of low-latency applications, along with some major new features. Next, we'll outline our approach for transaction support in Phoenix, a work in progress, and discuss the pros and cons of the various approaches. Lastly, we'll examine the current means of integrating Phoenix with the rest of the Hadoop ecosystem.
This document provides an overview of Apache Hadoop and HBase. It begins with an introduction to why big data is important and how Hadoop addresses storing and processing large amounts of data across commodity servers. The core components of Hadoop, HDFS for storage and MapReduce for distributed processing, are described. An example MapReduce job is outlined. The document then introduces the Hadoop ecosystem, including Apache HBase for random read/write access to data stored in Hadoop. Real-world use cases of Hadoop at companies like Yahoo, Facebook and Twitter are briefly mentioned before addressing questions.
HBaseCon 2012 | HBase and HDFS: Past, Present, Future - Todd Lipcon, Cloudera (Cloudera, Inc.)
Apache HDFS, the file system on which HBase is most commonly deployed, was originally designed for high-latency, high-throughput batch analytic systems like MapReduce. Over the past two to three years, the rising popularity of HBase has driven many enhancements in HDFS to improve its suitability for real-time systems, including durability support for write-ahead logs, high availability, and improved low-latency performance. This talk will give a brief history of some of the enhancements from Hadoop 0.20.2 through 0.23.0, discuss some of the most exciting work currently under way, and explore some of the future enhancements we expect to develop in the coming years. We will include both high-level overviews of the new features and practical tips and benchmark results from real deployments.
With the public confession of Facebook, HBase is on everyone's lips when it comes to the discussion around the new "NoSQL" area of databases. In this talk, Lars will introduce and present a comprehensive overview of HBase. This includes the history of HBase, the underlying architecture, available interfaces, and integration with Hadoop.
Moderated by Lars Hofhansl (Salesforce), with Matteo Bertozzi (Cloudera), John Leach (Splice Machine), Maxim Lukiyanov (Microsoft), Matt Mullins (Facebook), and Carter Page (Google)
The future of HBase, via a variety of viewpoints.
HBaseCon 2013: Project Valta - A Resource Management Layer over Apache HBase (Cloudera, Inc.)
Valta is a resource management layer over Apache HBase that aims to address issues with shared workloads on a single HBase cluster. It introduces resource limits for HBase clients to prevent ill-behaved clients from monopolizing cluster resources. This is an initial step, and more work is needed to address request scheduling across HBase, HDFS, and lower layers to meet service level objectives. The document outlines ideas for full-stack request scheduling, auto-tuning systems based on high-level SLOs, and using multiple read replicas to improve latency.
Optimizing Apache HBase for Cloud Storage in Microsoft Azure HDInsight (HBaseCon)
Nitin Verma, Pravin Mittal, and Maxim Lukiyanov (Microsoft)
This session presents our success story of enabling a big internal customer on Microsoft Azure’s HBase service along with the methodology and tools used to meet high-throughput goals. We will also present how new features in HBase (like BucketCache and MultiWAL) are helping our customers in the medium-latency/high-bandwidth cloud-storage scenario.
HBaseCon 2012 | Mignify: A Big Data Refinery Built on HBase - Internet Memory... (Cloudera, Inc.)
Mignify is a platform for collecting, storing, and analyzing Big Data harvested from the web. It aims to provide easy access to focused, structured information extracted from Web data flows. It consists of a distributed crawler, resource-oriented storage based on HDFS and HBase, and an extraction framework that produces filtered, enriched, and aggregated data from large document collections, including the temporal aspect. The whole system is deployed on an innovative hardware architecture comprising a large number of small, low-power nodes. This talk will cover the decisions made during the design and development of the platform, from both technical and functional perspectives. It will introduce the cloud infrastructure, the LTE-like ingestion of the crawler output into HBase/HDFS, and the triggering mechanism for analytics based on a declarative filter/extraction specification. The design choices will be illustrated with a pilot application targeting Daily Web Monitoring in the context of a national domain.
Speaker: Varun Sharma (Pinterest)
Over the past year, HBase has become an integral component of Pinterest's storage stack. HBase has enabled us to quickly launch and iterate on new products and create amazing pinner experiences. This talk briefly describes some of these applications, the underlying schema, and how our HBase setup stays highly available and performant despite billions of requests every week. It will also include some performance tips for running on SSDs. Finally, we will talk about a homegrown serving technology we built from a mashup of HBase components that has gained wide adoption across Pinterest.
Analyzing Real-World Data with Apache Drill (tshiran)
This document provides an overview of Apache Drill, an open source SQL query engine for analysis of both structured and unstructured data. It discusses how Drill allows for schema-free querying of data stored in Hadoop, NoSQL databases and other data sources using SQL. The document outlines some key features of Drill, such as its flexible data model, ability to discover schemas on the fly, and distributed execution architecture. It also presents examples of using Drill to analyze real-world data from sources like HDFS, MongoDB and more.
The document provides an overview of the state of the Apache HBase database project. It discusses the project goals of availability, stability, and scalability. It also summarizes the mature codebase, active development areas like region replicas and ProcedureV2, and the growing ecosystem of SQL interfaces and other Hadoop components integrated with HBase. Recent releases include 1.1.2, which improved scanners and introduced quotas and throttling, and the 1.0 release, which adopted semantic versioning and added region replicas.
Apache Spark on Apache HBase: Current and Future (HBaseCon)
- The document discusses the Spark HBase Connector, which combines Spark and HBase for fast access to key-value data. It allows running Spark and SQL queries directly on top of HBase tables.
- It provides high performance through data locality, partition pruning, and column pruning to reduce network overhead. Operations include bulk load, bulk put, bulk delete, and language-integrated queries.
- The connector achieves these improvements through the Spark Catalyst engine for query planning and optimization, and by implementing HBase as an external data source with built-in filtering capabilities.
Efficient In-Situ Processing of Various Storage Types on Apache Tajo (Hyunsik Choi)
The document discusses Apache Tajo, an open source data warehouse system that supports efficient in-situ processing of various storage types. It describes Tajo's architecture, how it supports different storage backends like HDFS, S3, HBase and data formats. The key points are:
1) Tajo provides a unified interface to integrate and process data from various storage systems and formats like HDFS, S3, HBase, in a single system.
2) It uses a pluggable storage and data format architecture with tablespaces to abstract different physical storage configurations.
3) Operations can be pushed down to underlying storages for optimization during query execution.
4) Currently supported storages include HDFS, S3, and HBase.
Harmonizing Multi-tenant HBase Clusters for Managing Workload Diversity (HBaseCon)
Speakers: Dheeraj Kapur, Rajiv Chittajallu & Anish Mathew (Yahoo!)
In early 2013, Yahoo! introduced multi-tenancy to HBase to offer it as a platform service for all Hadoop users. A certain degree of customization per tenant (a user or a project) was achieved through RegionServer groups, namespaces, and customized configs for each tenant. This talk covers how to accommodate the diverse needs of individual tenants on the cluster, as well as operational tips and techniques that allow Yahoo! to automate the management of multi-tenant clusters at petabyte scale without errors.
Tajo Seoul Meetup July 2015 - What's New in Tajo 0.11 (Hyunsik Choi)
This document summarizes the key features and updates in Apache Tajo 0.11, an open source distributed data warehouse system for big data. Some major new features in 0.11 include native support for nested data types and JSON, loose schema support for self-describing formats, query federation across multiple data sources, and tablespace support for reusing storage configurations. Performance and stability improvements were also made, along with expanded support for data formats, storages, and Python UDFs. The document encourages involvement through the Tajo community.
HBase Data Modeling and Access Patterns with Kite SDK (HBaseCon)
This document discusses the Kite SDK and how it provides a higher-level API for developing Hadoop data applications. It introduces the Kite Datasets module, which defines a unified storage interface for datasets. It describes how Kite implements partitioning strategies to map data entities to storage partitions, and column mappings to define how data fields are stored in HBase tables. The document provides examples of using Kite datasets to randomly access and update data stored in HBase.
HBaseCon 2013: Compaction Improvements in Apache HBase (Cloudera, Inc.)
This document discusses improvements to compaction in Apache HBase. It begins with an overview of what compactions are and how they improve read performance in HBase. It then describes the default compaction algorithm and improvements made, including exploring selection and off-peak compactions. The document also covers making compactions more pluggable and enabling tuning on a per-table/column family basis. Finally, it proposes algorithms for different scenarios, such as level and stripe compactions, to improve compaction performance.
HBaseCon 2012 | Living Data: Applying Adaptable Schemas to HBase - Aaron Kimb... (Cloudera, Inc.)
HBase application developers face a number of challenges: schema management is performed at the application level, decoupled components of a system can break one another in unexpected ways, less-technical users cannot easily access data, and evolving data collection and analysis needs are difficult to plan for. In this talk, we describe a schema management methodology based on Apache Avro that enables users and applications to share data in HBase in a scalable, evolvable fashion. By adopting these practices, engineers independently using the same data have guarantees on how their applications interact. As data collection needs change, applications are resilient to drift in the underlying data representation. This methodology results in a data dictionary that allows less-technical users to understand what data is available to them for analysis and inspect data using general-purpose tools (for example, export it via Sqoop to an RDBMS). And because of Avro’s cross-language capabilities, HBase’s power can reach new domains, like web apps built in Ruby.
This document provides an overview of Apache Phoenix, including:
- A brief history of how it originated as an internal project at Salesforce before becoming a top-level Apache project.
- An architectural overview explaining that Phoenix provides a SQL interface for Apache HBase and runs on top of HDFS to enable next-generation data applications on HBase.
- Descriptions of Phoenix's key capabilities like SQL support, transactions, user-defined functions, and secondary indexes to boost query performance.
- Examples of how Phoenix can be used for common scenarios like analyzing server metrics data.
HBase Status Report - Hadoop Summit Europe 2014 (larsgeorge)
This document provides a summary of new features and improvements in recent versions of Apache HBase, a distributed, scalable, big data store. It discusses major changes and enhancements in HBase 0.92+, 0.94+, and 0.96+, including new HFile formats, coprocessors, caching improvements, performance tuning, and more. The document is intended to bring readers up to date on the current state and capabilities of HBase.
Using HBase Co-Processors to Build a Distributed, Transactional RDBMS - Splic... (Chicago Hadoop Users Group)
John Leach, co-founder and CTO of Splice Machine, with 15+ years of software development and machine learning experience, will discuss how to use HBase co-processors to build an ANSI-99 SQL database with 1) parallelization of SQL execution plans, 2) ACID transactions with snapshot isolation, and 3) consistent secondary indexing.
Transactions are critical in traditional RDBMSs because they ensure reliable updates across multiple rows and tables. Most operational applications require transactions, but even analytics systems use transactions to reliably update secondary indexes after a record insert or update.
In the Hadoop ecosystem, HBase is a key-value store with real-time updates, but it does not have multi-row, multi-table transactions, secondary indexes, or a robust query language like SQL. Combining SQL with a full transactional model over HBase opens a whole new set of OLTP and OLAP use cases for Hadoop that were traditionally reserved for RDBMSs like MySQL or Oracle. However, a transactional HBase system has the advantage of scaling out with commodity servers, leading to 5x-10x cost savings over traditional databases like MySQL or Oracle.
HBase co-processors, introduced in release 0.92, provide a flexible and high-performance framework to extend HBase. In this talk, we show how we used HBase co-processors to support a full ANSI SQL RDBMS without modifying the core HBase source. We will discuss how endpoint transactions are used to serialize SQL execution plans over to regions so that computation is local to where the data is stored. Additionally, we will show how observer co-processors simultaneously support both transactions and secondary indexing.
The talk will also discuss how Splice Machine extended the work of Google Percolator, Yahoo Labs' OMID, and the University of Waterloo on distributed snapshot isolation for transactions. Lastly, performance benchmarks will be provided, including full TPC-C and TPC-H results that show how Hadoop/HBase can be a replacement for traditional RDBMS solutions.
To view the accompanying slide deck: http://www.slideshare.net/ChicagoHUG/
Impala is a massively parallel processing SQL query engine for Apache Hadoop. It allows real-time queries on large datasets by using a new execution engine written in C++ instead of Java and MapReduce. Impala can process queries in milliseconds to hours by distributing query execution across Hadoop clusters. It uses existing Hadoop file formats and metadata but is optimized for performance through techniques like runtime code generation and in-memory processing.
The Hive is a marketing and design company that offers flexible services to clients. They work to understand their clients' products, services, audiences and goals. The Hive can take on projects individually or work on a retained monthly basis according to a strategic plan. They prioritize sustainability and have a supply chain of specialist partners in areas like PR, web development and media. The Hive believes great ideas come from understanding clients and inspiring enthusiasm.
HBase is a scalable NoSQL database modeled after Google's Bigtable. It is built on top of HDFS for storage, and uses Zookeeper for distributed coordination and failover. Data in HBase is stored in tables and sorted by row key, with columns grouped into families and cells containing values and timestamps. HBase tables are split into regions for scalability and fault tolerance, with a master server coordinating region locations across multiple region servers.
The document summarizes several popular options for SQL on Hadoop including Hive, SparkSQL, Drill, HAWQ, Phoenix, Trafodion, and Splice Machine. Each option is reviewed in terms of key features, architecture, usage patterns, and strengths/limitations. While all aim to enable SQL querying of Hadoop data, they differ in support for transactions, latency, data types, and whether they are native to Hadoop or require separate processes. Hive and SparkSQL are best for batch jobs while Drill, HAWQ and Splice Machine provide lower latency but with different integration models and capabilities.
This presentation covers several aspects of modeling data and domains with a graph database like Neo4j. The graph data model allows high-fidelity modeling. The first-class relationships of the graph model allow much higher degrees of normalization than you would use in a relational database.
Video here: https://vimeo.com/67371996
Introduction to Apache HBase, MapR Tables and Security (MapR Technologies)
This talk will focus on two key aspects of applications that use the HBase APIs. The first part will provide a basic overview of how HBase works, followed by an introduction to the HBase APIs with a simple example. The second part will extend what we've learned to secure the HBase application running on MapR's industry-leading Hadoop.
Keys Botzum is a Senior Principal Technologist with MapR Technologies. He has over 15 years of experience in large scale distributed system design. At MapR his primary responsibility is working with customers as a consultant, but he also teaches classes, contributes to documentation, and works with MapR engineering. Previously he was a Senior Technical Staff Member with IBM and a respected author of many articles on WebSphere Application Server as well as a book. He holds a Masters degree in Computer Science from Stanford University and a B.S. in Applied Mathematics/Computer Science from Carnegie Mellon University.
Introduction to HBase - Phoenix HUG 5/14 (Jeremy Walsh)
This document provides an overview of using HBase and MapR Tables to implement an employee database. It discusses storing employee data in column families, with dynamic salary columns stored by year. An Employee class is used to represent the data. Methods are shown for getting a table handle, retrieving rows, and parsing the result into an Employee object. The example illustrates how HBase and MapR Tables can be used to build a flexible schema for semi-structured employee data.
HBase is a distributed, scalable, big data store that is built on top of HDFS. It is a column-oriented NoSQL database that provides fast lookups and updates for large tables. Key features include scalability, automatic failover, consistent reads/writes, sharding of tables, and Java and REST APIs for client access. HBase is not a replacement for an RDBMS as it does not support SQL, joins, or relations between tables.
Big Data Everywhere Chicago: Unleash the Power of HBase Shell - Conversant (BigDataEverywhere)
Jayesh Thakrar, Senior Systems Engineer, Conversant
The venerable HBase shell is often regarded as a simple utility for performing basic DDL and maintenance activities. However, it is in fact a powerful, interactive programming environment, primarily due to the JRuby engine under the covers. In this presentation, I'll describe its JRuby heritage and show some of the things that can be done with "irb" (the interactive Ruby shell), as well as show how to exploit JRuby and Java integration via concrete working examples. In addition, I will demonstrate how the "shell" can be used in Hadoop streaming to quickly perform complex, large-volume batch jobs.
This document discusses performing data science on HBase using the WibiData platform. It introduces WibiData Language (WDL), which allows analyzing data stored in HBase columns in a concise and interactive way using Scala and Apache Crunch. The document demonstrates building a histogram of editor metrics by reading user data from an HBase table, filtering and binning average edit deltas, and visualizing the results. WDL aims to make HBase data exploration more accessible for data scientists compared to other frameworks like Hive and Pig.
NoSQL HBase Schema Design and SQL with Apache Drill (Carol McDonald)
The document provides an overview of HBase, including:
- HBase is a column-oriented NoSQL database modeled after Google's Bigtable. It is designed to handle large volumes of sparse data across clusters in a distributed fashion.
- Data in HBase is stored in tables containing rows, column families, columns, and versions. Tables are partitioned into regions distributed across region servers. The HMaster manages the cluster and Zookeeper coordinates operations.
- Common operations on HBase include put (insert/update), get, scan, and delete. The meta table, whose location is stored in ZooKeeper, maps rows to their regions; this allows clients to efficiently access data in HBase's distributed architecture.
Hadoop is an open-source framework for storing and processing large datasets in a distributed computing environment. It allows for the storage and analysis of datasets that are too large for single servers. The document discusses several key Hadoop components including HDFS for storage, MapReduce for processing, HBase for column-oriented storage, Hive for SQL-like queries, Pig for data flows, and Sqoop for data transfer between Hadoop and relational databases. It provides examples of how each component can be used and notes that Hadoop is well-suited for large-scale batch processing of data.
This document provides an overview of Hive and HBase. It discusses how Hive allows SQL-like queries over data stored in Hadoop files, and how data can be loaded into and manipulated within Hive tables. It also describes HBase as a column-oriented NoSQL database built on Hadoop that allows for fast random reads and updates of large datasets. Key concepts covered include HiveQL, user defined functions, dynamic partitioning, and loading data. For HBase, it discusses tables, rows, columns, and cells as well as its architecture, client APIs, and integration with MapReduce.
This document provides an overview and objectives of a session on getting started with HBase application development. It discusses why NoSQL and HBase are needed due to limitations of relational databases in scaling horizontally to handle big data. It provides an introduction to the HBase data model, architecture, and basic operations like put, get, scan, and delete. It explains how HBase stores data in a sorted map structure and how writes flow through the write ahead log, memstore, and are flushed to HFiles on disk.
HBase is a distributed column-oriented database built on top of HDFS. It provides big data storage for Hadoop and allows for fast random read/write access and incremental addition of data. HBase tables are split into regions that are distributed across region servers. The master server coordinates the region servers and ZooKeeper maintains metadata. Common operations include get, scan, put, and delete. HBase is well-suited for applications requiring fast random read/write versus HDFS which is better for batch processing.
Apache Drill is a distributed SQL query engine that enables fast analytics over NoSQL databases and distributed file systems. It has a plugin-based architecture that allows it to access different data sources. For NoSQL databases, Drill leverages secondary indexes to generate index-based query plans for predicates on non-key columns. For distributed file systems like HDFS, Drill performs partition pruning based on directory metadata and filter pushdown based on Parquet row group statistics to speed up queries. Drill's extensible framework allows data sources to provide metadata like indexes, statistics, and partitioning functions to optimize query execution.
This document provides an overview of NoSQL databases, including what they are, how they differ from relational databases, and examples like HBase and Cassandra. NoSQL databases are designed for huge data sizes, high performance, scalability, and availability. They use techniques like partitioning, eventual consistency, flexible data models, and log-structured storage. HBase is based on BigTable and runs on Hadoop, while Cassandra is based on Dynamo and works across platforms. Both are open-source and widely adopted.
Hadoop and HBase Experiences in the Perf-Log Project (Mao Geng)
This document discusses experiences using Hadoop and HBase in the Perf-Log project. It provides an overview of the Perf-Log data format and architecture, describes how Hadoop and HBase were configured, and gives examples of using MapReduce jobs and HBase APIs like Put and Scan to analyze log data. Key aspects covered include matching Hadoop and HBase versions, running MapReduce jobs, using column families in HBase, and filtering Scan results.
Hypertable is an open source, massively scalable database modeled after Google's Bigtable. It is written in C++ for high performance and supports Apache Thrift interfaces for popular languages. Hypertable is actively developed, has over 8 years of development, and supports features like namespaces, atomic counters, secondary indexes, regex filtering, and Hadoop integration. It is designed for horizontal scalability and sparse data structures, allowing for high throughput on both reads and writes even with large datasets.
Large Scale Machine Learning with Apache Spark (Cloudera, Inc.)
Spark offers a number of advantages over its predecessor MapReduce that make it ideal for large-scale machine learning. For example, Spark includes MLLib, a library of machine learning algorithms for large data. The presentation will cover the state of MLLib and the details of some of the scalable algorithms it includes.
Adding Value to HBase with IBM InfoSphere BigInsights and BigSQL (Piotr Pruski)
This is the extended deck I used for my presentation at the Information On Demand 2013 conference for Session Number 1687 - Adding Value to HBase with IBM InfoSphere BigInsights and BigSQL.
This presentation covers accessing HBase using Big SQL. It starts by going over general HBase concepts, then delves into how Big SQL adds an SQL layer on top of HBase (via the HBase storage handler), secondary index support, queries, etc.
HUG France Feb 2016 - Migration of Structured Data Between Hadoop and RDBMS... (Modern Data Stack France)
Migration of structured data between Hadoop and RDBMS, by Louis Rabiet (Squid Solution)
By extracting data stored in a relational database with an advanced BI tool, and sending the data to Tachyon via Kafka, several Spark sessions can work on the same dataset while limiting duplication. This yields controlled-cost communication between the source database and Spark, which makes it possible to dynamically reintroduce modified data with MLlib while working on up-to-date data. Preliminary results will be shared during this presentation.
Apache Spark presentation at HasGeek's Fifth Elephant
https://fifthelephant.talkfunnel.com/2015/15-processing-large-data-with-apache-spark
Covering Big Data Overview, Spark Overview, Spark Internals and its supported libraries
Docker overview: a detailed introduction to Docker, its architecture, components, and orchestration.
Meetup Details of my presentation here:
http://www.meetup.com/DevOps-Meetup/events/222569192/
http://www.meetup.com/Scale-Warriors-of-Bangalore/events/223008532/
Introduction to Flocker, a lightweight volume and container manager.
Meetup details of my presentation:
http://www.meetup.com/Docker-Bangalore/events/222476025/
Covers different types of big data benchmarking and the various suites, with details of TeraSort and a demo with TPCx-HS.
Meetup Details of presentation:
http://www.meetup.com/lspe-in/events/203918952/
Go is a compiled, garbage-collected programming language that supports concurrent programming through lightweight threads called goroutines and communication between goroutines via channels. It aims to provide both high-level and low-level programming with a clean syntax and minimal features. The document discusses Go's concurrency model, syntax, goroutines, channels, and use cases including cloud infrastructure, mobile development, and audio synthesis.
Kubernetes is an open-source system for managing containerized applications across multiple hosts. It groups related containers into pods that are scheduled together on the same host. Key components include the master node for managing the cluster, minion nodes for hosting pods, and kubelet software for running pods and managing containers. Pods allow tight coupling of related containers, while labels provide loose organization of cooperating pods.
A detailed presentation giving an overview of SDN (Software-Defined Networking). It starts from basics such as the different controllers and touches on some technical details.
Covers the terminology used, OpenFlow, controllers, OpenDaylight, Cisco ONE, Google B4, NFV, etc.
This document provides an overview of Docker, including what it is, how it compares to virtual machines and containers, its architecture and features. It discusses that Docker virtualizes using lightweight Linux containers rather than full virtual machines, and how this provides benefits like smaller size and faster performance compared to VMs. It also covers Docker's components like the Docker Engine, Hub and images, and how Docker can be used to develop, ship and run applications on any infrastructure.
This presentation provides an introduction to and detailed explanation of Java 8 lambdas and streams. The lambda section covers method references and default methods; the streams section covers stream operations, types of streams, and collectors. Streams are further elaborated with parallel streams and a benchmarking comparison of sequential and parallel streams.
Additional slides cover Optional, Spliterators, and certain projects based on lambdas and streams.
A detailed presentation on the capabilities of in-memory analytics using Apache Spark: an overview of Spark covering its programming model, cluster mode with Mesos, supported operations, and a comparison with Hadoop MapReduce. It elaborates on the Spark stack, including Shark, Streaming, MLlib, and GraphX.
3. NoSQL
• NoSQL is all about scalability
• Scaling to size
• Scaling to complexity
• Delivers heavy read/write (R/W) workloads
• Data duplication and denormalization are first-class citizens
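As a sketch of what treating denormalization as a first-class citizen looks like in client code, the hypothetical snippet below (the DEVICE_BY_STATUS table and its columns are illustrative, not from the slides) writes the same fact twice: once to the device table and once under a status-prefixed row key, so that "all active devices" becomes a cheap prefix scan instead of a join. It uses the classic HTable/Put API that appears later in this deck.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DenormalizedWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable devices = new HTable(conf, "DEVICE_DETAIL");
        HTable byStatus = new HTable(conf, "DEVICE_BY_STATUS"); // hypothetical lookup table

        // Write the device row, keyed by device id.
        Put device = new Put(Bytes.toBytes("Device1"));
        device.add(Bytes.toBytes("CONTRACT_INFO"), Bytes.toBytes("STATUS"), Bytes.toBytes("ACTIVE"));
        devices.put(device);

        // Duplicate the same fact under a status-prefixed key; reads never join.
        Put index = new Put(Bytes.toBytes("ACTIVE|Device1"));
        index.add(Bytes.toBytes("BASIC_INFO"), Bytes.toBytes("DEVICE_ID"), Bytes.toBytes("Device1"));
        byStatus.put(index);

        devices.close();
        byStatus.close();
    }
}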
8. Recheck...
• What is the CAP theorem?
• Does NoSQL support transactions?
• What are the NoSQL types?
9. HBase
• Scalable, distributed data store
• Sorted map of maps / key-value store
• Open-source avatar of Google's Bigtable
• Sparse
• Multi-dimensional
• Tightly integrated with Hadoop
• Not an RDBMS
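One way to internalize "sorted map of maps" is to model it with plain Java collections. This is purely a conceptual sketch of the data model, not how HBase is implemented: a table maps sorted row keys to column families, which map sorted column qualifiers to timestamped values.

import java.util.NavigableMap;
import java.util.TreeMap;

public class SortedMapOfMaps {
    public static void main(String[] args) {
        // rowKey -> columnFamily -> qualifier -> timestamp -> value
        NavigableMap<String, NavigableMap<String, NavigableMap<String, NavigableMap<Long, String>>>> table =
                new TreeMap<>();

        table.computeIfAbsent("Device1", k -> new TreeMap<>())
             .computeIfAbsent("BASIC_INFO", k -> new TreeMap<>())
             .computeIfAbsent("IP_ADDR", k -> new TreeMap<>())
             .put(1L, "10.10.10.10");

        // Rows are kept in sorted row-key order, which is what makes
        // range scans over HBase row keys efficient.
        System.out.println(table.firstKey()); // Device1
    }
}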
15. Important Terms
• Table
• Consists of rows and columns
• Row
• Has a bunch of columns.
• Identified by a rowkey ('primary' key)
• Column Qualifier
• Dynamic column name
• Column Family
• Column groups, both logical and physical (similar access patterns)
• Cell
• The actual element that contains the data for a row-column insertion
• Version
• Every cell can have multiple versions
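To tie these terms to code, here is a minimal sketch, assuming the classic pre-1.0 admin API used elsewhere in this deck, that creates the DEVICE_DETAIL table from the shell examples below with two column families, keeping three versions per cell in CONTRACT_INFO:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateDeviceTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("DEVICE_DETAIL"));
        desc.addFamily(new HColumnDescriptor("BASIC_INFO"));

        // Keep up to 3 versions of every cell in this family,
        // mirroring the `alter` shell command shown later.
        HColumnDescriptor contract = new HColumnDescriptor("CONTRACT_INFO");
        contract.setMaxVersions(3);
        desc.addFamily(contract);

        admin.createTable(desc);
        admin.close();
    }
}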
16. Logical & Physical Structure (tall v/s wide tables; less storage footprint)

Logical representation of an HBase table (we'll look at what it means to Get() row r5):

         CF1                CF2
r1       c1:v1              c1:v9, c6:v2
r2       c1:v2, c3:v6
r3       c2:v3              c5:v6
r4       c2:v4
r5       c1:v1, c3:v5       c7:v8

Actual physical storage of the table: one HFile per column family, holding sorted KeyValue entries.

HFile for CF1:
r1:CF1:c1:t1:v1
r2:CF1:c1:t2:v2
r2:CF1:c3:t3:v6
r3:CF1:c2:t1:v3
r4:CF1:c2:t1:v4
r5:CF1:c1:t2:v1
r5:CF1:c3:t3:v5

HFile for CF2:
r1:CF2:c1:t1:v9
r1:CF2:c6:t4:v2
r3:CF2:c5:t4:v6
r5:CF2:c7:t3:v8

Result object returned for a Get() on row r5 (a collection of KeyValue objects):
r5:CF1:c1:t2:v1
r5:CF1:c3:t3:v5
r5:CF2:c7:t3:v8

Structure of a KeyValue object:
Key = Row Key + Col Fam + Col Qual + Time Stamp; the cell's data is the Value.
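Expressed with the Java client API introduced on the next slides, the same Get() on row r5 looks roughly like the fragment below (a sketch assuming an open HTable handle named table):

import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetRowR5 {
    static void printRow(HTable table) throws IOException {
        // Fetch row r5; the Result carries one KeyValue per cell,
        // gathered from the HFiles of every column family.
        Get get = new Get(Bytes.toBytes("r5"));
        Result result = table.get(get);
        for (KeyValue kv : result.raw()) {
            System.out.println(Bytes.toString(kv.getRow()) + ":" +
                    Bytes.toString(kv.getFamily()) + ":" +
                    Bytes.toString(kv.getQualifier()) + ":" +
                    kv.getTimestamp() + " = " +
                    Bytes.toString(kv.getValue()));
        }
    }
}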
17. (J)Ruby Shell Commands
• General
• DDL
• Create
• Describe
• Namespace
• DML
• Put
• Get
• Scan
• Delete
• Tools
• Replication
• Snapshot
• Security
• Visibility
Creating Table:
create 'DEVICE_DETAIL','BASIC_INFO','CONTRACT_INFO'
Data Generation:
put 'DEVICE_DETAIL','Device1','BASIC_INFO:IP_ADDR','10.10.10.10'
put 'DEVICE_DETAIL','Device2','BASIC_INFO:IP_ADDR','20.20.20.20'
Describing Table:
describe 'DEVICE_DETAIL'
Altering Table:
alter 'DEVICE_DETAIL',{NAME => 'CONTRACT_INFO',VERSIONS => 3 }
Update Data:
put 'DEVICE_DETAIL','Device2','CONTRACT_INFO:CONTRACT_NUMBER','22222222'
Multi-Version Example:
get 'DEVICE_DETAIL','Device2', {COLUMN=>'CONTRACT_INFO:CONTRACT_NUMBER', VERSIONS=>2}
Scan Info:
scan 'DEVICE_DETAIL'
Scan with Filter:
scan 'DEVICE_DETAIL' , { COLUMNS => 'CONTRACT_INFO:STATUS', LIMIT => 10, FILTER =>
"ValueFilter( =, 'binary:IN_ACTIVE' )" }
Delete Info:
delete 'DEVICE_DETAIL','Device2','CONTRACT_INFO:STATUS'
18. Java API
• HTable
• HBaseAdmin
• HTablePool
• Get
• Put
• Delete
• Scan
• Increment
• HTableDescriptor
• HTableInterface
• Result
• ResultScanner
• KeyValue
HTable table = new HTable(configuration, hbasetablename);
// Put: insert/update one cell (row key, column family, qualifier, value)
Put row = new Put(Bytes.toBytes(rowKey));
row.add(Bytes.toBytes(columnFamily), Bytes.toBytes(key), Bytes.toBytes(value));
table.put(row); // the Put is not persisted until handed to the table
// Get: fetch a whole row by its row key
Get getKey = new Get(Bytes.toBytes(rowKey));
Result result = table.get(getKey);
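The API list above also names Scan and ResultScanner, which the snippet doesn't exercise. Here is a minimal sketch of scanning a single column, again assuming an open HTable handle and the DEVICE_DETAIL schema from the shell examples:

import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanStatuses {
    static void scanStatuses(HTable table) throws IOException {
        // Scan only CONTRACT_INFO:STATUS across all rows.
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("CONTRACT_INFO"), Bytes.toBytes("STATUS"));
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                System.out.println(Bytes.toString(r.getRow()) + " => " +
                        Bytes.toString(r.getValue(Bytes.toBytes("CONTRACT_INFO"),
                                                  Bytes.toBytes("STATUS"))));
            }
        } finally {
            scanner.close(); // always release the server-side scanner
        }
    }
}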
19. Spark HBase
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

// create configuration
val config = HBaseConfiguration.create()
config.set("hbase.zookeeper.quorum", "localhost")
config.set("hbase.zookeeper.property.clientPort", "2181")
config.set(TableInputFormat.INPUT_TABLE, "hbaseTableName") // "hbase.mapreduce.inputtable"
// read data: the new-API TableInputFormat is used with newAPIHadoopRDD
val hbaseData = sparkContext.newAPIHadoopRDD(config, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[Result])
// count rows
println(hbaseData.count)
28. Use Case
• Canonical use case: storing crawl data and indices for search

Web Search powered by Bigtable

Indexing the Internet:
1. Crawlers constantly scour the Internet for new pages. Those pages are stored as individual records in Bigtable.
2. A MapReduce job runs over the entire table, generating search indexes for the Web Search application.

Searching the Internet:
3. The user initiates a Web Search request.
4. The Web Search application queries the search indexes and retrieves matching documents directly from Bigtable.
5. Search results are presented to the user.
Most NoSQL stores lack true ACID transactions, although a few recent systems, such as FairCom c-treeACE, Google Spanner (though technically a NewSQL database) and FoundationDB, have made them central to their designs.
Eventual consistency is a consistency model used in distributed computing to achieve high availability. It informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.
Eventually consistent services are often classified as providing BASE (Basically Available, Soft state, Eventual consistency) semantics, in contrast to traditional ACID (Atomicity, Consistency, Isolation, Durability) guarantees.
Eric Brewer’s CAP theorem says that if you want consistency, availability, and partition tolerance, you have to settle for two out of three. (For a distributed system, partition tolerance means the system will continue to work unless there is a total network failure. A few nodes can fail and the system keeps going.)
Consistency means that each client always has the same view of the data.
Availability means that all clients can always read and write.
Partition tolerance means that the system works well across physical network partitions.
HBase Master web UI (default port in these releases): http://localhost:60010/master-status