This document provides an overview of Apache Phoenix, including:
- A brief history of how it originated as an internal project at Salesforce before becoming a top-level Apache project.
- An architectural overview explaining that Phoenix provides a SQL interface for Apache HBase and runs on top of HDFS to enable next-generation data applications on HBase.
- Descriptions of Phoenix's key capabilities like SQL support, transactions, user-defined functions, and secondary indexes to boost query performance.
- Examples of how Phoenix can be used for common scenarios like analyzing server metrics data.
Cutting-edge Hadoop clusters are bound to need custom (add-on) services that are not available in the Hadoop distribution of their choice. Agility is crucial for companies to integrate any service into existing large-scale Hadoop clusters with ease.
Apache Ambari manages the Hadoop cluster and solves this problem by extending the stack with add-on services, which can be a new Apache project, a different Hadoop file system, or an internal tool. This talk covers how to create a service definition in Ambari to manage lifecycle commands and configs, plus advanced topics like packaging, installing from multiple repositories, recommending and validating configs using Service Advisor, running custom commands, defining dependencies on configs and other services, and more. We will also cover how to create custom metrics and dashboards using the Ambari Metrics System and Grafana, generating alerts, and enabling security by authenticating with Kerberos.
Further, we will discuss the future of service definitions and how Ambari 3.0 will support custom services through Management Packs to enable Hadoop vendors to release software faster.
Speaker
Jayush Luniya, Principal Software Engineer, Hortonworks
Apache HBase™ is the Hadoop database, a distributed, scalable, big data store. It is a column-oriented database management system that runs on top of HDFS.
Apache HBase is an open source NoSQL database that provides real-time read/write access to large data sets. HBase is natively integrated with Hadoop and works seamlessly alongside other data access engines through YARN.
Anoop Sam John and Ramkrishna Vasudevan (Intel)
HBase provides an LRU based on heap cache but its size (and so the total data size that can be cached) is limited by Java’s max heap space. This talk highlights our work under HBASE-11425 to allow the HBase read path to work directly from the off-heap area.
Tuning Apache Ambari performance for Big Data at scale with 3000 agents (DataWorks Summit)
Apache Ambari manages Hadoop at large scale, and it becomes increasingly difficult for cluster admins to keep the machinery running smoothly as data grows and nodes scale from 30 to 3000 agents. To test at scale, Ambari has a Performance Stack that allows a VM to host as many as 50 Ambari Agents. The simulated stack and 50 Agents per VM can stress-test Ambari Server with the same load as a 3000-node cluster. This talk will cover how to tune the performance of Ambari and MySQL, and share performance benchmarks for features like deploy times, bulk operations, installation of bits, and Rolling & Express Upgrade. Moreover, the speaker will show how to use the Ambari Metrics System and Grafana to plot performance, detect anomalies, and offer tips on how to improve performance for a more responsive experience. Lastly, the talk will discuss roadmap features in Ambari 3.0 for improving performance and scale.
This talk delves into the many ways a user can use HBase in a project. Lars will look at many practical examples based on real applications in production, for example at Facebook and eBay, and the right approach for those wanting to find their own implementation. He will also discuss advanced concepts such as counters, coprocessors, and schema design.
Data in Hadoop is getting bigger every day, consumers of the data are growing, and organizations are now looking at making their Hadoop clusters compliant with federal regulations and commercial demands. Apache Ranger simplifies the management of security policies across all components in Hadoop and provides granular access controls to data.
The deck describes what security tools are available in Hadoop and their purpose, then moves on to discuss Apache Ranger in detail.
Apache Knox Gateway "Single Sign On" expands the reach of the Enterprise UsersDataWorks Summit
Apache Knox Gateway is a proxy for interacting with Apache Hadoop clusters in a secure way, providing authentication, service-level authorization, and many other extensions to secure any HTTP interactions in your cluster. One main feature of Apache Knox Gateway is the ability to extend the reach of your REST APIs to the internet while still securing your cluster and working with Kerberos. Recent contributions to the Apache Knox community have added support for Single Sign On (SSO) based on Pac4j 1.8.9, a very powerful security engine which provides SSO support through SAML2, OAuth, OpenID, and CAS. In addition, through recent community contributions, Apache Ambari and Apache Ranger can now also provide SSO authentication through Knox. This paper will discuss the architecture of Knox SSO, explain how enterprise users could benefit from this feature, and present enterprise use cases for Knox SSO, including integration with open source Shibboleth, ADFS Windows Server IdP support, and the Okta cloud IdP.
Apache Hive is a rapidly evolving project which continues to enjoy great adoption in the big data ecosystem. As Hive continues to grow its support for analytics, reporting, and interactive query, the community is hard at work in improving it along with many different dimensions and use cases. This talk will provide an overview of the latest and greatest features and optimizations which have landed in the project over the last year. Materialized views, the extension of ACID semantics to non-ORC data, and workload management are some noteworthy new features.
We will discuss optimizations which provide major performance gains as well as integration with other big data technologies such as Apache Spark, Druid, and Kafka. The talk will also provide a glimpse of what is expected to come in the near future.
Strongly Consistent Global Indexes for Apache Phoenix (YugabyteDB)
Presentation by Kadir Ozdemir, Principal Architect - Salesforce, recorded at Distributed SQL Summit on Sept 20, 2019.
https://vimeo.com/362358494
distributedsql.org/
Apache Phoenix: Past, Present and Future of SQL over HBase (enissoz)
HBase, as the NoSQL database of choice in the Hadoop ecosystem, has already proven itself at scale and in many mission-critical workloads in hundreds of companies. Phoenix, as the SQL layer on top of HBase, has increasingly become the tool of choice as the perfect complement to HBase. Phoenix is now being used more and more for super-low-latency querying and fast analytics across a large number of users in production deployments. In this talk, we will cover what makes Phoenix attractive among current and prospective HBase users, like SQL support, JDBC, data modeling, secondary indexing, UDFs, and also go over recent improvements like Query Server, ODBC drivers, ACID transactions, Spark integration, etc. We will conclude by looking into items in the pipeline and how Phoenix and HBase interact with other engines like Hive and Spark.
Performance Tuning RocksDB for Kafka Streams’ State Stores (Confluent)
Performance Tuning RocksDB for Kafka Streams’ State Stores, Bruno Cadonna, Contributor to Apache Kafka & Software Developer at Confluent and Dhruba Borthakur, CTO & Co-founder Rockset
Meetup link: https://www.meetup.com/Berlin-Apache-Kafka-Meetup-by-Confluent/events/273823025/
Extending Spark for Qbeast's SQL Data Source with Paola Pardo and Cesare Cugnasco (Qbeast)
Slides of the Barcelona Spark meetup of the 24th of October 2019. The recording is available at https://www.youtube.com/watch?v=eCoCcBH4hIU.
Abstract
One of the key strengths of Spark is its flexibility as it integrates with dozens of different storage systems and file formats. However, it is not the same reading from a CSV file, or a SQL database, or an exotic stratified sampled multidimensional database. And finding the right balance between modularity and flexibility is not easy!
In this presentation, we will talk about the evolution of Spark's DataSource API and how it integrates with the SQL optimizer, highlighting how we can make much faster queries with logical and physical plans that better integrate with the storage. From theory to practice, we will then discuss how we extended Spark's internals and built a new source integration that allows the push-down of both sampling and multidimensional filtering.
About the speakers:
Paola Pardo is a Computer Engineer from Barcelona. She graduated in Computer Engineering last summer at the Technical University of Catalonia with a thesis focused on data storage push-down optimization based on Apache Spark. She is currently working at the Barcelona Supercomputing Center and at its spin-off Qbeast, developing a Qbeast-Spark connector.
Cesare Cugnasco holds a PhD in Computer Architecture and is a researcher at the Barcelona Supercomputing Center. His research focuses on NoSQL databases, distributed computing, and high-performance storage. He invented and patented a new database architecture for Big Data, and he is building a spin-off for its commercialization.
Hadoop became the most common system for storing big data.
With Hadoop, many supporting systems emerged to complete the aspects that are missing in Hadoop itself.
Together they form a big ecosystem.
This presentation covers some of those systems.
Since it is not possible to cover too many in one presentation, I tried to focus on the most famous/popular ones and on the most interesting ones.
How to make data available for analytics ASAP (MariaDB plc)
There are many ways to import data into MariaDB ColumnStore, including command-line tools for importing files. However, a combination of bulk and streaming data adapters makes it easy to import data on demand, without having to wait for a scheduled job. MariaDB's Jens Röwekamp and Markus Mäkelä show all of the ways to import data, from manual imports to more advanced options such as C++, Java and Python data adapters, Apache Spark, change-data-capture streams and Apache Kafka message queues – all of which can be used to import data on demand so it’s available for analytics as fast as possible.
Apache Big Data EU 2016: Building Streaming Applications with Apache Apex (Apache Apex)
Stream processing applications built on Apache Apex run on Hadoop clusters and typically power analytics use cases where availability, flexible scaling, high throughput, low latency and correctness are essential. These applications consume data from a variety of sources, including streaming sources like Apache Kafka, Kinesis or JMS, file based sources or databases. Processing results often need to be stored in external systems (sinks) for downstream consumers (pub-sub messaging, real-time visualization, Hive and other SQL databases etc.). Apex has the Malhar library with a wide range of connectors and other operators that are readily available to build applications. We will cover key characteristics like partitioning and processing guarantees, generic building blocks for new operators (write-ahead-log, incremental state saving, windowing etc.) and APIs for application specification.
Speakers: Chris Larsen (Limelight Networks) and Benoit Sigoure (Arista Networks)
The OpenTSDB community continues to grow, with users looking to store massive amounts of time-series data in a scalable manner. In this talk, we will discuss a number of use cases and best practices around naming schemas and HBase configuration. We will also review OpenTSDB 2.0's new features, including the HTTP API, plugins, annotations, millisecond support, and metadata, as well as what's next on the roadmap.
Starting with v4, modules hold the promise of changing how Redis is used and developed for. Enabling custom data types and commands, Redis Modules build upon and extend the core functionality to handle any use case.
The video of the webinar given with these slides is at: https://youtu.be/EglSYFodaqw
Hands-on Session on Big Data processing using Apache Spark and Hadoop Distributed File System
This is the first session in the series of "Apache Spark Hands-on"
Topics Covered
+ Introduction to Apache Spark
+ Introduction to RDD (Resilient Distributed Datasets)
+ Loading data into an RDD
+ RDD Operations - Transformation
+ RDD Operations - Actions
+ Hands-on demos using CloudxLab
Flink Forward SF 2017: Timo Walther - Table & SQL API – unified APIs for batch and streaming data (Flink Forward)
SQL is undoubtedly the most widely used language for data analytics. It is declarative and can be optimized and efficiently executed by most query processors. Therefore the community has made an effort to add relational APIs to Apache Flink: a standard SQL API and a language-integrated Table API. Both APIs are semantically compatible and share the same optimization and execution path based on Apache Calcite. Since Flink supports both stream and batch processing, and many use cases require both kinds of processing, we aim for a unified relational layer. In this talk we will look at the current API capabilities, find out what's under the hood of Flink's relational APIs, and give an outlook on future features such as dynamic tables, Flink's way of converting streams into tables and vice versa by leveraging the stream-table duality.
Chapel-on-X: Exploring Tasking Runtimes for PGAS Languages (Akihiro Hayashi)
With the shift to exascale computer systems, the importance of productive programming models for distributed systems is increasing. Partitioned Global Address Space (PGAS) programming models aim to reduce the complexity of writing distributed-memory parallel programs by introducing global operations on distributed arrays, distributed task parallelism, directed synchronization, and mutual exclusion. However, a key challenge in the application of PGAS programming models is the improvement of compilers and runtime systems. In particular, one open question is how runtime systems meet the requirement of exascale systems, where a large number of asynchronous tasks are executed.
While there are various tasking runtimes such as Qthreads, OCR, and HClib, there is no existing comparative study on PGAS tasking/threading runtime systems. To explore runtime systems for PGAS programming languages, we have implemented OCR-based and HClib-based Chapel runtimes and evaluated them with an initial focus on tasking and synchronization implementations. The results show that our OCR- and HClib-based implementations can improve the performance of PGAS programs compared to the existing Qthreads backend of Chapel.
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... (pchutichetpong)
M Capital Group (“MCG”) expects demand to grow and supply to evolve, facilitated through institutional investment rotating out of offices and into work from home (“WFH”), while the need for data storage keeps expanding as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Techniques to optimize the PageRank algorithm usually fall into two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, i.e., those with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance, since the final ranks of chain nodes can be easily calculated; this could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
4. Overview (Apache Phoenix)
● Began as an internal project by the company (salesforce.com).
● JAN 2014: Originally open-sourced on GitHub.
● MAY 2014: Became a top-level Apache project.
6. Overview (Apache Phoenix)
● Support for late-bound, schema-on-read.
● SQL and JDBC API support.
● Access to data stored and produced in other components such as Apache Spark and Apache Hive.
● Developed as part of Apache Hadoop.
● Runs on top of the Hadoop Distributed File System (HDFS).
● HBase scales linearly and shards automatically.
7. Overview (Apache Phoenix)
● Apache Phoenix is an add-on for Apache HBase that provides a programmatic ANSI SQL interface.
● Implements best-practice optimizations to enable software engineers to develop next-generation data-driven applications based on HBase.
● Create and interact with tables in the form of typical DDL/DML statements using the standard JDBC API.
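To make the last point concrete, here is a minimal sketch, not taken from the deck, of creating a table, upserting a row, and querying it through Phoenix's JDBC driver. The ZooKeeper quorum "localhost" and the table and column names are assumptions for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Phoenix JDBC URLs take the form jdbc:phoenix:<zookeeper quorum>
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // DDL through the standard JDBC API
            stmt.execute("CREATE TABLE IF NOT EXISTS server_metrics ("
                + "host VARCHAR NOT NULL, date DATE NOT NULL, response_time BIGINT "
                + "CONSTRAINT pk PRIMARY KEY (host, date))");
            // DML: Phoenix uses UPSERT rather than INSERT
            stmt.executeUpdate("UPSERT INTO server_metrics VALUES ('sf1-host', CURRENT_DATE(), 42)");
            conn.commit(); // mutations are buffered client-side until commit
            // Queries come back as ordinary JDBC result sets
            try (ResultSet rs = stmt.executeQuery("SELECT host, response_time FROM server_metrics")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
                }
            }
        }
    }
}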
8. Overview (Apache Phoenix)
● Written in Java and SQL.
● Atomicity, Consistency, Isolation and Durability (ACID).
● Fully integrated with other Hadoop products such as Spark, Hive, Pig, Flume, and MapReduce.
9. Overview (Apache Phoenix)
● Included in:
○ Cloudera Data Platform 7.0 and above.
○ Hortonworks distribution for HDP 2.1 and above.
○ Available as part of Cloudera Labs.
○ Part of the Hadoop ecosystem.
10. Overview (SQL Support)
● Compiles SQL into HBase scans and orchestrates their execution.
● Produces a JDBC result set.
● All standard SQL query constructs are supported.
11. Overview (SQL Support)
● Direct use of the HBase API, along with coprocessors and custom filters.
● Performance:
○ Milliseconds for small queries.
○ Seconds for tens of millions of rows.
12. Overview (Bulk Loading)
● MapReduce-based:
○ CSV and JSON.
○ Via the Phoenix MapReduce library.
● Single-threaded:
○ CSV.
○ Via PSQL (psql.py).
○ HBase on the local machine.
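The MapReduce-based CSV loader is normally launched from the command line via "hadoop jar", but it can also be driven from Java. A hedged sketch follows; the target table name and the HDFS input path are assumptions, and the flags should be checked against your Phoenix version.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        // Runs the MapReduce job that writes HFiles and hands them off to HBase
        int exitCode = ToolRunner.run(new Configuration(), new CsvBulkLoadTool(),
            new String[] {
                "--table", "SERVER_METRICS",          // target Phoenix table (assumed)
                "--input", "/tmp/server_metrics.csv"  // CSV input in HDFS (assumed)
            });
        System.exit(exitCode);
    }
}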
13. Overview (User Defined Functions)
● Temporary UDFs for sessions only.
● Permanent UDFs stored in the system functions table.
● UDFs can be used in SQL and indexes.
● Tenant-specific UDF usage and support.
● Updating a UDF jar requires a cluster bounce.
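As an illustration of how a permanent UDF is registered, here is a hedged sketch over JDBC; the function name, implementing class, and jar location are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class UdfSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Permanent UDF: recorded in the SYSTEM.FUNCTION table; the jar
            // must live where the region servers can load it (e.g. HDFS)
            stmt.execute("CREATE FUNCTION my_reverse(varchar) RETURNS varchar "
                + "AS 'com.example.MyReverseFunction' "
                + "USING JAR 'hdfs://namenode:8020/hbase/lib/my-udfs.jar'");
            // Once registered, the UDF is usable in queries and in indexes
            stmt.executeQuery("SELECT my_reverse(host) FROM server_metrics");
        }
    }
}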
14. Overview (Transactions)
● Uses Apache Tephra for cross-row/cross-table ACID support.
● Create tables with the flag 'transactional=true'.
● Enable transactions and the snapshot directory, and set the timeout value in 'hbase-site.xml'.
● Transactions start with a statement against a table.
● Transactions end with a commit or rollback.
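A minimal sketch of that lifecycle, assuming the Tephra transaction manager is running and transactions are enabled in hbase-site.xml; the table and column names are made up.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TransactionSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS accounts "
                    + "(id BIGINT PRIMARY KEY, balance BIGINT) TRANSACTIONAL=true");
                // The transaction starts implicitly with the first statement...
                stmt.executeUpdate("UPSERT INTO accounts VALUES (1, 100)");
                stmt.executeUpdate("UPSERT INTO accounts VALUES (2, 200)");
                conn.commit();   // ...and ends here: both rows appear atomically
            } catch (Exception e) {
                conn.rollback(); // on failure, neither row is applied
                throw e;
            }
        }
    }
}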
15. Overview (Transactions)
● Applications typically let HBase manage timestamps.
● In case the application needs to control the timestamp, the 'CurrentSCN' property must be specified at connection time.
● 'CurrentSCN' controls the timestamp for any DDL, DML, or query.
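A hedged sketch of pinning a connection's timestamp with CurrentSCN; the connection URL and the one-day offset are arbitrary choices for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CurrentScnSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        long ts = System.currentTimeMillis() - 24L * 60 * 60 * 1000; // one day ago
        props.setProperty("CurrentSCN", Long.toString(ts));
        // Every DDL, DML, or query on this connection runs at timestamp ts,
        // so reads see the table as of that moment (a "snapshot" query)
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // ... issue statements here ...
        }
    }
}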
16. Overview (Schema)
● The table metadata is stored in a versioned HBase table (up to 1000 versions).
● 'UPDATE_CACHE_FREQUENCY' allows the user to declare how often the server will be checked for metadata updates. Values:
○ ALWAYS
○ NEVER
○ A millisecond value
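For example, the frequency can be declared in the table DDL; a minimal sketch follows (the table name and the 15-minute value are arbitrary).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CacheFrequencySketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Re-check the server for metadata changes at most every 15 minutes;
            // ALWAYS or NEVER could be used in place of the millisecond value
            stmt.execute("CREATE TABLE IF NOT EXISTS events "
                + "(id BIGINT PRIMARY KEY, payload VARCHAR) "
                + "UPDATE_CACHE_FREQUENCY=900000");
        }
    }
}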
17. Overview (Schema)
● A Phoenix table can be:
○ Built from scratch.
○ Mapped to an existing HBase table:
■ Read-Write Table
■ Read-Only View
18. Overview (Schema)
Read-Write Table:
○ Column families will be created automatically if they don't already exist.
○ An empty key value will be added to the first column family of each existing row to minimize the size of the projection for queries.
19. Overview (Schema)
Read-Only View:
○ All column families must already exist.
○ The only change made to the HBase table is the addition of the Phoenix coprocessors used for query processing.
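A hedged sketch of both mapping styles over JDBC, assuming pre-existing HBase tables named "t1" and "t2" with a column family "cf" (all names hypothetical).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MappingSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Read-write table over existing HBase table "t1": missing column
            // families are created, and an empty key value is added per row
            stmt.execute("CREATE TABLE \"t1\" "
                + "(pk VARCHAR PRIMARY KEY, \"cf\".\"val\" VARCHAR)");
            // Read-only view over existing HBase table "t2": all column families
            // must already exist; only the query coprocessors are added
            stmt.execute("CREATE VIEW \"t2\" "
                + "(pk VARCHAR PRIMARY KEY, \"cf\".\"val\" VARCHAR)");
        }
    }
}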
33. Transactions: Row Count

@Override
public void getRowCount(ResultSet resultSet) throws SQLException {
    // Unwrap the Phoenix-specific result set to reach the current row's cells.
    Tuple row = resultSet.unwrap(PhoenixResultSet.class).getCurrentRow();
    Cell kv = row.getValue(0);
    ImmutableBytesWritable tmpPtr = new ImmutableBytesWritable(
        kv.getValueArray(), kv.getValueOffset(), kv.getValueLength());
    // A single Cell will be returned with the count(*) - we decode that here
    rowCount = PLong.INSTANCE.getCodec().decodeLong(tmpPtr, SortOrder.getDefault());
}
34. Transactions: Internal State

private void changeInternalStateForTesting(PhoenixResultSet rs) {
    // Get and set the internal state for testing purposes.
    ReadMetricQueue testMetricsQueue = new TestReadMetricsQueue(LogLevel.OFF, true);
    StatementContext ctx = (StatementContext) Whitebox.getInternalState(rs, "context");
    Whitebox.setInternalState(ctx, "readMetricsQueue", testMetricsQueue);
    Whitebox.setInternalState(rs, "readMetricsQueue", testMetricsQueue);
}
38. Capabilities
● Secondary indexes:
○ Boost the speed of queries without relying on specific row-key designs.
○ Enable users to use star schemas.
○ Leverage SQL tools and online analytics.
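As an illustration, a minimal sketch of creating a secondary index so a query filtering and sorting on a non-row-key column (like the "longest GC times" scenario later in the deck) can avoid a full table scan; the index, table, and column names are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Global index, sorted by gc_time descending; primary key columns
            // are carried into the index rows automatically
            stmt.execute("CREATE INDEX idx_gc_time ON server_metrics "
                + "(gc_time DESC)");
            // CREATE LOCAL INDEX ... would favor write-heavy workloads instead
        }
    }
}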
39. Capabilities
● Row timestamp column.
● Sets minimum and maximum time ranges for scans.
● Improves performance, especially when querying the tail end of the data.
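A hedged sketch of declaring a row timestamp column at table creation (names are hypothetical): the designated DATE primary-key column maps to the native HBase cell timestamp, letting Phoenix bound scans by time range.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RowTimestampSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // "created" maps to the native HBase timestamp via ROW_TIMESTAMP
            stmt.execute("CREATE TABLE IF NOT EXISTS metric_log ("
                + "created DATE NOT NULL, host VARCHAR NOT NULL, val BIGINT "
                + "CONSTRAINT pk PRIMARY KEY (created ROW_TIMESTAMP, host))");
        }
    }
}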
42. Scenarios (Chart Response Time Per Cluster)

SELECT substr(host, 1, 3), trunc(date, 'DAY'), avg(response_time)
FROM server_metrics
WHERE date > CURRENT_DATE() - 7
  AND substr(host, 1, 3) IN ('sf1', 'sf3', 'sf7')
GROUP BY substr(host, 1, 3), trunc(date, 'DAY')
43. Scenarios (Find 5 Longest GC Times)

SELECT host, date, gc_time
FROM server_metrics
WHERE date > CURRENT_DATE() - 7
  AND substr(host, 1, 3) IN ('sf1', 'sf3', 'sf7')
ORDER BY gc_time DESC
LIMIT 5
Apache Phoenix -> A scale-out RDBMS with evolutionary schema built on Apache HBase
Internal project born out of a need to support a higher-level, well-understood SQL language.
Apache HBase -> open-source non-relational distributed database modeled after Google's Bigtable and written in Java. Used for random, real-time read/write access to Big Data. A column-oriented, NoSQL database built on top of Hadoop.
Apache Phoenix -> Open source, massively parallel relational database engine supporting Online Transactional Processing (OLTP) and operational analytics in Hadoop. Provides a JDBC driver enabling users to create, delete, and alter SQL tables, views, and indexes, and to query data through SQL.
Apache Phoenix is a relational layer over HBase.
A SQL skin for HBase.
Provides a JDBC driver that hides the intricacies of the NoSQL store.
ACID is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. All changes to data are performed as if they are a single operation.
1. Atomicity preserves the “completeness” of the business process (all or nothing behavior)
2. Consistency refers to the state of the data both before and after the transaction is executed (the use of a transaction maintains the consistency of the state of the data)
3. Isolation means that transactions can run at the same time as if there were no concurrency (a locking mechanism is required)
4. Durability refers to the impact of an outage or a failure on a running transaction (data survives any failures)
To summarize, a transaction will either complete, producing correct results, or terminate, with no effect.
Bulk loading for tables created in Phoenix is easier compared to tables created in the HBase shell.
(Server Bounce) An administrator/technician removes power to the device in a "non-controlled shutdown", the "down" part of the bounce. Once the server is completely off and all activity has ceased, the administrator restarts the server.
Set the phoenix.transactions.enabled property to true, along with running the transaction manager (included in the distribution), to enable full ACID transactions (tables may optionally be declared as transactional).
A concurrency model is used to detect row level conflicts with first commit wins semantics. The later commit would produce an exception indicating that a conflict was detected.
A transaction is started implicitly when a transactional table is referenced in a statement, at which point no updates can be seen from other connections until either a commit or rollback occurs.
Non-transactional tables will not see their updates until after a commit has occurred.
Phoenix uses the value of this connection property as the max timestamp of scans.
Timestamps may not be controlled for transactional tables. Instead, the transaction manager assigns timestamps which become the HBase cell timestamps after a commit.
Timestamps are multiplied by 1,000,000 to ensure enough granularity for uniqueness across the cluster.
Snapshot queries over older data will pick up and use the correct schema based on the time of connection (Based on CurrentSCN).
Data updates include the addition or removal of a table column, or updates of table statistics.
1. The ALWAYS value will cause the client to check with the server each time a statement is executed that references a table (or once per commit for an UPSERT VALUES statement).
2. A millisecond value indicates how long the client will hold on to its cached version of the metadata before checking back with the server for updates.
From scratch -> HBase table and column families will be created automatically.
Mapped to existing -> The binary representation of the row key and key values must match that of the Phoenix data types
1. The primary use case for a VIEW is to transfer existing data into a Phoenix table.
A table can also be declared as salted to prevent HBase region hot spotting.
The table catalog argument in the metadata APIs is used to filter based on the tenant ID for multi-tenant tables.
2. Data modifications are not allowed on a VIEW, and query performance will likely be lower than with a TABLE.
Phoenix supports updatable views on top of tables, with the unique feature (leveraging the schemaless capabilities of HBase) of being able to add columns to them. All views share the same underlying physical HBase table and may even be indexed independently.
A multi-tenant view may add columns which are defined solely for that user.
Phoenix chunks up queries using guideposts, which means more threads working on a single region.
Phoenix runs queries in parallel on the client using a configurable number of threads. Aggregation is done in a coprocessor on the server side, reducing the amount of data that is returned to the client.
ETL is a type of data integration that refers to the three steps used to blend data from multiple sources. It's often used to build a data warehouse.
Data Manipulation Language (DML).
Data Definition Language (DDL).
For CREATE TABLE:
1. Any HBase metadata (table, column families) that doesn’t already exist will be created.
2. KEEP_DELETED_CELLS option is enabled to allow for flashback queries to work correctly.
3. An empty key value will also be added for each row so that queries behave as expected (without requiring all columns to be projected during scans).
For CREATE VIEW:
Instead, the existing HBase metadata must match the metadata specified in the DDL statement, otherwise a read-only table error is raised.
For UPSERT VALUES:
Use it multiple times before committing to batch mutations.
For UPSERT SELECT:
Configure phoenix.mutate.batchSize based on row size.
Setting auto-commit to true allows scans to write directly to HBase, so the writes run on the server side when running UPSERT SELECT on the same table.
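A minimal sketch of the UPSERT VALUES batching pattern just described (table, columns, and batch size are arbitrary): with auto-commit off, mutations accumulate client-side and are flushed on each commit().

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchUpsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(false); // buffer mutations client-side
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPSERT INTO server_metrics VALUES (?, CURRENT_DATE(), ?)")) {
                for (int i = 0; i < 10_000; i++) {
                    ps.setString(1, "host-" + i);
                    ps.setLong(2, i);
                    ps.executeUpdate();
                    if (i % 1000 == 999) {
                        conn.commit(); // flush a batch of buffered mutations
                    }
                }
            }
            conn.commit(); // flush the remainder
        }
    }
}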
Enhance existing statistics collection by enabling further query optimizations based on the size and cardinality of the data.
Generate histograms to drive query optimization decisions such as secondary index usage and join ordering based on cardinalities to produce the most efficient query plan.
Secondary index types: global index (optimized for read-heavy use cases), local index (optimized for write-heavy, space-constrained use cases), and functional index (create an index on an arbitrary expression).
HBase tables are sorted maps.
A star schema is the simplest style of data mart schema (it separates business process data into facts); the approach is widely used to develop data warehouses and dimensional data marts.
The star schema consists of one or more fact tables referencing any number of dimension tables.
A fact table contains measurements, metrics, and facts about a business process, while a dimension table is a companion to the fact table containing descriptive attributes to be used for query constraining.
Types of dimension tables: Slowly Changing Dimension, Conformed Dimension, Junk Dimension, Degenerate Dimension, Role-Playing Dimension.
Maps HBase's native timestamp to a Phoenix column.
Takes advantage of various optimizations that HBase provides for time ranges.
ROW_TIMESTAMP needs to be a primary key column in a date or time format (see the documentation for details).
Only one primary key column can be designated as ROW_TIMESTAMP, declared upon table creation (no null or negative values allowed).
Cache content on the server through two main parts (SQL read, SQL write), serving end users and collecting content from content providers.