Netflix’s architecture involves thousands of microservices built to serve unique business needs. As this architecture grew, it became clear that data storage and query needs were unique to each area; there is no single silver bullet that fits the data needs of all microservices. CDE (the Cloud Database Engineering team) offers polyglot persistence, which promises ideal matches between problem spaces and persistence solutions. In this meetup you will get a deep dive into the self-service platform, our solution for repairing Cassandra data reliably across datacenters, Memcached Flash and cross-region replication, and graph database evolution at Netflix.
Delta Lake is an open-source project that brings new capabilities for transactions, version control and indexing to your data lakes. We uncover how Delta Lake benefits you and why it matters. Through this session, we showcase some of its benefits and how they can improve your modern data engineering pipelines. Delta Lake provides snapshot isolation, which helps with concurrent read/write operations and enables efficient inserts, updates, deletes, and rollbacks. It allows background file optimization through compaction and z-order partitioning for better performance. In this presentation, we will learn about Delta Lake's benefits, how it solves common data lake challenges, and, most importantly, the new Delta Time Travel capability.
Introduction and Overview of Apache Kafka, TriHUG, July 23, 2013 (mumrah)
Apache Kafka is a distributed publish-subscribe messaging system that allows both publishing and subscribing to streams of records. It uses a distributed commit log that provides low latency and high throughput for handling real-time data feeds. Key features include persistence, replication, partitioning, and clustering.
Redis is an in-memory key-value store that is often used as a database, cache, and message broker. It supports various data structures like strings, hashes, lists, sets, and sorted sets. While data is stored in memory for fast access, Redis can also persist data to disk. It is widely used by companies like GitHub, Craigslist, and Engine Yard to power applications with high performance needs.
This document discusses using Apache Cassandra for business intelligence, reporting and analytics. It covers:
- Data modeling and querying Cassandra data using CQL
- Accessing Cassandra data through drivers, ODBC/JDBC, and analytics frameworks like Spark and Hadoop
- Doing reporting, dashboards, and analytics on Cassandra data using CQL, Solr, Spark, and BI tools
- Capabilities of DataStax Enterprise for integrated search, batch analytics, and real-time analytics on Cassandra
- Example architectures that isolate workloads and handle hot vs cold data
Apache Iceberg - A Table Format for Huge Analytic Datasets (Alluxio, Inc.)
Data Orchestration Summit
www.alluxio.io/data-orchestration-summit-2019
November 7, 2019
Apache Iceberg - A Table Format for Huge Analytic Datasets
Speaker:
Ryan Blue, Netflix
For more Alluxio events: https://www.alluxio.io/events/
This document provides an overview of large scale graph analytics and JanusGraph. It discusses graph databases and their use cases. JanusGraph is presented as an open source graph database that can scale to billions of vertices and edges across multiple storage backends like HBase, Cassandra and Bigtable. It uses the TinkerPop framework and Gremlin query language. JanusGraph supports ACID transactions, external indices, and evolving schemas. Example graph queries are demonstrated using the Gremlin console.
This is the presentation I made at JavaDay Kiev 2015 regarding the architecture of Apache Spark. It covers the memory model, the shuffle implementations, data frames and some other high-level stuff, and can be used as an introduction to Apache Spark.
MongoDB is an open-source, document-oriented database that provides high performance and horizontal scalability. It uses a document-model where data is organized in flexible, JSON-like documents rather than rigidly defined rows and tables. Documents can contain multiple types of nested objects and arrays. MongoDB is best suited for applications that need to store large amounts of unstructured or semi-structured data and benefit from horizontal scalability and high performance.
This document provides an overview and summary of TiDB, an open-source distributed SQL database compatible with MySQL. It discusses TiDB's architecture which includes TiDB for the SQL layer, TiKV for storage, and PD for placement driving. TiDB provides features like horizontal scalability, distributed transactions, and high availability. Example use cases are also presented, like Mobike's use of TiDB for locking/unlocking bikes and real-time analytics of bike usage data across 200 cities in China.
Trino (formerly known as PrestoSQL) is an open source distributed SQL query engine for running fast analytical queries against data sources of all sizes. Some key updates since being rebranded from PrestoSQL to Trino include new security features, language features like window functions and temporal types, performance improvements through dynamic filtering and partition pruning, and new connectors. Upcoming improvements include support for MERGE statements, MATCH_RECOGNIZE patterns, and materialized view enhancements.
The document discusses the challenges of modern data, analytics, and AI workloads. Most enterprises struggle with siloed data systems that make integration and productivity difficult. The future of data lies with a data lakehouse platform that can unify data engineering, analytics, data warehousing, and machine learning workloads on a single open platform. The Databricks Lakehouse platform aims to address these challenges with its open data lake approach and capabilities for data engineering, SQL analytics, governance, and machine learning.
Introduction to SQL Analytics on Lakehouse Architecture (Databricks)
This document provides an introduction and overview of SQL Analytics on Lakehouse Architecture. It discusses the instructor Doug Bateman's background and experience. The course goals are outlined as describing key features of a data Lakehouse, explaining how Delta Lake enables a Lakehouse architecture, and defining features of the Databricks SQL Analytics user interface. The course agenda is then presented, covering topics on Lakehouse Architecture, Delta Lake, and a Databricks SQL Analytics demo. Background is also provided on Lakehouse architecture, how it combines the benefits of data warehouses and data lakes, and its key features.
Hudi: Large-Scale, Near Real-Time Pipelines at Uber with Nishith Agarwal and ... (Databricks)
Uber has real needs to provide faster, fresher data to data consumers & products, running hundreds of thousands of analytical queries every day. Uber engineers will share the design, architecture & use cases of the second generation of ‘Hudi’, a self-contained Apache Spark library to build large-scale analytical datasets designed to serve such needs and beyond. Hudi (formerly Hoodie) was created to effectively manage petabytes of analytical data on distributed storage, while supporting fast ingestion & queries. In this talk, we will discuss how we leveraged Spark as a general-purpose distributed execution engine to build Hudi, detailing tradeoffs & operational experience. We will also show how to ingest data into Hudi using Spark Datasource/Streaming APIs and build Notebooks/Dashboards on top using Spark SQL.
Squirreling Away $640 Billion: How Stripe Leverages Flink for Change Data Cap... (Flink Forward)
Flink Forward San Francisco 2022.
Being in the payments space, Stripe requires strict correctness and freshness guarantees. We rely on Flink as the natural solution for delivering on this in support of our Change Data Capture (CDC) infrastructure. We heavily rely on CDC as a tool for capturing data change streams from our databases without critically impacting database reliability, scalability, and maintainability. Data derived from these streams is used broadly across the business and powers many of our critical financial reporting systems totalling over $640 Billion in payment volume annually. We use many components of Flink’s flexible DataStream API to perform aggregations and abstract away the complexities of stream processing from our downstreams. In this talk, we’ll walk through our experience from the very beginning to what we have in production today. We’ll share stories around the technical details and trade-offs we encountered along the way.
by
Jeff Chao
More and more organizations are moving their ETL workloads to a Hadoop-based ELT grid architecture. Hadoop's inherent capabilities, especially its ability to do late binding, address some of the key challenges with traditional ETL platforms. In this presentation, attendees will learn the key factors, considerations and lessons around ETL for Hadoop: pros and cons of different extract and load strategies, best ways to batch data, buffering and compression considerations, leveraging HCatalog, data transformation, integration with existing data transformations, advantages of different ways of exchanging data, and leveraging Hadoop as a data integration layer. This is an extremely popular presentation around ETL and Hadoop.
Data Lakehouse Symposium | Day 1 | Part 2 (Databricks)
The world of data architecture began with applications. Next came data warehouses. Then text was organized into a data warehouse.
Then one day the world discovered a whole new kind of data that was being generated by organizations. The world found that machines generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
Delta Lake brings reliability, performance, and security to data lakes. It provides ACID transactions, schema enforcement, and unified handling of batch and streaming data to make data lakes more reliable. Delta Lake also features lightning fast query performance through its optimized Delta Engine. It enables security and compliance at scale through access controls and versioning of data. Delta Lake further offers an open approach and avoids vendor lock-in by using open formats like Parquet that can integrate with various ecosystems.
(BDT318) How Netflix Handles Up To 8 Million Events Per Second (Amazon Web Services)
In this session, Netflix provides an overview of Keystone, their new data pipeline. The session covers how Netflix migrated from Suro to Keystone, including the reasons behind the transition and the challenges of zero loss while processing over 400 billion events daily. The session covers in detail how they deploy, operate, and scale Kafka, Samza, Docker, and Apache Mesos in AWS to manage 8 million events & 17 GB per second during peak.
Introduction to memcached, a caching service designed for optimizing performance and scaling in the web stack, seen from the perspective of MySQL/PHP users. Given for 2nd-year students of the professional bachelor in ICT at Kaho St. Lieven, Gent.
Real-time Analytics with Trino and Apache Pinot (Xiang Fu)
Trino summit 2021:
Overview of Trino Pinot Connector, which bridges the flexibility of Trino's full SQL support to the power of Apache Pinot's realtime analytics, giving you the best of both worlds.
This document discusses how MongoDB can help enterprises meet modern data and application requirements. It outlines the many new technologies and demands placing pressure on enterprises, including big data, mobile, cloud computing, and more. Traditional databases struggle to meet these new demands due to limitations like rigid schemas and difficulty scaling. MongoDB provides capabilities like dynamic schemas, high performance at scale through horizontal scaling, and low total cost of ownership. The document examines how MongoDB has been successfully used by enterprises for use cases like operational data stores and as an enterprise data service to break down silos.
Real-Life Use Cases & Architectures for Event Streaming with Apache Kafka (Kai Wähner)
Streaming all over the World: Real-Life Use Cases & Architectures for Event Streaming with Apache Kafka.
Learn about various case studies for event streaming with Apache Kafka across industries. The talk explores architectures for real-world deployments from Audi, BMW, Disney, Generali, Paypal, Tesla, Unity, Walmart, William Hill, and more. Use cases include fraud detection, mainframe offloading, predictive maintenance, cybersecurity, edge computing, track&trace, live betting, and much more.
Considerations for Data Access in the Lakehouse (Databricks)
Organizations are increasingly exploring lakehouse architectures with Databricks to combine the best of data lakes and data warehouses. Databricks SQL Analytics introduces new innovation on the “house” to deliver data warehousing performance with the flexibility of data lakes. The lakehouse supports a diverse set of use cases and workloads that require distinct considerations for data access. On the lake side, tables with sensitive data require fine-grained access control that are enforced across the raw data and derivative data products via feature engineering or transformations. Whereas on the house side, tables can require fine-grained data access such as row level segmentation for data sharing, and additional transformations using analytics engineering tools. On the consumption side, there are additional considerations for managing access from popular BI tools such as Tableau, Power BI or Looker.
The product team at Immuta, a Databricks partner, will share their experience building data access governance solutions for lakehouse architectures across different data lake and warehouse platforms to show how to set up data access for common scenarios for Databricks teams new to SQL Analytics.
This document discusses using Apache Spark and Apache NiFi together for data lakes. It outlines the goals of a data lake including having a central data repository, reducing costs, enabling easier discovery and prototyping. It also discusses what is needed for a Hadoop data lake, including automation of pipelines, governance, and interactive data discovery. The document then provides an example ingestion project and describes using Apache Spark for functions like cleansing, validating, and profiling data. It outlines using Apache NiFi for the pipeline design with drag and drop functionality. Finally, it demonstrates ingesting and preparing data, data self-service and transformation, data discovery, and operational monitoring capabilities.
Modernizing to a Cloud Data Architecture (Databricks)
Organizations with on-premises Hadoop infrastructure are bogged down by system complexity, unscalable infrastructure, and the increasing burden on DevOps to manage legacy architectures. Costs and resource utilization continue to go up while innovation has flatlined. In this session, you will learn why, now more than ever, enterprises are looking for cloud alternatives to Hadoop and are migrating off of the architecture in large numbers. You will also learn how elastic compute models’ benefits help one customer scale their analytics and AI workloads and best practices from their experience on a successful migration of their data and workloads to the cloud.
Streaming Event Time Partitioning with Apache Flink and Apache Iceberg - Juli... (Flink Forward)
Netflix’s playback data records every user interaction with video on the service, from trailers on the home page to full-length movies. This is a critical dataset with high volume that is used broadly across Netflix, powering product experiences, AB test metrics, and offline insights. In processing playback data, we depend heavily on event-time partitioning to handle a long tail of late arriving events. In this talk, I’ll provide an overview of our recent implementation of generic event-time partitioning on high volume streams using Apache Flink and Apache Iceberg (Incubating). Built as configurable Flink components that leverage Iceberg as a new output table format, we are now able to write playback data and other large scale datasets directly from a stream into a table partitioned on event time, replacing the common pattern of relying on a post-processing batch job that “puts the data in the right place”. We’ll talk through what it took to apply this to our playback data in practice, as well as challenges we hit along the way and tradeoffs with a streaming approach to event-time partitioning.
Storm is a distributed and fault-tolerant realtime computation system. It was created at BackType/Twitter to analyze tweets, links, and users on Twitter in realtime. Storm provides scalability, reliability, and ease of programming. It uses components like Zookeeper, ØMQ, and Thrift. A Storm topology defines the flow of data between spouts that read data and bolts that process data. Storm guarantees processing of all data through its reliability APIs and guarantees no data loss even during failures.
This document discusses various technologies related to architectures, frameworks, infrastructure, services, data stores, analytics, logging and metrics. It covers Java 8 features like lambda expressions and method references. It also discusses microservices, Spring Boot basics and features, Gradle vs Maven, Swagger, AngularJS, Gulp, Jasmine, Karma, Nginx, CloudFront, Couchbase, Lambda Architecture, logging with Fluentd and Elasticsearch, metrics collection with Collectd and Statsd, and visualization with Graphite and Grafana.
Infrastructure Challenges in Scaling RAG with Custom AI models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
How does the Apache Pegasus (incubating) community develop at SensorsData (acelyc1112009)
A presentation in ApacheCon Asia 2022 from Dan Wang and Yingchun Lai.
Apache Pegasus is a horizontally scalable, strongly consistent and high-performance key-value store.
Know more about Pegasus https://pegasus.apache.org, https://github.com/apache/incubator-pegasus
The document summarizes new features in JBoss Operations Network (JBoss ON), including:
1) New chart types have been added to visualize metrics data. Storage nodes using Cassandra have also been added to improve scalability of storing large volumes of metrics data in a distributed manner.
2) Finer-grained bundle permissions allow restricting bundle creation, deployment and management based on resource groups and roles.
3) The REST API is now fully supported for both retrieving and inputting configuration data to enable out-of-band processing.
4) Upcoming versions of JBoss ON aim to reduce the agent footprint, improve support for EAP 6, and integrate with the Red Hat Access portal.
MongoDB has taken a clear lead in adoption among the new generation of databases, including the enormous variety of NoSQL offerings. A key reason for this lead has been a unique combination of agility and scalability. Agility provides business units with a quick start and flexibility to maintain development velocity, despite changing data and requirements. Scalability maintains that flexibility while providing fast, interactive performance as data volume and usage increase. We'll address the key organizational, operational, and engineering considerations to ensure that agility and scalability stay aligned at increasing scale, from small development instances to web-scale applications. We will also survey some key examples of highly-scaled customer applications of MongoDB.
Two popular tools for doing machine learning on top of the JVM ecosystem are H2O and SparkML. This presentation compares these two tools as machine learning libraries (it does not consider Spark's data munging capabilities). This work was done during June of 2018.
Life In The FastLane: Full Speed XPages (Ulrich Krause)
Using XPages out of the box lets you build good looking and well performing applications. However, as XPage applications become bigger and more complex, performance can become an issue and, if it comes to scalability and speed optimization, there are a couple of things to take into consideration.
Learn how to use partial refresh and partial execution mode and how to monitor its execution using a JSF LifeCycle monitor to avoid multiple re-calculation of controls. We will show tools that can allow you to profile your code, readily available from OpenNTF, along with a demonstration of how to use them to improve the speed of your code.
Still writing SSJS and encountering a significant slowdown when using script libraries? See how you can improve the speed of your application using Java instead of JS, JSON and even @formulas.
This document provides best practices for optimizing Blackboard Learn performance. It recommends deploying for performance from the start, optimizing platform components continuously through measurements, using scalable deployments like 64-bit architectures and virtualization, improving page responsiveness through techniques like gzip compression and image optimization, optimizing the web server, Java Virtual Machine, and database through configuration and tools. It emphasizes the importance of understanding resource utilization, wait events, execution plans, and statistics/histograms for database optimization.
The document provides an agenda for a performance optimization workshop for XPages applications to be held from March 11-13, 2013 at the Maritim Hotel in Gelsenkirchen, Germany. Topics to be covered include performance issues related to Java vs JavaScript, view navigation vs getting documents, string concatenation vs StringBuilder, partial updates/execution, scoped variables, and tools for profiling XPages applications. The presenter is listed as Ulrich Krause, an experienced Notes/Domino developer and IBM Champion.
Intro to Apache Apex - Next Gen Platform for Ingest and Transform (Apache Apex)
Introduction to Apache Apex - The next generation native Hadoop platform. This talk will cover details about how Apache Apex can be used as a powerful and versatile platform for big data processing. Common usage of Apache Apex includes big data ingestion, streaming analytics, ETL, fast batch alerts, real-time actions, threat detection, etc.
Bio:
Pramod Immaneni is Apache Apex PMC member and senior architect at DataTorrent, where he works on Apache Apex and specializes in big data platform and applications. Prior to DataTorrent, he was a co-founder and CTO of Leaf Networks LLC, eventually acquired by Netgear Inc, where he built products in core networking space and was granted patents in peer-to-peer VPNs.
This document discusses technologies for creating and maintaining web applications. It covers Ruby and the Rails framework. Ruby is designed to be programmer-focused rather than machine-focused, helping create dynamic and self-explained code. Rails enables quickly building web servers through conventions, reuse, single responsibility principles, and features that provide quick setup, deployment, and built-in scalability. The document also discusses front-end architecture with client-side logic, and Rails features for development, deployment, databases, assets, and multi-environment configuration.
Exploring Java Heap Dumps (Oracle Code One 2018), Ryan Cuprak
Memory leaks are not always simple or easy to find. Heap dumps from production systems are often gigantic (4+ gigs) with millions of objects in memory. Simple spot checking with traditional tools is woefully inadequate in these situations, especially with real data. Leaks can be entire object graphs with enormous amounts of noise. This session will show you how to build custom tools using the Apache NetBeans Profiler/Heapwalker APIs. Using these APIs, you can read and analyze Java heaps programmatically to ask really hard questions. This gives you the power to analyze complex object graphs with tens of thousands of objects in seconds.
This document summarizes a presentation on performance optimization on a budget. It discusses measuring and improving performance at the front-end through asset optimization, latency reduction, and client-side rendering. It also discusses measuring and optimizing performance at the backend through caching, databases, and server-side architecture. The document lists several free and paid tools for profiling, testing, and analyzing performance. It concludes with best practices for performance including establishing goals, architecture, testing, and an SDLC approach.
Low Latency Polyglot Model Scoring using Apache Apex (Apache Apex)
This document discusses challenges in building low-latency machine learning applications and how Apache Apex can help address them. It introduces Apache Apex as a distributed streaming engine and describes how it allows embedding models from frameworks like R, Python, H2O through custom operators. It provides various data and model scoring patterns in Apex like dynamic resource allocation, checkpointing, exactly-once processing to meet SLAs. The document also demonstrates techniques like canary deployment, dormant models, model ensembles through logical overlays on the Apex DAG.
1. The document discusses Microsoft's SCOPE analytics platform running on Apache Tez and YARN. It describes how Graphene was designed to integrate SCOPE with Tez to enable SCOPE jobs to run as Tez DAGs on YARN clusters.
2. Key components of Graphene include a DAG converter, Application Master, and tooling integration. The Application Master manages task execution and communicates with SCOPE engines running in containers.
3. Initial experience running SCOPE on Tez has been positive though challenges remain around scaling to very large workloads with over 15,000 parallel tasks and optimizing for opportunistic containers and Application Master recovery.
This document summarizes new features in .NET Framework 4.5, including improvements to WeakReferences, streams, ReadOnlyDictionary, compression, and large objects. It describes enhancements to server GC, asynchronous programming, the Task Parallel Library, ASP.NET, Entity Framework, WCF, WPF, and more. The .NET 4.5 update focuses on performance improvements, support for asynchronous code and parallel operations, and enabling modern app development patterns.
The document provides an introduction to Typesafe Activator and the Play Framework. It discusses how Activator is a tool that helps developers get started with the Typesafe Reactive Platform and Play applications. It also covers some core features of Play like routing, templates, assets, data access with Slick and JSON, and concurrency with Futures, Actors, and WebSockets.
This document discusses Typesafe's Reactive Platform and Apache Spark. It describes Typesafe's Fast Data strategy of using a microservices architecture with Spark, Kafka, HDFS and databases. It outlines contributions Typesafe has made to Spark, including backpressure support, dynamic resource allocation in Mesos, and integration tests. The document also discusses Typesafe's customer support and roadmap, including plans to introduce Kerberos security and evaluate Tachyon.
Lessons Learned From PayPal: Implementing Back-Pressure With Akka Streams And... (Lightbend)
Akka Streams and its amazing handling of streaming with back-pressure should be no surprise to anyone. But it takes a couple of use cases to really see it in action - especially in use cases where the amount of work continues to increase as you’re processing it. This is where back-pressure really shines.
In this talk for Architects and Dev Managers by Akara Sucharitakul, Principal MTS for Global Platform Frameworks at PayPal, Inc., we look at how back-pressure based on Akka Streams and Kafka is being used at PayPal to handle very bursty workloads.
In addition, Akara will also share experiences in creating a platform based on Akka and Akka Streams that currently processes over 1 billion transactions per day (on just 8 VMs), with the aim of helping teams adopt these technologies. In this webinar, you will:
*Start with a sample web crawler use case to examine what happens when each processing pass expands to a larger and larger workload to process.
*Review how we use the buffering capabilities in Kafka and the back-pressure with asynchronous processing in Akka Streams to handle such bursts.
*Look at lessons learned, plus some constructive “rants” about the architectural components, the maturity, or immaturity you’ll expect, and tidbits and open source goodies like memory-mapped stream buffers that can be helpful in other Akka Streams and/or Kafka use cases.
The Autobahn Has No Speed Limit - Your XPages Shouldn't Either! (Teamstudio)
Using XPages out of the box lets you build good-looking and well-performing applications. As XPage applications become bigger and more complex, performance can become an issue. There are several ways to improve scalability and performance that you should take into consideration. In this webinar, learn how to use partial refresh and partial execution mode and how to monitor its execution using a JSFLifeCycle monitor to avoid multiple re-calculations. See how readily available tools from OpenNTF will allow you to profile and analyze your code to improve the speed of your applications. Using Server Side Java Script and encountering a significant slow down when using Script Libraries? Learn how you can improve the speed of your application using JAVA instead of JavaScript, JSON and even @formulas.
3. Onboarding with graph performance
Exciting era with many new technologies!!
Onboarding users/developers to graph databases
• Typical focus areas: features and benefits, ease of use, suitability,
extensibility, APIs…
• Performance is one of the most important differentiators for any
application
• Is performance just for system testing?!
• Performance and scalability are key considerations for design,
development, and operations
Journey to JanusGraph with performance in mind!
• Check out graph structures and traversals
• Evaluate reads and writes in high volume
• Can JanusGraph scale out for future data/user growth?
• Look for bottlenecks and provide improvements
5. JanusGraph performance tool - Graph schema loader
Enables graph model creation via the Gremlin console or embedded in Java
• Use JSON to describe your graph model
• Support:
• Property
• Vertex
• Edge
• Index
Benefit: Create the schema on the fly without a single line of code! (An illustrative schema sketch follows.)
https://github.com/yhwang/janusgraph-utils
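For illustration, a schema description for this loader could look roughly like the JSON below. The field names and structure here are assumptions made for the example (modeled on the flight use case later in this deck); check the janusgraph-utils repository for the exact format it expects.

{
  "propertyKeys": [
    { "name": "code", "dataType": "String", "cardinality": "SINGLE" },
    { "name": "departureTime", "dataType": "Long", "cardinality": "SINGLE" }
  ],
  "vertexLabels": [
    { "name": "airport" }
  ],
  "edgeLabels": [
    { "name": "flight", "multiplicity": "MULTI" }
  ],
  "vertexIndexes": [
    { "name": "airportByCode", "propertyKeys": [ "code" ], "composite": true, "unique": false }
  ]
}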
6. Bulk load performance – Use case and data
Use cases and loaders:
• Data Migration → OneTimeBulkLoader
• Batch Update → IncrementalBulkLoader
Supported formats:
• Gryo (binary): 011110100101100101
• GraphSON: {“id”: 1, “label”:…}
• Script: 1:person:marko:29
Test data (OneTimeBulkLoader):
• 128GB GraphSON file
• 31 million vertices
• 38 million edges
• 3277 propertyKeys
• 5 vertex labels
• 3 edge labels
• 78.9 properties per edge
• 18.7 properties per vertex
7. Bulk load performance – Topology
• Spark - 1.6.1
• Standalone Cluster
• 2 worker nodes
• 8 executors per node
• 8 cores per executor
• 2GB per executor
• Hadoop - 2.7.2
• Use HDFS to store the GraphSON file
• Cassandra - 2.1.17
• 2-node cluster
• Tinkerpop3 – 3.2.3
• GraphComputer
• JanusGraph – 0.1.1
• JanusGraphBulkLoaderVertexProgram
• Astyanax persistence provider
[Diagram: a Spark cluster master and two worker nodes; each worker node runs Cassandra, the BulkLoader with an HDFS client, and 8 Spark executors. A code sketch of this bulk-load pattern follows.]
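For context, the TinkerPop 3.2-era bulk-loading pattern this topology runs looks roughly like the Java sketch below. The configuration file paths are placeholders, and the slides use JanusGraphBulkLoaderVertexProgram, a JanusGraph-specific variant of the generic BulkLoaderVertexProgram shown here.

import org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram;
import org.apache.tinkerpop.gremlin.process.computer.bulkloading.OneTimeBulkLoader;
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;

public class BulkLoadExample {
    public static void main(String[] args) throws Exception {
        // Hadoop graph that reads the GraphSON file from HDFS (placeholder properties file)
        Graph readGraph = GraphFactory.open("conf/hadoop-graphson.properties");
        // One-time bulk load that writes into the JanusGraph/Cassandra cluster (placeholder properties file)
        BulkLoaderVertexProgram blvp = BulkLoaderVertexProgram.build()
                .bulkLoader(OneTimeBulkLoader.class)
                .writeGraph("conf/janusgraph-cassandra.properties")
                .create(readGraph);
        // Execute the vertex program on the Spark cluster and wait for completion
        readGraph.compute(SparkGraphComputer.class).program(blvp).submit().get();
        readGraph.close();
    }
}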
11. Data import performance tooling- Graph data generator
A Java application
• Vertex and edge labels
• Number of vertices and edges
• Number of properties and data types
• Native and mixed index
• Relation patterns
• Super-nodes
• Generate graph-db schema in JSON
• Generate datamap JSON for BatchImporter
https://github.ibm.com/htchang/JanusGraphBench
12. Data import performance tooling - Graph data batch importer
A Java application to import CSV data into JanusGraph (sketched below)
Features:
• Multiple Threads
• Worker record size
• Commit size
• Import schema
• Import CSV to JanusGraph with configurable data mapping
https://github.com/sdmonov/JanusGraphBatchImporter
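As a hedged sketch of the core idea behind the importer (batching rows into transactions of a configurable commit size), the vertex-loading loop might look roughly like the Java below; the real tool also shards the CSV across multiple worker threads, each with its own transaction. The property names and CSV layout are illustrative assumptions.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import org.apache.tinkerpop.gremlin.structure.T;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.JanusGraphTransaction;

public class CsvVertexImport {
    public static void main(String[] args) throws Exception {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cassandra.properties"); // placeholder
        List<String> lines = Files.readAllLines(Paths.get("vertices.csv"));                // placeholder file
        int commitSize = 10_000;  // tune for your data; the optimal size depends on its characteristics
        JanusGraphTransaction tx = graph.newTransaction();
        int count = 0;
        for (String line : lines) {
            String[] cols = line.split(",");
            // assumed layout: label,code,name
            tx.addVertex(T.label, cols[0], "code", cols[1], "name", cols[2]);
            if (++count % commitSize == 0) {
                tx.commit();                  // flush this batch
                tx = graph.newTransaction();  // start the next batch
            }
        }
        tx.commit();  // commit the final partial batch
        graph.close();
    }
}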
13. Data import performance – results
[Chart: Batch Import Time vs. # of Records – vertex import time (min), edge import time (min), and CPU% plotted against database size.]
[Chart: Insert Rate vs. # of Properties Per Record – vertices/sec/core, edges/sec/core, and CPU% plotted against database size: medium (8 mil), 10x properties (80 mil), 50x properties (400 mil), 100x properties (800 mil).]
14. Data query performance – use case and data
Flight search
• All flights from airport A to airport B on a given date and time
• # of stops: non-stop, one-stop, two-stop… (see the example traversal below)
Data spec
• 600+ airports, 350K+ flight schedules
Performance analysis
• How many requests per second can JanusGraph handle?
• Can JanusGraph scale with future volume growth?
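To make the query shape concrete, a non-stop search might look roughly like the Gremlin (Java) below. The labels and property names (airport, code, flight, departureTime) are assumptions for illustration, not necessarily the schema used in this benchmark.

import java.util.List;
import org.apache.tinkerpop.gremlin.process.traversal.P;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__;
import org.apache.tinkerpop.gremlin.structure.Edge;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class FlightSearch {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cassandra.properties"); // placeholder
        GraphTraversalSource g = graph.traversal();

        long windowStart = 1500000000000L, windowEnd = 1500086400000L; // assumed epoch-millis search window

        // Non-stop: airport A -(flight)-> airport B within the requested time window
        List<Edge> nonStop = g.V().has("airport", "code", "A")
                .outE("flight").has("departureTime", P.between(windowStart, windowEnd))
                .where(__.inV().has("airport", "code", "B"))
                .toList();

        System.out.println("non-stop flights found: " + nonStop.size());
        graph.close();
    }
}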
15. Data query performance - Topology and configuration
JanusG
raph
Server
ElasticSearchCassandra
Storage Backend Node Index Backend Node
JanusGraph Node 1 JanusGraph Node 2
JanusG
raph
Server
JanusG
raph
Server
JanusG
raph
Server
JanusG
raph
Server
JanusG
raph
Server
Load Driver Node
jMeter (thread groups)
REST Calls, http post
• JanusGraph server with REST
• 1 or 10 instances per server
• Astyanax persistence provider
• threadPoolBoss: 2
• threadPoolWorker: 20
• Java heap: -Xms512m -Xmx8G
• Concurrent threads (users): 1, 5, 10, 20,
40, 100, 200
• Think time: 0 ms
• Run duration: 5 minutes
• Multiple test configurations
• 10 instances on 1 node
• 20 instances on 2 nodes
• 30 instances on 3 nodes
. . . . . . . .. .
16. Data query performance – Non-stop flights (one level deep traversals)
[Charts: response time (milliseconds) and TPS (transactions) vs. number of concurrent threads]
Performs well regardless of the number of instances and nodes
17. Data query performance – One-stop flights (two levels deep traversals)
People would like to see more than just non-stop flights…
[Charts: response time (milliseconds) and TPS (transactions) vs. number of concurrent threads]
18. Data query performance – Two-stop flights (three levels deep traversals)
The query gets complicated because we need to operate on and filter multiple vertices and edges (a sketch of this traversal follows the charts below).
[Charts: response time (milliseconds) and TPS (transactions) vs. number of concurrent threads]
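As a rough sketch of why the two-stop case gets heavier, the traversal below (same assumed schema as the earlier sketch) has to walk and filter three flight edges and make sure the intermediate airports are distinct; it illustrates the query shape, not the exact benchmark query.

import java.util.List;
import java.util.Map;
import org.apache.tinkerpop.gremlin.process.traversal.P;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;

public class TwoStopSearch {
    // Two-stop search: from -> x -> y -> to, filtering each hop by departure time
    static List<Map<String, Object>> twoStop(GraphTraversalSource g, String from, String to,
                                             long windowStart, long windowEnd) {
        return g.V().has("airport", "code", from).as("a")
                .outE("flight").has("departureTime", P.between(windowStart, windowEnd)).inV().as("x")
                .outE("flight").has("departureTime", P.between(windowStart, windowEnd)).inV().as("y")
                .outE("flight").has("departureTime", P.between(windowStart, windowEnd)).inV()
                .has("airport", "code", to)
                .where("x", P.neq("y"))   // the two stopover airports must differ
                .select("a", "x", "y")    // return the airports along the path
                .toList();
    }
}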
19. Lessons Learned
Model your graph database for performance
• Data is yours. Design the data model for your use cases!
• What kinds of queries do you want to support? How many levels deep do traversals go?
• Consider denormalization…
• Design and use indexes (graph indexes and vertex-centric indexes in JanusGraph) for better performance, but do not over-use them (see the index sketch after this list)
• It is recommended to create the complete data model before inserting content
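For example, with JanusGraph's management API a graph (composite) index and a vertex-centric index might be defined roughly as follows; the labels and property names are the same illustrative assumptions used in the earlier sketches.

import org.apache.tinkerpop.gremlin.structure.Direction;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.EdgeLabel;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

public class IndexSetup {
    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cassandra.properties"); // placeholder
        JanusGraphManagement mgmt = graph.openManagement();

        PropertyKey code = mgmt.makePropertyKey("code").dataType(String.class).make();
        PropertyKey departureTime = mgmt.makePropertyKey("departureTime").dataType(Long.class).make();
        EdgeLabel flight = mgmt.makeEdgeLabel("flight").make();

        // Graph (composite) index: fast airport lookup by code
        mgmt.buildIndex("airportByCode", Vertex.class).addKey(code).buildCompositeIndex();

        // Vertex-centric index: filter/sort an airport's outgoing flights by departure time
        mgmt.buildEdgeIndex(flight, "flightsByDeparture", Direction.OUT, departureTime);

        mgmt.commit();
        graph.close();
    }
}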
Use batch commits with caution
• Batch commits allow multiple transactions to be committed together. The batch size affects performance and the
optimal size depends on the characteristics of data.
• Need to handle conflicts for inserts and updates in a multi-threaded/multi-client implementation
• Make sure each commit completes and the transaction is closed (see the retry sketch below)
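A hedged sketch of the conflict-handling advice: wrap each batch of work in a retry loop, roll back on failure so the transaction is always closed, and only rethrow after the last attempt (the retry count is an arbitrary choice here).

import java.util.function.Consumer;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphTransaction;

public class RetryingCommit {
    // Run the given work in its own transaction, retrying when a commit conflicts with concurrent writers
    static void runWithRetry(JanusGraph graph, Consumer<JanusGraphTransaction> work, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            JanusGraphTransaction tx = graph.newTransaction();
            try {
                work.accept(tx);
                tx.commit();                       // make sure the commit actually completes
                return;
            } catch (RuntimeException e) {
                if (tx.isOpen()) {
                    tx.rollback();                 // always close the failed transaction
                }
                if (attempt == maxAttempts) {
                    throw e;                       // give up after the last attempt
                }
            }
        }
    }
}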
20. Lessons Learned
Fine-tune for your workloads and systems
• JanusGraph supports pluggable storage and index backends, so tune your backends!
• JanusGraph server configurations, such as threadPoolBoss and threadPoolWorker
• JVM configurations, such as Xms (initial and minimum Java heap size) and Xmx (maximum Java heap size).
You don’t want to see the annoying java.lang.OutOfMemoryError exceptions, but at the same time an oversized Xmx has a negative impact on performance due to longer and slower GCs.
• Use multiple threads and/or instances, up to your system’s capacity
• Next step… consider cloud and auto-scaling
• Be thorough and be patient because it will take a few iterations
• Just like a fine-tuned instrument, you will enjoy the beautiful music for a long time!
21. Compose for JanusGraph
What is it?
• Compose is a hosting provider for open-source databases
• Supports backups, monitoring, performance tuning, and a full suite of deployment management tools, backed by a 24x7 support and operations team
• Offers JanusGraph technology with the Scylla database
• https://www.compose.com/janusgraph
22. Thank you for keeping performance in mind!!
Chin Huang, chhuang@us.ibm.com; GitHub: chinhuang007
Yi-Hong Wang, yh.wang@ibm.com; GitHub: yhwang
Ted Chang, htchang@ibm.com; GitHub: tedhtchang
What’s next?
The journey continues…
• Find ways to improve JanusGraph performance
• Join us if you are interested in graph performance
• Work with us if you have graph datasets
• Talk to us if you have any comments or suggestions