Memory optimization and upgrading to Ruby 2.1 can improve the performance of Ruby applications. The document discusses five strategies for memory optimization: 1) tuning the garbage collector, 2) limiting growth of the Ruby instance, 3) controlling GC manually, 4) writing less Ruby code, and 5) avoiding memory-intensive Ruby and Rails features. It provides examples for each strategy and recommends tools like GC.stat and ObjectSpace for memory profiling. While Ruby 2.1 improves performance by default, optimizing memory matters more and can make an application perform the same on Ruby 1.9, 2.0 and 2.1.
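As a minimal, stdlib-only illustration of the profiling tools that summary mentions (this sketch is mine, not from the slides, and the workload is made up), GC.stat and ObjectSpace can be combined like this:

require 'objspace'                          # stdlib extension that enables ObjectSpace.memsize_of

GC.start                                    # start from a collected state
minor_before = GC.stat[:minor_gc_count]     # Ruby 2.1+ key; 1.9/2.0 only expose :count

rows = Array.new(10_000) { |i| "row-#{i}" } # throwaway allocation-heavy work

puts "minor GC runs during the block: #{GC.stat[:minor_gc_count] - minor_before}"
puts "live String objects now:        #{ObjectSpace.count_objects[:T_STRING]}"
puts "approx. size of one row string: #{ObjectSpace.memsize_of(rows.first)} bytes"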
Presentation on replacing ActiveRecord with DataMapper in a Ruby on Rails application. Blog posting with info: http://degenportnoy.blogspot.com/2008/12/replacing-activerecord-with-datamapper.html
Source: http://github.com/PeterDP/ar2dm
Tuning Apache Spark for Large-Scale Workloads - Gaoxiang Liu and Sital Kedia, Databricks
Apache Spark is a fast and flexible compute engine for a variety of diverse workloads. Optimizing performance for different applications often requires an understanding of Spark internals and can be challenging for Spark application developers. In this session, learn how Facebook tunes Spark to run large-scale workloads reliably and efficiently. The speakers will begin by explaining the various tools and techniques they use to discover performance bottlenecks in Spark jobs. Next, you'll hear about important configuration parameters and their experiments tuning these parameters on large-scale production workloads. You'll also learn about Facebook's new efforts towards automatically tuning several important configurations based on the nature of the workload. The speakers will conclude by sharing their results with automatic tuning and future directions for the project.
Cost-Based Optimizer in Apache Spark 2.2 Databricks
Apache Spark 2.2 ships with a state-of-the-art cost-based optimization framework that collects and leverages a variety of per-column data statistics (e.g., cardinality, number of distinct values, NULL values, max/min, avg/max length, etc.) to improve the quality of query execution plans. Leveraging these reliable statistics helps Spark make better decisions when picking the best query plan. Examples of these optimizations include selecting the correct build side in a hash-join, choosing the right join type (broadcast hash-join vs. shuffled hash-join) or adjusting a multi-way join order, among others. In this talk, we'll take a deep dive into Spark's cost-based optimizer and discuss how we collect/store these statistics, the query optimizations it enables, and its performance impact on TPC-DS benchmark queries.
Building a Distributed Data Streaming Architecture for Modern Hardware with S... - ScyllaDB
Computers now have 1000x faster disks, 100x bigger network pipes and 20x the number of cores than a decade ago. These advances are often wasted on software designs that ignore these revolutionary changes to the hardware underlying their platform. Learn how Vectorized leverages Seastar to build a new streaming engine that is Kafka API compatible yet offering 10x performance improvements.
Fine Tuning and Enhancing Performance of Apache Spark Jobs - Databricks
Apache Spark defaults provide decent performance for large data sets but leave room for significant performance gains if you can tune parameters based on resources and job.
In this lecture I will cover the following:
1. Present the architecture of Vertica
2. Compare row store to column store
3. Explain how Vertica achieves fast query times
4. Show a few use cases
5. Explain what LivePerson does with Vertica and why we chose Vertica
6. Talk about why we love Vertica and why we hate it
7. Is Vertica a SQL DB or NoSQL? Is Vertica consistent or eventually consistent?
8. How does Vertica differ from other SQL and NoSQL technologies?
SparkR - Play Spark Using R (20160909 HadoopCon) - wqchen
1. Introduction to SparkR
2. Demo
Starting to use SparkR
DataFrames: dplyr style, SQL style
RDD v.s. DataFrames
SparkR on MLlib: GLM, K-means
3. Use Case
Median: approxQuantile()
ID Match: dplyr style, SQL style, SparkR function
SparkR + Shiny
4. The Future of SparkR
At a blistering pace and for a variety of reasons, companies are migrating their on-premise database infrastructures to cloud-based solutions - to save costs on hardware, tame the impact of disaster recovery, or even to improve security. Zalando is not an exception: more than two years ago we migrated our first production services to AWS.
In addition to the fully managed database services like RDS and Aurora, Amazon offers a wide spectrum of EC2 instances with different performance and price profiles. Without a lot of experience in running cloud databases it's not easy to make the right choice, and as a result you will either have poor database performance or will overpay for over-provisioned resources.
In this talk I will compare different ways of running PostgreSQL on AWS, explain why we decided to run most of our databases on EC2 Instances instead of RDS, how we chose EC2 Instance types and EBS Volumes, which AWS CloudWatch metrics MUST be monitored (and why), and what problems we hit plus how to avoid them.
Query optimizers and people have one thing in common: the better they understand their data, the better they can do their jobs. Optimizing queries is hard if you don't have good estimates for the sizes of the intermediate join and aggregate results. Data profiling is a technique that scans data, looking for patterns within the data such as keys, functional dependencies, and correlated columns. These richer statistics can be used in Apache Calcite's query optimizer, and the projects that use it, such as Apache Hive, Phoenix and Drill. We describe how we built a data profiler as a table function in Apache Calcite, review the recent research and algorithms that made it possible, and show how you can use the profiler to improve the quality of your data.
How Adobe Does 2 Million Records Per Second Using Apache Spark! - Databricks
Adobe’s Unified Profile System is the heart of its Experience Platform. It ingests TBs of data a day and is PBs large. As part of this massive growth we have faced multiple challenges in our Apache Spark deployment which is used from Ingestion to Processing.
ClickHouse Data Warehouse 101: The First Billion Rows, by Alexander Zaitsev a... - Altinity Ltd
Columnar stores like ClickHouse enable users to pull insights from big data in seconds, but only if you set things up correctly. This talk will walk through how to implement a data warehouse that contains 1.3 billion rows using the famous NY Yellow Cab ride data. We'll start with basic data implementation including clustering and table definitions, then show how to load efficiently. Next, we'll discuss important features like dictionaries and materialized views, and how they improve query efficiency. We'll end by demonstrating typical queries to illustrate the kind of inferences you can draw rapidly from a well-designed data warehouse. It should be enough to get you started--the next billion rows is up to you!
Abstract:
We present a novel approach to parallel materialisation (i.e., fixpoint computation) of datalog programs in centralised, main-memory, multi-core RDF systems. Our approach comprises an algorithm that evenly distributes the workload to cores, and an RDF indexing data structure that supports efficient, ‘mostly’ lock-free parallel updates. Our empirical evaluation shows that our approach parallelises computation very well: with 16 physical cores, materialisation can be up to 13.9 times faster than with just one core.
HBaseCon2017 Quanta: Quora's hierarchical counting system on HBase - HBaseCon
Hundreds of millions of people use Quora to find accurate, informative, and trustworthy answers to their questions. As it so happens, counting things at scale is both an important and a difficult problem to solve.
In this talk, we will be talking about Quanta, Quora's counting system built on top of HBase that powers our high-volume near-realtime analytics that serves many applications like ads, content views, and many dashboards. In addition to regular counting, Quanta supports count propagation along the edges of an arbitrary DAG. HBase is the underlying data store for both the counting data and the graph data.
We will describe the high-level architecture of Quanta and share our design goals, constraints, and choices that enabled us to build Quanta very quickly on top of our existing infrastructure systems.
Prediction as a service with ensemble model in SparkML and Python ScikitLearn - Josef A. Habdank
Watch the recording of the talk given at Spark Summit Brussels 2016 here:
https://www.youtube.com/watch?v=wyfTjd9z1sY
Data Science with SparkML on Databricks is a perfect platform for applying ensemble learning at a massive scale. This presentation describes a Prediction-as-a-Service platform which can predict trends on 1 billion observed prices daily. In order to train an ensemble model on a multivariate time series in a space with thousands to millions of dimensions, one has to fragment the whole space into subspaces which exhibit significant similarity. To achieve this, the vastly sparse space has to undergo dimensionality reduction into a parameter space which is then used to cluster the observations. The data in the resulting clusters is modeled in parallel using machine learning tools capable of coefficient estimation at massive scale (SparkML and Scikit-Learn). The estimated model coefficients are stored in a database to be used when executing predictions on demand via a web service. This approach enables training models fast enough to complete the task within a couple of hours, allowing daily or even real-time updates of the coefficients. The above machine learning framework is used to predict the airfares that support airline Revenue Management systems.
Accumulo Summit 2015: Ferrari on a Bumpy Road: Shock Absorbers to Smooth Out ... - Accumulo Summit
Talk Abstract
Accumulo has a solid theoretical foundation, endowing it with huge scalability, high reliability, and the makings of class-leading performance for NoSQL operations. Several publications show Accumulo achieving multi-petabyte scalability and outperforming other databases in its class by orders of magnitude. However, there are challenges arising in practice that slow down that performance and introduce bottlenecks.
The root of Accumulo's distributed scale and performance while maintaining consistency comes from a multi-level amplification. Zookeeper bootstraps the consistency with a highly durable quorum. The Accumulo root table uses buffering and caching to boost that performance for sorted key/value operations. With the metadata tablets and data tables, Accumulo continues to boost performance and divides and conquers a highly scalable key/value space to leverage the resources of a large cluster. The challenge arises when metadata operations at the core of Accumulo bottleneck performance for the entire cluster.
In this talk we will describe the Accumulo metadata operations model in detail. With a couple of prototypical application scenarios, we will show a few areas that are current bottlenecks or that we can expect to be bottlenecks in the near future. We will also propose modifications to the current model and outline projects that the community can take on to keep Accumulo in the lead for performance and scalability.
Speaker
Adam Fuchs
Chief Technology Officer, Sqrrl
As the Chief Technology Officer and co-founder of Sqrrl, Adam Fuchs is responsible for ensuring that Sqrrl is leading the world in Big Data Infrastructure technology. Previously at the National Security Agency, Adam was an innovator and technical director for several database projects, handling some of the world’s largest and most diverse data sets. He is a co-founder of the Apache Accumulo project. Adam has a BS in Computer Science from the University of Washington and has completed extensive graduate-level course work at the University of Maryland.
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ... - Accumulo Summit
Talk Abstract
As with all open-source databases, Accumulo developers often have to choose between building exciting new features and hacking on performance and stability. As the core features solidify and expand, we see many opportunities to improve performance. An effective methodology for performance improvement is scientific in nature, and follows a well-defined modeling and simulation approach, matching theory to experimentation in an iterative fashion.
Ingest performance is one of the most differentiating characteristics of Accumulo. However, there is much room for improvement for typical ingest-heavy applications. Accumulo supports two mechanisms to bring data in: streaming ingest and bulk ingest. In bulk ingest, the goal is to maximize throughput without constraining latency. Bulk ingest involves creating a set of files that conform to Accumulo's internal RFile format and then registering those files with Accumulo. MapReduce provides a framework for generating, sorting, and storing key/value pairs, which form the primary elements of preparing RFiles for bulk ingest. MapReduce has been used many times over the years to break sorting records, such as Terasort. We can expect it is a reasonable choice for maximizing bulk ingest throughput. However, the theory often proves challenging to implement as there are many performance pitfalls along the way.
In this talk, we dive deep into optimizing MapReduce for Accumulo bulk ingest. We share detailed theoretical and empirical performance models, we discuss techniques for profiling performance, and we suggest reusable techniques for squeezing the maximum performance out of enterprise-grade Accumulo bulk ingest.
Speaker
Chris McCubbin
Director of Data Science, Sqrrl
Chris is the Director of Data Science for Sqrrl. He has extensive experience with the Hadoop ecosystem and applying scientific computation algorithms to real-world datasets. Previously, Chris developed Big Data analysis tools for the Intelligence Community and applied artificial intelligence techniques to unmanned vehicle systems. He holds a MS in Computer Science and BS in Computer Science and Mathematics from the University of Maryland.
Apache Sqoop: A Data Transfer Tool for Hadoop - Cloudera, Inc.
Apache Sqoop is a tool designed for efficiently transferring bulk data between Hadoop and structured datastores such as relational databases. This slide deck aims at familiarizing the user with Sqoop and how to effectively use it in real deployments.
This presentation was prepared for a Webcast where John Yerhot, Engine Yard US Support Lead, and Chris Kelly, Technical Evangelist at New Relic discussed how you can scale and improve the performance of your Ruby web apps. They shared detailed guidance on issues like:
Caching strategies
Slow database queries
Background processing
Profiling Ruby applications
Picking the right Ruby web server
Sharding data
Attendees will learn how to:
Gain visibility on site performance
Improve scalability and uptime
Find and fix key bottlenecks
See the on-demand replay:
http://pages.engineyard.com/6TipsforImprovingRubyApplicationPerformance.html
DevoxxUK: Optimizing Application Performance on Kubernetes - Dinakar Guniguntala
Now that you have your apps running on K8s, are you wondering how to get the response time that you need? Tuning a polyglot set of microservices to get the performance you need can be challenging in Kubernetes. The key to overcoming this is observability. Luckily there are a number of tools, such as Prometheus, that can provide all the metrics you need, but here is the catch: there is so much data and so many metrics that it is difficult to make sense of it all. This is where hyperparameter tuning can come to the rescue to help build the right models.
This talk covers best practices that will help attendees
1. Understand and avoid common performance-related problems.
2. Discuss observability tools and how they can help identify performance issues.
3. Take a closer look at Kruize Autotune, an open source autonomous performance tuning tool for Kubernetes, and where it can help.
(DAT402) Amazon RDS PostgreSQL: Lessons Learned & New Features - Amazon Web Services
Learn the specifics of Amazon RDS for PostgreSQL’s capabilities and extensions that make it powerful. This session begins with a brief overview of the RDS PostgreSQL service, how it provides High Availability & Durability and will then deep dive into the new features that we have released since re:Invent 2014, including major version upgrade and newly added PostgreSQL extensions to RDS PostgreSQL. During the session, we will also discuss lessons learned running a large fleet of PostgreSQL instances, including specific recommendations. In addition we will present benchmarking results looking at differences between the 9.3, 9.4 and 9.5 releases.
This presentation was given to the system administration team to give them an idea of how GC works and what to look for when there is a bottleneck or trouble.
Operating multi-tenant clusters requires careful planning of capacity for on-time launch of big data projects and applications within expected budget and with appropriate SLA guarantees. Making such guarantees with a set of standard hardware configurations is key to operate big data platforms as a hosted service for your organization.
This talk highlights the tools, techniques and methodology applied on a per-project or user basis across three primary multi-tenant deployments in the Apache Hadoop ecosystem, namely MapReduce/YARN and HDFS, HBase, and Storm due to the significance of capital investments with increasing scale in data nodes, region servers, and supervisor nodes respectively. We will demo the estimation tools developed for these deployments that can be used for capital planning and forecasting, and cluster resource and SLA management, including making latency and throughput guarantees to individual users and projects.
As we discuss the tools, we will share considerations that got incorporated to come up with the most appropriate calculation across these three primary deployments. We will discuss the data sources for calculations, resource drivers for different use cases, and how to plan for optimum capacity allocation per project with respect to given standard hardware configurations.
Java is finally elastic! OpenJDK improvements and new features in garbage collection technology have enhanced Java's vertical scaling and resource consumption. Now the JVM can promptly return unused memory and, as a result, scale up and down automatically. In this presentation, we cover the main achievements in vertical scaling, as well as the peculiarities and tuning details of different GCs. Find out how to make your Java environments more elastic to follow the load and lower the total cost of ownership at large scale.
Big data challenges are common: we are all doing aggregations, machine learning, anomaly detection, OLAP...
This presentation describes how InnerActive answers those requirements.
This session brings to your attention how several millions of dollars are wasted and what you can do to save that money. Optimizing garbage collection performance not only saves money but also improves the overall customer experience.
9. require "csv"
data = CSV.open("data.csv")
output = data.readlines.map do |line|
  line.map do |col|
    col.downcase.gsub(/\b('?[a-z])/) { $1.capitalize }
  end
end
File.open("output.csv", "w+") do |f|
  f.write output.join("\n")
end
Unoptimized Program
11. require "csv"
output = File.open("output.csv", "w+")
CSV.open("examples/data.csv", "r").each do |line|
  output.puts line.map { |col|
    col.downcase!
    col.gsub!(/\b('?[a-z])/) { $1.capitalize }
  }.join(",")
end
Memory Optimized Program
12. Ruby 2.1 Is NOT Faster
...once your program is memory optimized
[Bar chart: runtime of the memory-optimized program, Ruby 1.9 & 2.0 vs. Ruby 2.1, on a 0-14 scale]
13. Takeaways
1. Ruby 2.1 is not a silver performance bullet
2. Memory optimized Ruby app performs the same in 1.9, 2.0 and 2.1
3. Ruby 2.1 merely makes performance adequate by default
4. Optimize memory to make a difference
15. 5 Memory Optimization Strategies
1. Tune garbage collector
2. Do not allow Ruby instance to grow
3. Control GC manually
4. Write less Ruby
5. Avoid memory-intensive Ruby and Rails features
17. Ruby GC Tuning Goal
Goal: balance the number of GC runs and peak memory usage
How to check:
> GC.stat[:minor_gc_count]
> GC.stat[:major_gc_count]
> `ps -o rss= -p #{Process.pid}`.chomp.to_i / 1024 #MB
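A small helper makes it easy to watch those three numbers around any piece of code. This is a sketch of mine, not from the slides: the helper name and the throwaway workload are arbitrary, and the ps call assumes a Unix-like system, as on the slide.

def report_gc(label)
  rss_mb = `ps -o rss= -p #{Process.pid}`.chomp.to_i / 1024   # resident set size in MB (Unix ps)
  puts format('%-8s minor=%d major=%d rss=%dMB',
              label, GC.stat[:minor_gc_count], GC.stat[:major_gc_count], rss_mb)
end

report_gc('before')
100_000.times { |i| "throwaway string #{i}" }   # allocation-heavy work to observe
report_gc('after')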
18. When Is Ruby GC Triggered?
Minor GC (faster, only new objects collected):
- not enough space on the Ruby heap to allocate new objects
- every 16MB-32MB of memory allocated in new objects
Major GC (slower, all objects collected):
- number of old or shady objects increases more than 2x
- every 16MB-128MB of memory allocated in old objects
19. Environment Variables
Initial number of slots on the heap: RUBY_GC_HEAP_INIT_SLOTS (default 1000)
Min number of slots that GC must free: RUBY_GC_HEAP_FREE_SLOTS (default 4096)
Heap growth factor: RUBY_GC_HEAP_GROWTH_FACTOR (default 1.8)
Maximum heap slots to add: RUBY_GC_HEAP_GROWTH_MAX_SLOTS (default not listed)
New generation malloc limit: RUBY_GC_MALLOC_LIMIT (default 16M)
Maximum new generation malloc limit: RUBY_GC_MALLOC_LIMIT_MAX (default 32M)
New generation malloc growth factor: RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR (default 1.4)
Old generation malloc limit: RUBY_GC_OLDMALLOC_LIMIT (default 16M)
Maximum old generation malloc limit: RUBY_GC_OLDMALLOC_LIMIT_MAX (default 128M)
Old generation malloc growth factor: RUBY_GC_OLDMALLOC_LIMIT_GROWTH_FACTOR (default 1.2)
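These variables are read by the Ruby VM at startup, so they must be exported before the process is launched. A quick way to confirm what a running process actually sees (a throwaway sketch of mine, not from the slides) is to print them from its environment:

%w[
  RUBY_GC_HEAP_INIT_SLOTS RUBY_GC_HEAP_FREE_SLOTS
  RUBY_GC_HEAP_GROWTH_FACTOR RUBY_GC_HEAP_GROWTH_MAX_SLOTS
  RUBY_GC_MALLOC_LIMIT RUBY_GC_MALLOC_LIMIT_MAX RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR
  RUBY_GC_OLDMALLOC_LIMIT RUBY_GC_OLDMALLOC_LIMIT_MAX RUBY_GC_OLDMALLOC_LIMIT_GROWTH_FACTOR
].each do |name|
  # variable names taken from the table above; "(not set)" means the built-in default applies
  puts "#{name}=#{ENV.fetch(name, '(not set)')}"
end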
20. When Is Ruby GC Triggered?
ruby-performance-book.com
http://samsaffron.com/archive/2013/11/22/demystifying-the-ruby-gc
http://thorstenball.com/blog/2014/03/12/watching-understanding-ruby-2.1-garbage-collector/
28. GC Between Requests in Unicorn
OobGC for Ruby < 2.1
require 'unicorn/oob_gc'
use(Unicorn::OobGC, 1)
gctools for Ruby >= 2.1 https://github.com/tmm1/gctools
require 'gctools/oobgc'
use(GC::OOB::UnicornMiddleware)
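To make the idea concrete, here is a deliberately naive Rack middleware sketch. It is not the OobGC or gctools code, and unlike them it runs GC before the response is handed back rather than truly out of band; it only illustrates the "collect between requests" pattern.

class NaiveBetweenRequestsGC
  def initialize(app, interval = 5)
    @app = app
    @interval = interval
    @requests = 0
  end

  def call(env)
    response = @app.call(env)                    # let the app build the response first
    @requests += 1
    GC.start if (@requests % @interval).zero?    # force a full GC every Nth request
    response
  end
end

# In config.ru, analogous to the examples above:
#   use NaiveBetweenRequestsGC, 5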
29. GC Between Requests in Unicorn
Things to have in mind:
- make sure you have enough workers
- make sure CPU utilization < 50%
- this improves only “perceived” performance
- overall performance might be worse
- only effective for memory-intensive applications
35. Operations That Copy Data
● String#gsub! instead of String#gsub, and similar in-place variants
● String#<< instead of String#+= (see the sketch after this list)
● File#readline or File#each_line instead of File#readlines or File.read
● CSV.parse_line instead of CSV.parse
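A quick way to see why the in-place versions matter (a throwaway sketch of mine, not from the deck): += builds a brand-new, ever-larger String on every pass, while << appends to the existing buffer, so the copying version forces far more garbage collection.

def gc_runs
  GC.stat[:minor_gc_count] + GC.stat[:major_gc_count]
end

copying = ''
before = gc_runs
10_000.times { copying += 'x' * 100 }    # String +=: allocates a new combined string each time
puts "+= triggered #{gc_runs - before} GC runs"

appending = ''
before = gc_runs
10_000.times { appending << 'x' * 100 }  # String <<: mutates the existing buffer in place
puts "<< triggered #{gc_runs - before} GC runs"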
36. ActiveRecord Also Copies Data
● ActiveRecord::Base::update_all
Book.where('title LIKE ?', '%Rails%').
  order(:created_at).limit(5).
  update_all(author: 'David')
● Direct manipulation of the query result
result = ActiveRecord::Base.connection.execute('select * from books')
result.each do |row|
  # do something with row.values_at('col1', 'col2')
end
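For contrast, this is the copy-heavy version that update_all replaces (an illustrative sketch assuming a Rails 4-style Book model, not code from the deck): it instantiates every matching record as a full ActiveRecord object and issues one UPDATE per row.

Book.where('title LIKE ?', '%Rails%').
     order(:created_at).limit(5).
     each do |book|
       book.update(author: 'David')   # loads the whole row into memory, then one UPDATE per record
     end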
37. Rails Serializers Copy Too Much
class Smth < ActiveRecord::Base
  serialize :data, JSON
end

class Smth < ActiveRecord::Base
  def data
    JSON.parse(read_attribute(:data))
  end

  def data=(value)
    write_attribute(:data, value.to_json)
  end
end
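From the caller's point of view the manual accessors behave like the serialized attribute did; a small usage illustration (the attribute values are made up):

smth = Smth.new
smth.data = { 'pages' => 300, 'format' => 'pdf' }   # data= stores a single JSON string in the column
smth.data['pages']                                  # data parses the JSON back on read => 300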
42. RubyProf Memory Profiling
require 'ruby-prof'
RubyProf.measure_mode = RubyProf::MEMORY
RubyProf.start
str = 'x'*1024*1024*10
result = RubyProf.stop
printer = RubyProf::FlatPrinter.new(result)
printer.print(STDOUT)
This requires a patched Ruby and works only with 1.8 and 1.9:
https://github.com/ruby-prof/ruby-prof/issues/86
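On Ruby 2.1 and later, a comparable per-allocation view is available without a patched interpreter through the stdlib objspace extension (my suggestion as an alternative, not part of the slide):

require 'objspace'

ObjectSpace.trace_object_allocations do
  str = 'x' * 1024 * 1024 * 10
  puts "allocated at #{ObjectSpace.allocation_sourcefile(str)}:" \
       "#{ObjectSpace.allocation_sourceline(str)}, " \
       "#{ObjectSpace.memsize_of(str)} bytes"
end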
Ok, let's talk about performance.
Can I have a show of hands: who here thinks Ruby is fast?
C'mon, only a few people. I disagree: Ruby is fast, especially the latest version, except for one thing: memory consumption and garbage collection make it slow.
Oh, most people here think it's fast. I do agree: Ruby is fast, until your program takes so much memory that it becomes slow.
Why am I talking so much about memory? Here's why.
Why? Two reasons:
Large memory overhead, where every object takes at least 40 bytes in memory,
plus
a slow GC algorithm that got improved in 2.1, but not as much as we will see later.
That all adds up not to universal love and peace
but to high memory consumption and, because of that, the enormous time the app spends doing GC.
That is why memory optimization is so important: it saves you that GC time.
That's also why Ruby 2.1 is so important: it makes GC so much faster.
Some examples from my own experience.
Here's another example. No memory optimization done, but Ruby upgraded from 1.9 to 2.1.
But here's another thing. If you can upgrade, fine. If not, you can still get the same or better performance by optimizing memory.
How does tuning help?
You can balance...
By default this balance is to do more GC and reduce memory peaks. You can shift this balance.
Change GC settings and see how often GC is called and what your memory usage is
Let's step back for a minute and look at when GC is triggered.
There has been a sentiment inside the Rails community that SQL is somehow bad, that you should avoid it at all costs. People invent more and more things to stay out of SQL; AREL, just to mention one.
Guys, I wholeheartedly disagree with this. Web frameworks come and go. SQL stays. We have had SQL for 40 years. It's not going away.
So, our time is up. If you'd like to learn more about Ruby performance optimization, please sign up for my book mailing list updates. If you need help, just email me or AirPair with me. And thank you for listening.