This document discusses debugging microservices in production environments. It begins with some history of debugging and common debugging anti-patterns, then argues that debugging should be approached methodically, as an application of the scientific method, and outlines a refinement of that method for software: the software debugging method. Finally, it discusses the challenges that the distributed nature of microservices creates for debugging and argues that containers, together with postmortem analysis, log analysis, and dynamic instrumentation, help address them.
5. Debugging in the beginning...
“As soon as we started programming, we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs.”
— Sir Maurice Wilkes, 1913 - 2010
6. Debugging
• The first non-trivial program for the EDSAC (a program to
calculate a table of Airy integrals) had 120 lines and 20 errors —
including one not debugged until four decades later!
• This experience remains modern for anyone in software today,
and many spend much of their career debugging
• Yet there is little formalized about debugging: few books on it;
little research; no conferences — and no university courses!
• Is it any surprise that debugging anti-patterns persist?
7. Debugging anti-patterns
• For too many, debugging is the process of making problems go
away rather than understanding the system!
• The view of bugs-as-nuisance has many knock-on effects:
• Fixes that don’t fix the problem (or introduce new ones!)
• Bug reports closed out as “will not fix” or “works for me”
• Users who are told to “restart” or “reboot” or “log out” or
anything else that amounts to wishful thinking
• And this is only when the process has obviously failed...
8. Darker debugging anti-patterns
• More insidious effects are felt when the problem appears to have
been resolved, but hasn’t actually been fully understood
• These are the fixes that amount to a fresh coat of paint over a
crack in the foundation — and they are worse than nothing
• Not only do these fixes not actually resolve the problem, they give
the engineer a false sense of confidence that spreads virally
• “Debugging” devolves into an oral tradition: folk tales of problems
that were made to go away
9. Thinking methodically
• The way we think about debugging is fundamentally wrong; we
need to think methodically about debugging!
• When we think of debugging as the quest for understanding our
(misbehaving) systems, it allows us to consider it more abstractly
• Namely, how do we explain the phenomena that affect our world?
• We have found that the most powerful explanations reflect an
understanding of underlying structure — beyond what to why
• This deeper understanding allows us not only to explain but to make predictions
10. Predictive power
• Valuing predictive power allows us to test our explanations: if our
predictions are wrong, our understanding is incomplete
• We can use the understanding from failed predictions to develop
new explanations and new predictions
• We can then test these new predictions to test our understanding
• If all of this is sounding familiar, it’s because it’s science — and
the methodical exploration of it is the scientific method
11. The scientific method
• The scientific method is to:
• Make observations
• Formulate a question
• Formulate a hypothesis that answers the question
• Formulate predictions that test the hypothesis
• Test the predictions by conducting an experiment
• Refine the hypothesis and repeat as needed
13. Science, seriously.
• Software debugging is a pure distillation of scientific thinking
• The limitless amount of data from software systems allows
experiments in seconds instead of weeks/months/years
• The systems we’re reasoning about are entirely synthetic,
discrete and mutable — we made it, we can understand it
• Software is a mathematical machine; the conclusions of software debugging are often mathematical in their unequivocal power!
• Software debugging is so pure, it requires us to refine the
scientific method slightly to reflect its capabilities...
14. The software debugging method
• Make observations
• Based on observations, formulate a question
• If the question can be answered through subsequent observation,
answer the question through observation and refine/iterate
• If the question cannot be answered through observation, make a
hypothesis as to the answer and formulate predictions
• If predictions can be tested through subsequent observation, test
the predictions through observation and refine/iterate
• Otherwise, test predictions through experiment and refine/iterate
15. Observation is the heart of debugging!
• The essence — and art! — of debugging software is making
observations and asking questions, not formulating hypotheses!
• Observations are facts — they constrain hypotheses in that any
hypothesis contradicted by facts can be summarily rejected
• As facts beget questions which beget observations and more
facts, hypotheses become more tightly constrained — like a
cordon being cinched around the truth
• Or, in the words of Sir Arthur Conan Doyle’s Sherlock Holmes,
“when you have eliminated all which is impossible, then whatever
remains, however improbable, must be the truth”
16. Making the hypothetical leap
• Once observation has sufficiently narrowed the gap between what
is known and what is wrong, a hypothetical leap should be made
• Debugging is inefficient when this leap is made too early — like
making a specific guess too early in Twenty Questions
• A hypothesis is only as good as its ability to form a prediction
• A prediction should be tested with either subsequent observation
or by conducting an experiment
• If the prediction proves to be incorrect, understanding is
incomplete; the hypothesis must be rejected — or refined
17. Experiments in software
• A beauty of software is that it is highly amenable to experiment
• Many experiments are programs — and the most satisfying
experiments test predictions about how failure can be induced
• Many “non-reproducible” problems are merely unusual!
• Debugging a putatively non-reproducible problem to the point of a
reproducible test case is a joy unique in software engineering
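As a hedged illustration of an experiment-as-a-program (this sketch is not from the deck): a small node.js script that tests the prediction that a hypothetical GET /orders endpoint on localhost:8080 fails only under concurrent load. If the prediction holds, a "non-reproducible" failure becomes reproducible on demand.

```
// Hypothetical experiment: the prediction is that GET /orders on this
// service fails only under concurrent load. If the prediction holds, a
// "non-reproducible" failure becomes reproducible on demand.
const http = require('http');

function request() {
  return new Promise((resolve) => {
    http.get('http://localhost:8080/orders', (res) => {
      res.resume();                      // drain the response body
      resolve(res.statusCode);
    }).on('error', () => resolve(0));    // connection-level failure
  });
}

async function experiment(concurrency) {
  const codes = await Promise.all(Array.from({ length: concurrency }, request));
  const failures = codes.filter((code) => code !== 200).length;
  console.log('concurrency=' + concurrency + ' failures=' + failures);
}

(async () => {
  for (const c of [1, 10, 100]) {
    await experiment(c);   // observe whether failures appear only at high concurrency
  }
})();
```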
18. Software debugging in practice
• The specifics of observation depend on the nature of the failure
• Software has different kinds of failure modes:
• Fatal failure (segmentation violation, uncaught exception)
• Non-fatal failure (gives the wrong answer, performs terribly)
• Explicit failure (assertion failure, error message)
• Implicit failure (cheerfully does the wrong thing)
19. Taxonomizing software failure
• Non-fatal, implicit: gives the wrong answer; returns the wrong result; leaks resources; stops doing work; performs pathologically
• Non-fatal, explicit: emits an error message; returns an error code
• Fatal, explicit: assertion failure; process explicitly aborts; exits with an error code
• Fatal, implicit: segmentation violation; bus error; panic; type error; uncaught exception
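A rough node.js illustration of the four quadrants (these snippets are mine, not from the deck; the function names are hypothetical):

```
const assert = require('assert');

// Implicit, non-fatal: cheerfully gives the wrong answer.
function total(items) {
  return items.length;               // bug: counts the items instead of summing them
}

// Explicit, non-fatal: emits an error message and returns an error code.
function loadConfig(path, callback) {
  require('fs').readFile(path, (err, data) => {
    if (err) {
      console.error('config load failed: %s', err.message);
      return callback(err);
    }
    callback(null, JSON.parse(data));
  });
}

// Explicit, fatal: an assertion failure aborts the process.
function withdraw(balance, amount) {
  assert(amount <= balance, 'withdrawal exceeds balance');
  return balance - amount;
}

// Implicit, fatal: an uncaught TypeError terminates the process.
function greet(user) {
  return 'hello, ' + user.name.toUpperCase();   // blows up if user is undefined
}
```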
20. Microservices prehistory
• The late 1990s saw the rise of three-tier architectures consisting
of presentation, application logic and data tiers
• Many names for roughly the same notion: “Service-oriented
architecture”, “Model/View/Controller”, etc.
• The AJAX+REST revolution of the mid-2000s gave rise to true
web applications in which application logic could live on the edge
• Led to some broader architectural questioning...
21. Post-AJAX questions
• Why should HTTP be restricted to the web?
• Why should REST be restricted to web apps?
• Instead of having one monolithic architecture, why not have a
series of (smaller) services that merely did one thing well?
• In case this sounds vaguely familiar...
22. The Unix Philosophy
• The Unix philosophy, as articulated by Doug McIlroy:
• Write programs that do one thing and do it well
• Write programs to work together
• Write programs that handle text streams, because that is a
universal interface
• The single most important revolution in software systems thinking!
• Applying it to HTTP-based services...
23. Microservices
• Microservices do one thing, and strive to do it well
• Replace a small number of monoliths with many services that
have well-documented, small HTTP-based APIs
• Larger systems can be composed of these smaller services
• While the trend it describes is real, the term “microservices” isn’t
without its controversy...
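For illustration only, a minimal sketch of such a service using node's built-in http module; the "pricing" service and its route are invented for the example:

```
// A hypothetical "pricing" microservice: it does one thing (quote a price)
// behind a small, well-documented HTTP API.
const http = require('http');

const PRICES = { widget: 250, gadget: 400 };    // price in cents per SKU

http.createServer((req, res) => {
  const match = req.url.match(/^\/price\/(\w+)$/);
  if (req.method === 'GET' && match && PRICES[match[1]] !== undefined) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ sku: match[1], cents: PRICES[match[1]] }));
  } else {
    res.writeHead(404, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: 'unknown sku or route' }));
  }
}).listen(8080, () => console.log('pricing service listening on 8080'));
```

A larger system would be composed of many such services calling one another over HTTP, e.g. curl http://localhost:8080/price/widget.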
26. Debugging microservices
• Veteran nerd rage may be being provoked by proponents of
microservices not fully appreciating the risks…
• Microservices turn a monolithic system into a distributed one
• While resilient to certain classes of force majeure failures,
distributed systems remain vulnerable to software defects
• Distributed systems are infamously nasty to debug — not least
because they often must be debugged in production
27. Microservices in production
• Microservices are tautologically small — they don’t need their
own dedicated physical hardware, or even dedicated virtual
hardware!
• Microservices are a particularly good fit for containers, virtual OS
instances pioneered by FreeBSD jails and Solaris zones
28. Containers at Joyent
• Joyent runs OS containers in the cloud via SmartOS — and we
have run containers in multi-tenant production since ~2006
• Adding support for hardware-based virtualization circa 2011
strengthened our resolve with respect to OS-based virtualization
• OS containers are lightweight and efficient — which is especially
important as services become smaller and more numerous:
overhead and latency become increasingly important!
• We emphasized their operational characteristics — performance,
elasticity, tenancy — and for many years, we were a lone voice...
29. Containers as PaaS foundation?
• Some saw the power of OS containers to facilitate up-stack
platform-as-a-service abstractions
• For example, dotCloud — a platform-as-a-service provider — built
their PaaS on OS containers
• Struggling as a PaaS, dotCloud pivoted — and open sourced
their container-based orchestration layer...
31. Docker revolution
• Docker has used the rapid provisioning + shared underlying
filesystem of containers to allow developers to think operationally
• Developers can encode deployment procedures via an image
• Images can be reliably and reproducibly deployed as a container
• Images can be quickly deployed — and re-deployed
• Docker complements the small-system ethos of microservices!
32. Docker at Joyent
• We wanted to create a best-of-all-worlds platform: the developer
ease of Docker on the production-grade substrate of SmartOS
• We developed a Linux system call interface for SmartOS,
allowing SmartOS to run Linux binaries at bare-metal speed
• In March 2015, we introduced Triton, our (open source!) stack
that deploys Docker containers directly on the metal
• Triton virtualizes the notion of a Docker host (i.e., “docker ps”
shows all of one’s containers datacenter-wide)
• Brings full debugging (DTrace, MDB) to Docker containers
35. Microservice failure modes
• Non-fatal, implicit: gives the wrong answer; returns the wrong result; leaks resources; stops doing work; performs pathologically
• Non-fatal, explicit: emits an error message; returns an error code
• Fatal, explicit: assertion failure; process explicitly aborts; exits with an error code
• Fatal, implicit: segmentation violation; bus error; panic; type error; uncaught exception
36. Cascading microservice failure modes
• Non-fatal, implicit: gives the wrong answer; returns the wrong result; leaks resources; stops doing work; performs pathologically
• Non-fatal, explicit: emits an error message; returns an error code
• Fatal, explicit: assertion failure; process explicitly aborts; exits with an error code
• Fatal, implicit: segmentation violation; bus error; panic; type error; uncaught exception
37. Debugging fatal failure
• When software fails fatally, we know that the software itself is
broken — its state has become inconsistent
• By saving in-memory state to stable storage, the software can be
debugged postmortem
• To debug, one starts with the invalid state and reasons backwards
to discover a transition from a valid state to an invalid one
• This technique is so old that the term for this saved state dates back to the dawn of the computing age: a core dump
• Not as low-level as the name implies! Modern high-level
languages (e.g., node.js and Go) allow postmortem debugging!
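A minimal node.js sketch of this approach, assuming an environment where core dumps are enabled; the invariant and function names are hypothetical:

```
// Hypothetical invariant check inside a node.js microservice. On violation,
// the process saves its state and dies: process.abort() produces a core dump
// that can be analyzed postmortem (for example with a tool such as mdb_v8),
// and the container can simply be restarted.
function applyCredit(account, cents) {
  if (!Number.isInteger(cents) || cents < 0) {
    // Programmatic (not operational) error: in-memory state is already suspect.
    console.error('invariant violated: bad credit amount:', cents);
    process.abort();
  }
  account.balance += cents;
}

// For defects that were not anticipated, uncaught exceptions can dump core too:
//   node --abort-on-uncaught-exception service.js
// (assuming the environment permits core dumps, e.g. `ulimit -c unlimited`).

module.exports = { applyCredit };
```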
38. Debugging fatal failure: microservices
• Postmortem analysis lends itself very well to microservices:
• There is no run-time overhead; overhead (such as it is) is only
at the time of death
• The microservice/container can be safely (automatically!)
restarted; the core dump can be analyzed asynchronously
• Tooling need not be in container, can be made arbitrarily rich
• In Triton, all core dumps are automatically stored and then
uploaded into a system that allows for analysis, tagging, etc.
• This has been invaluable for debugging our own services!
39. Debugging non-fatal failure
• There is a solace in fatal failure: it always represents a software
defect at some level — and the inconsistent state is static
• Non-fatal failure can be more challenging: the state is valid and
dynamic — it’s difficult to separate symptom from cause
• Non-fatal failure must still be understood empirically!
• Debugging in vivo requires that data be extracted from the system
— either of its own volition (e.g., via logs) or by coercion (e.g., via
instrumentation)
40. Debugging explicit, non-fatal failure
• When failure is explicit (e.g., an error or warning message), it
provides a very important data point
• If failure is non-reproducible or otherwise transient, analysis of
explicit software activity becomes essential
• Action in one container will often need to be associated with
failures in another
• Especially for distributed systems, this becomes log analysis, and
is an essential forensic tool for understanding explicit failure
• Essential observation: a time line of events!
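A sketch of what such log records might look like with Bunyan (introduced later in the deck); the "orders" service and the x-request-id convention are assumptions made for the example:

```
const bunyan = require('bunyan');   // Joyent's logging library, discussed later in the deck
const http = require('http');

const log = bunyan.createLogger({ name: 'orders' });   // hypothetical service

http.createServer((req, res) => {
  // A child logger tags every record with the request id, so records from
  // several microservices can later be stitched into a single time line.
  const reqLog = log.child({ req_id: req.headers['x-request-id'] || 'none' });
  reqLog.info({ method: req.method, path: req.url }, 'request received');

  if (req.url !== '/orders') {
    // Explicit, non-fatal failure: logged with enough context to correlate
    // with the logs of whichever service made the request.
    reqLog.warn({ path: req.url }, 'unknown route');
    res.writeHead(404);
    return res.end();
  }

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end('[]');
  reqLog.info('request complete');
}).listen(8081, () => log.info('orders service listening on 8081'));
```

Bunyan emits newline-delimited JSON, which makes it straightforward to merge records from many containers into one time line keyed on req_id.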
41. Debugging implicit, non-fatal failure
• Problems that are both implicit and non-fatal represent the most
time-consuming, most difficult problems to debug because the
system must be understood against its will
• Wherever possible make software explicit about failure!
• Where errors are programmatic (and not operational), they
should always induce fatal failure!
• Microservices break at the boundaries: two services each think
that they are operating correctly, but together they’re broken
• Data must be coerced from the system via instrumentation
42. Instrumenting production systems
• Traditionally, software instrumentation was hard-coded and static
(necessitating software restart or — worse — recompile)
• Dynamic system instrumentation was historically limited to system
call table (strace/truss) or packet capture (tcpdump/snoop)
• Effective for some problems, but a poor fit for ad hoc analysis
• In 2003, Sun developed DTrace, a facility for arbitrary, dynamic
instrumentation of production systems that has since been ported
to Mac OS X, FreeBSD, NetBSD and (to a degree) Linux
• DTrace has inspired dynamic instrumentation in other systems
(see @brendangregg’s talk!)
43. Instrumenting Docker containers
• In Docker, instrumentation is a challenge as containers may not
include the tooling necessary to understand the system
• Docker host-based techniques for instrumentation may be
tempting, but they should be considered an anti-pattern!
• DTrace has a privilege model that allows it to be safely (and
usefully) used from within a container
• In Triton, DTrace is available from within every container — one
can “docker exec -it bash” and then debug interactively
44. Instrumenting node.js-based microservices
• We have invested heavily in node.js-based infrastructure to allow
us to meaningfully instrument microservices in production:
• We developed Bunyan, a logging facility for node.js that
includes DTrace support
• Added DTrace support for node.js profiling
• An essential vector for iterative observation: turning up the
logging level on a running microservice!
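One possible way to do this with Bunyan, sketched here as an assumption rather than Joyent's actual mechanism: toggle the logger's level when the process receives SIGUSR2.

```
const bunyan = require('bunyan');

const log = bunyan.createLogger({ name: 'orders', level: 'info' });   // hypothetical service

// Toggle between 'info' and 'debug' when the process receives SIGUSR2, so an
// operator can turn the logging level up (and back down) without a restart:
//   kill -USR2 <pid>
process.on('SIGUSR2', () => {
  const next = log.level() === bunyan.DEBUG ? bunyan.INFO : bunyan.DEBUG;
  log.level(next);
  log.info({ level: bunyan.nameFromLevel[next] }, 'log level changed');
});

setInterval(() => {
  log.debug('verbose detail, visible only after turning the level up');
  log.info('heartbeat');
}, 5000);
```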
45. Debugging microservices in production
• Debugging methodically requires us to shift our thinking — and
learn how to carefully observe the systems we build
• Different types of failures necessitate different techniques:
• Fatal failure is best debugged via postmortem analysis —
which is particularly appropriate in an all-container world
• Non-fatal failure necessitates log analysis and dynamic
instrumentation
• The ability to debug problems in production is essential to
successfully deploy and scale microservices!