James English gave a presentation on tips for preventing and debugging memory leaks in web applications. He began with an overview of his experience developing performant web apps. The presentation covered what causes memory leaks, how to debug them using tools like the Task Manager and Chrome DevTools, and how to measure retained memory after actions to detect leaks. He recommended automating memory measurement over time to monitor for leaks and writing tests to gate performance by ensuring retained memory stays below thresholds. Other areas for monitoring included frame rates and rendering times.
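The talk's core technique — measure retained memory after an action and gate on a threshold — can be sketched in Python with the stdlib `tracemalloc` module. This is an illustrative analog, not the speaker's actual setup (in a browser you would use Chrome DevTools heap snapshots or `performance.measureUserAgentSpecificMemory()`); `leaky_action`, `cleanup`, and `THRESHOLD` are hypothetical names for the sketch.

```python
# Sketch: measure memory retained after an action + cleanup, and gate on it.
import tracemalloc

def retained_after(action, cleanup):
    """Return bytes still allocated after running an action and its cleanup."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    state = action()
    cleanup(state)
    del state  # drop our own reference before measuring
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before

leak = []  # a long-lived reference that keeps objects alive

def leaky_action():
    data = [0] * 100_000
    leak.append(data)  # bug: data stays reachable after "cleanup"
    return data

def cleanup(state):
    pass  # forgets to remove the entry from `leak`

THRESHOLD = 1024  # bytes of retained memory tolerated per action
retained = retained_after(leaky_action, cleanup)
print(retained > THRESHOLD)  # True: the gate fails and flags the leak
```

A non-leaky action (one whose objects become unreachable after cleanup) would report near-zero retained bytes, so the same assertion doubles as the regression test the talk recommends.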
Introduction to Continuous Delivery (BBWorld/DevCon 2013) - Mike McGarr

This document provides an introduction and overview of continuous delivery. It discusses why releases are difficult, and proposes continuous delivery as an alternative approach where software is always in a releasable state and deployments can occur frequently through automation. It covers principles like automating everything and keeping the build and release process fast and reliable. Specific practices discussed include configuration management, continuous integration, testing, deployment pipelines, and deployment automation using tools like version control systems, build servers, and configuration management tools.
The document discusses implementing cloud agnostic continuous quality assurance. It recommends using common tools like source control (Gerrit), build automation (Jenkins), code review, and code quality analysis (SonarQube) to ensure quality and allow moving projects between cloud providers. These tools were demonstrated working together using Docker containers to provide continuous quality assurance in a cloud agnostic manner. The document emphasizes automating as much as possible and having processes to enforce coding standards and measure quality.
Today’s cutting edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share best practices (including ones followed internally at Amazon) and how you can bring them to your company by using open source and AWS services.
Speaker: Raghuraman Balachandran, Solutions Architect, Amazon India
One of the cornerstones of Agile development is fast feedback. For engineering, "fast" means "instantly" or "in 5 minutes", not "tomorrow" or "this week". Your engineering practices should ensure that you can answer yes to most of the following questions:
- Do we get all test results in less than 5 minutes after a commit?
- Is our code coverage more than 75% for both front-end and back-end?
- Can we start exploratory testing in less than 15 minutes after a commit?
- Do all our tests pass on more than 90% of our commits?
This talk will give you practical advice on how to get to "yes, we get fast feedback".
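The four questions above are effectively numeric thresholds, so they can be checked mechanically. A minimal sketch, with hypothetical metric values standing in for numbers a CI pipeline would report:

```python
# Sketch: turn the four fast-feedback questions into a pass/fail gate.
# The metric values here are hypothetical placeholders for real CI data.
checks = {
    "test_results_seconds": (240, lambda v: v < 300),       # all results < 5 min
    "frontend_coverage_pct": (81, lambda v: v > 75),
    "backend_coverage_pct": (78, lambda v: v > 75),
    "exploratory_ready_seconds": (600, lambda v: v < 900),  # env ready < 15 min
    "green_commit_pct": (93, lambda v: v > 90),             # pass > 90% of commits
}

failures = [name for name, (value, ok) in checks.items() if not ok(value)]
print("fast feedback:", "yes" if not failures else f"no, fix {failures}")
# → fast feedback: yes
```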
From Zero to Performance Hero in Minutes - Agile Testing Days 2014, Potsdam - Andreas Grabner
As a tester you need to level up. You can do more than functional verification or reporting response times.
In my Performance Clinic workshops I show you real-life examples of why applications fail and what you can do to find these problems when you are testing these applications.
I use free tools for all of these exercises - especially Dynatrace, which gives full end-to-end visibility (browser to database). You can download and try Dynatrace for free @ http://bit.ly/atd2014challenge
GitHub Copilot and tools that help us code better are cool. But I'm lucky if I spend 90 minutes a day writing code. We really need to optimize the hours we spend reviewing code, updating tickets, and tracing where our code is deployed. Learn how I save an hour a day by streamlining non-coding tasks.
This talk is unique because 99% of developer productivity tools and hacks are about coding faster, better, smarter. And yet the vast majority of our time is spent doing all of this other stuff. After I started focusing on optimizing the 10 hours I spend every day on non-coding tasks, I found my productivity went up and my frustration at annoying stuff went way down. I cover how to save time by reducing cognitive load and by cutting menial, non-coding tasks that we have to perform 10-50 times every day. For example:
A bug or hotfix comes through and you want to start working on it right away, so you create a branch and start fixing. What you don't do is create a Jira ticket, and later your boss/PM/CSM yells at you due to lack of visibility. I share how I automated ticket creation in Slack by correlating GitHub to Jira.
You have 20 minutes until your next meeting, so you open a pull request and start a review. But you get pulled away halfway through, and when you come back the next day you've forgotten everything and have to start over. Huge waste of time. I share an ML job I wrote that tells me how long a review will take so I can pick PRs that fit the amount of time I have.
You build it. You ship it. You own it. Great. But after I merge my code I never know where it actually is. Did the CI job fail? Is it released under a feature flag? Did it just go GA to everyone? I share a bot I wrote that tells me where my code is in the pipeline after it leaves my hands, so I can actually take full ownership without spending tons of time figuring out which code is in which release.
This document discusses continuous delivery, which aims to build, test, and release software faster through frequent integration and deployment. The goals are quality, speed, and reducing the time it takes to deploy changes from development to production through practices like test-driven development, continuous integration, automated testing, and deployment pipelines. It provides an overview of tools to support continuous delivery processes.
DevOps: Sprinkle Dev, Sprinkle Ops, Let's Make Cake, Not Mud Pies - Centric Consulting
Brian Paulsmeyer, a Sr. Architect at Centric St. Louis, spoke about DevOps on September 29th at Agile Gravy Conference in St. Louis. Here's his presentation, which starts with Agile development pitfalls that plague teams, moves into the actual capabilities that a team requires to be successful, and finally describes concrete implementations to achieve “Done Means Done” development.
The benefits of using an APM solution while performance testing - DevOpsGroup
The benefits of using an APM solution while performance testing or "why load testing without APM is like Corona without the lime...".
The deck covers a brief overview of APM, the market & major players, and 4 key benefits from using APM tools during your performance testing cycle.
Client-side production monitoring using SyncApp Tool - Bhupesh Pant
The document discusses the need for production monitoring of web applications. It outlines how production monitoring can help detect unexpected issues during deployments and on non-deployment days. It then describes how a proposed monitoring tool would allow for server-side and client-side monitoring, measuring page load times, application uptime, and network events. The tool would provide a graphical dashboard for monitoring applications directly from the browser without additional clients.
OWASP DefectDojo - Open Source Security Sanity - Matt Tesauro
Originally given at the project showcase at Global AppSec DC 2019, this talk covered what DefectDojo is, what's new and why you should be using it in your security program.
The document discusses software testing and how to prevent defects. It recommends implementing various types of tests at different stages, including unit tests, integration tests, UI tests, system tests, and manual tests. The faster a test can run, the more often it should be run. Tests should run in parallel and be distributed to improve efficiency. Flaky tests waste time and hurt trust in the test suite, so they must be addressed promptly. Writing automated tests of various granularities helps enable fast development cycles and prevents regressions.
Continuous Load Testing with CloudTest and Jenkins - SOASTA
Two key challenges to continuous load testing are provisioning a test system to handle the load and accessing load generators to drive the traffic.
In this webinar from SOASTA & CloudBees, you will learn how to:
Build realistic automated web performance tests and run them in Jenkins
Architect and launch a test environment that auto-provisions in the cloud
Manage a load generation grid to drive load tests in a lights-out mode
Establish a performance baseline in your daily Jenkins reports
Break Up the Monolith: Testing Microservices by Marcus Merrell - Sauce Labs
This document discusses testing strategies for microservices architectures compared to monolithic architectures. Some key points:
- Testing microservices is more complex due to distributed and asynchronous nature which requires mocking services and generating test data. It also requires coordinating releases across teams.
- During planning and development, testers should understand architecture decisions, service contracts, and help develop automated integration tests. They should identify risks and set expectations.
- The definition of done should focus on user outcomes, not just development tasks. It is important for testers to provide input on the definition of done.
- Migrating a monolith to microservices gradually while maintaining backwards compatibility and fallbacks is challenging but can be done through careful planning.
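The "mock the downstream service" point from the bullets above can be sketched with the stdlib `unittest.mock`: the consumer is tested against a stand-in that honours the service contract rather than a live dependency. The inventory-service contract and the `can_order` function here are hypothetical examples, not from the talk.

```python
# Sketch: test a consumer against a mocked downstream microservice.
from unittest.mock import Mock

# Hypothetical contract: inventory service returns {"sku": ..., "in_stock": bool}
inventory = Mock()
inventory.lookup.return_value = {"sku": "A-1", "in_stock": True}

def can_order(service, sku: str) -> bool:
    """Consumer logic under test; only depends on the contract, not the service."""
    item = service.lookup(sku)
    return item["in_stock"]

print(can_order(inventory, "A-1"))  # True
inventory.lookup.assert_called_once_with("A-1")  # contract was exercised as expected
```

Contract-level mocks like this only stay honest if the contract itself is verified against the real provider, which is where the contract-testing approach mentioned in the next abstract comes in.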
How Vanguard Got to a CD-CD World by Craig Schwarzwald - Sauce Labs
In this SauceCon 2019 presentation, Craig Schwarzwald discusses the main phases Vanguard software testing has undergone over the last 10+ years, from thousands of manual tests, to thousands of automated tests with Selenium, to Shifting Left, and to now focusing on Contract Testing. He discusses what Contract Testing is and talks about its importance in the CI/CD pipeline (giving the team virtually E2E coverage at the speed of Unit tests).
Matt Callanan takes the 15 chapters of the famous "Continuous Delivery" book by Jez Humble & Dave Farley and distills it down into 1 hour of convincing arguments, walking through the pieces involved to make it happen including cultural challenges, automated testing, automated deployment & deployment pipelines. Not sure how to get started with DevOps? Finding it hard to convince colleagues & managers that CD is the way forward? Matt has used this presentation to help facilitate enterprise-wide adoption of Continuous Delivery. Slides from a presentation given at DevOps Brisbane March 2014.
The document discusses advanced deployment strategies including canary releases, deployment rings, and dark launching. It defines canary releases as deploying a new version to a subset of infrastructure initially without routing live traffic to it. Benefits include reducing risk and allowing capacity testing in production. The document reviews how to implement canary releases by routing a percentage of users to the new version while monitoring for issues before routing all users. It also discusses using deployment rings to gradually rollout changes and limit impact, as well as dark launching where new code is executed silently before a full launch.
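The "route a percentage of users to the new version" mechanic described above is commonly implemented by hashing a stable user identifier into buckets, so each user is routed consistently and widening the rollout never kicks existing canary users back out. A minimal sketch (the bucketing scheme is a common pattern, not necessarily the one in the slides):

```python
# Sketch: deterministic percentage-based canary routing via stable hashing.
import hashlib

def in_canary(user_id: str, rollout_pct: int) -> bool:
    """Place user_id in one of 100 stable buckets; route low buckets to canary."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return bucket < rollout_pct

# Widening the rollout keeps earlier canary users in the canary (no flapping):
users = [f"user-{i}" for i in range(1000)]
at_5 = {u for u in users if in_canary(u, 5)}
at_20 = {u for u in users if in_canary(u, 20)}
print(at_5 <= at_20)  # True: every 5% user is still routed at 20%
```

Using a cryptographic hash rather than Python's built-in `hash()` keeps buckets stable across processes and restarts, which matters when routing decisions are made by many load-balancer instances.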
The document discusses load testing best practices for peak seasons. It recommends measuring site performance now, prioritizing issues, and optimizing the site. Key things to test include popular session paths and peak load times. Testing should start early and continue through development, staging, and production. Automated testing allows for continuous testing. The goal is to measure, optimize, and repeat testing to ensure peak performance.
This document summarizes a presentation about advanced deployment strategies including canary releases, deployment rings, and dark launches. The presentation covers:
- How canary releases work by deploying a new version to a subset of infrastructure initially before gradually routing more users to it while monitoring for issues
- Key considerations for canary releases like ensuring a consistent user experience and having a rollback path
- How deployment rings limit impact on users by gradually deploying and validating changes in production rings
- Dark launches where new code is executed silently before a full launch to test infrastructure changes before high traffic
Outsmarting Merge Edge Cases in Component Based Design - Perforce
This document discusses edge cases and challenges that can occur when merging code changes between component-based software development streams. It outlines several types of complex merge scenarios, such as renames that cross stream views and "shadowed deletes" not caught by integration tools. The key lessons are to consider the big picture problem rather than symptoms, have a simple managed workflow, and continuously test upgrades. An ideal solution would involve source control at the file object level rather than filenames to more easily handle renames and component changes.
Automating The New York Times Crossword by Phil Wells - Sauce Labs
The New York Times crossword grid is made up of hundreds of individual web elements. Automating game logic via the puzzle interface is a daunting technical (and logical) task. Find out how the New York Times Games team uses Webdriver.io, cheerio.js, event listeners, and Sauce Labs to deliver quality crosswords while continuously improving.
All the fundamental concepts and tools for understanding performance tuning in Java. Garbage collection, memory management and collector types and tools for profiling Java applications.
Improving the Performance of Rails Web Applications - John McCaffrey
This presentation is the first in a series on Improving Rails application performance. This session covers the basic motivations and goals for improving performance, the best way to approach a performance assessment, and a review of the tools and techniques that will yield the best results. Tools covered include: Firebug, yslow, page speed, speed tracer, dom monster, request log analyzer, oink, rack bug, new relic rpm, rails metrics, showslow.org, msfast, webpagetest.org and gtmetrix.org.
The upcoming sessions will focus on:
Improving sql queries, and active record use
Improving general rails/ruby code
Improving the front-end
And a final presentation will cover how to be a more efficient and effective developer!
This series will be compressed into a best of session for the 2010 http://windycityRails.org conference
Performance is a key aspect when developing an application, but for developers, production performance usually is a black box. When production problems arise, a lack of insight into log files and performance metrics forces us to reproduce issues locally before we can start to tackle the root cause. Using real world examples, we show how a unified performance management platform helps teams across the lifecycle to monitor applications, detect problems early on, and collect data that enables developers to efficiently solve problems.
ATAGTR2017 Unified APM: The new age performance monitoring for production systems - Agile Testing Alliance
The presentation on Unified APM: The new age performance monitoring for production systems was given during #ATAGTR2017, one of the largest global testing conferences. All copyright belongs to the author.
Author and presenter : Kaushik Raghavan
Memory leaks in Java can occur due to objects remaining reachable even when no longer needed. The four main causes are unknown references, long-living objects, failure to clean up native resources, and bugs. To detect leaks, one can use verbose GC logging, monitor the Java process, dump the heap to analyze which objects are retaining others, and use profiling tools. Profiling works by insertion of code, sampling, or instrumenting the virtual machine and helps identify where time is being spent and what objects are being allocated.
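The "long-living object holds a reference" cause from the summary can be sketched in Python: a process-lifetime cache keeps entries reachable forever unless weak references are used. This is an illustrative analog of the Java situation (where the fix would be e.g. `WeakHashMap`, or analyzing a heap dump for unexpected retainers); `Session` and the cache names are hypothetical.

```python
# Sketch: a long-lived cache as a leak source, and weak references as the fix.
import gc
import weakref

class Session:
    pass

strong_cache = {}                           # long-living object: classic leak
weak_cache = weakref.WeakValueDictionary()  # lets entries be collected

s1, s2 = Session(), Session()
strong_cache["a"] = s1
weak_cache["b"] = s2

del s1, s2      # drop the only external references
gc.collect()    # force a collection so the difference is visible immediately

print("a" in strong_cache)  # True: still retained by the cache, a leak
print("b" in weak_cache)    # False: collected once no strong refs remain
```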
Server-side JavaScript (SSJS) is gaining popularity due to factors like the rise of NoSQL databases, asynchronous programming, and JavaScript's ubiquity. SSJS environments like Node.js, CommonJS, and AppEngineJS allow developers to use JavaScript beyond the browser by running it on the server. Google App Engine also provides a platform for hosting SSJS applications and automatically scaling them.
This document discusses Browserscope, an open-source project that crowdsources browser testing to profile browser capabilities and performance. It collects data from over 10,000 browsers to track functionality over time, uncover regressions, and provide a historical resource for web developers. Tests are run on real users' browsers to aggregate unbiased results without dedicated testing resources.
The document discusses shifting performance testing left in the development process. It argues that with increased software complexity, testing needs to start earlier to avoid delays. Single user performance testing can be run by developers as part of their normal testing to gain immediate feedback. This involves measuring responsiveness, network traffic, and device vitals under different conditions. While load testing still has value, splitting it up and combining it with functional and responsiveness testing allows more testing to be done earlier in development.
Isomorphic React Applications: Performance And ScalabilityDenis Izmaylov
Isomorphic React applications allow code to run on both the client and server, improving performance and scalability. Server-side rendering builds HTML on the server so the page loads immediately before JavaScript loads. This improves the user experience but requires loading data asynchronously. Caching pre-rendered components and separating rendering can improve performance. Progressive rendering and Facebook's BigPipe technique can further optimize loading. Performance can also be improved through code optimizations like using direct React files and load balancing with multiple server instances.
This is episode 4 of the building the perfect PHP app for the enterprise webinar series. Nothing is faster than a frustrated user clicking away from your site or abandoning your service. Avoid attrition by learning how to tune your applications towards lightning-fast page loads and response times. Learn: the basic principles behind enterprise PHP performance management; How to optimize workloads through background jobs and caching; How to measure performance and make data-driven decisions.
The document discusses testing and assessing performance in managed code for embedded systems. It begins by explaining what managed code is and how it automatically handles memory allocation and deallocation. It then discusses how memory usage affects performance and the importance of testing managed code applications for memory usage. A variety of tools for performance profiling and memory analysis are presented, with caveats about needing to build test harnesses to use the tools for embedded systems. The presentation recommends understanding how managed code works and using tools to examine performance and memory usage to avoid issues from memory misuse in embedded applications.
This document describes a performance automation solution using load testing scripts to continuously monitor application performance. The solution uses scripts to test functionality, availability, response times, and end-to-end workflows. Load testing engines run the scripts on a periodic schedule and store results. An alerting system analyzes results and sends alerts if response times exceed thresholds or tests fail to run. The system is containerized using Docker for scalability. Potential customers include project managers who need regression testing, monitoring of production applications, and emergency alerts about degradations or failures.
Optimus XPages: An Explosion of Techniques and Best PracticesTeamstudio
Are you starting a new XPages project, but not sure it’s going to be done right the first time? Do you have an existing application that doesn’t seem to have that “X” Factor? In this webinar, John Jardin demonstrates how XPages developers can apply proven techniques and best practices to take their applications to a game-changing level.
You'll learn how to:
-Rapidly develop responsive applications,
-Improve user experience and response times with background and multi-threaded operations,
-Keep your XPages lightweight with code injection,
-Create scheduled tasks the XPages way,
-And much more.
The Autobahn Has No Speed Limit - Your XPages Shouldn't Either!Teamstudio
Using XPages out of the box lets you build good-looking and well-performing applications. As XPage applications become bigger and more complex, performance can become an issue. There are several ways to improve scalability and performance that you should take into consideration. In this webinar, learn how to use partial refresh and partial execution mode and how to monitor its execution using a JSFLifeCycle monitor to avoid multiple re-calculations. See how readily available tools from OpenNTF will allow you to profile and analyze your code to improve the speed of your applications. Using Server Side Java Script and encountering a significant slow down when using Script Libraries? Learn how you can improve the speed of your application using JAVA instead of JavaScript, JSON and even @formulas.
Rundeck is an open source automation tool that allows users to break processes down into reusable workflows called jobs. It provides a central platform for visibility of operations tasks and enables teams to easily share tasks. Rundeck aims to connect disparate tools and resources through its APIs. The document discusses how Rundeck is used in different organizations for tasks like continuous delivery, data processing, test environment provisioning, and more. It provides demonstrations of Rundeck's job scheduling capabilities and plugin ecosystem. The document outlines Rundeck's system architecture and roadmap and encourages users to get involved through discussions, writing plugins, or sponsoring features.
First part of the webinar on Dismantling Wordpress Performance Bottelnecks with Tideways organized by Seravo.
* https://tideways.com/profiler/blog/webinar-on-tideways-and-wordpress-with-our-hosting-partner-seravo
* https://seravo.com/blog/dismantle-wordpress-performance-bottlenecks-with-tideways/
Observability in Java: Getting Started with OpenTelemetryDevOps.com
Our software is more complex than ever: applications must be reliable, predictable, and easy to use to meet modern expectations. As developers, this means our responsibilities have grown while the things we can control have stayed the same. In order to better understand our systems and create truly modern software, we need observability.
This workshop will walk through what observability means for Java developers and how to achieve it in our systems with the least amount of work using the open source observability project OpenTelemetry.
The document discusses the importance of performance and load testing web applications. It defines performance as how fast, robust, and resource-effective a system is. Load testing involves stressing an application with simulated user load to determine its capacity and stability under heavy usage. The document outlines best practices for load testing methodology, tooling, and interpreting results to optimize performance. It promotes using an asynchronous, non-blocking tool like Gatling for efficient, maintainable load testing that provides meaningful reports.
What is cool with Domino V10, Proton and Node.JS, and why would I use it in ...Heiko Voigt
This document discusses using Node.js, React, and Express with Domino V10. It provides an overview of a demo that uses these technologies to build a survey application with a real-time dashboard. The demo includes a Notes/iPad app for surveys, a React frontend, a Node.js/Express REST API, and a Node.js/Socket.io real-time backend. It discusses the benefits of this approach, including scalability, flexibility, and reusability. It also provides recommendations for tooling and resources for learning more.
15. Why do leaks matter?
• Low end devices
• Degrades performance of application over time
• May impact your heaviest users the most
• Reputation can be affected
24. Measuring memory usage
• Visit application URL in Chrome
--expose-gc
• Perform garbage collection
• Take initial memory snapshot
window.performance.memory.usedJSHeapSize
• Perform action a set number of times
• Perform garbage collection
• Take another snapshot and subtract initial snapshot
31. Takeaways
• Understanding memory leaks
• What a memory leak is
• Debugging
• Measuring retained memory to get visibility
• Automate executing a UI action and measuring the memory retained.
• Storing over time to monitor any memory leaks.
• Preventing memory leaks by gating
• Writing tests to assert retained memory is below a given threshold.
32. Other monitoring
• Frame rate – ticking prices
• Rendering times – components
Hi everyone,
I am James. I work for a fintech trading company called IG, on the trading platforms architecture team, focusing on building performant web applications.
I have worked commercially with web technologies for more than 15 years and 7 of those years in my current role at IG.
Today I am going to talk about our experiences in ensuring quality in our web applications.
But first who are IG?
We are the world’s No.1 spread betting and CFD provider, giving retail investors leveraged access to over 15,000 financial markets
We were established 45 years ago in 1974 and currently have 3 developer hubs here in Bangalore(India), Krakow (Poland) and London (UK); working on both collaborative and autonomous projects.
IG supports a wide variety of devices including Android, Android Tablet, iPhone, iPad and Web.
We have up to 1.5 million trades a day
Over 11 million executions a month
And most orders are filled within 10 milliseconds
Today I am going to focus on our web trading platform.
So now you are familiar with IG and what challenges we face. What will we be discussing today?
I want to share with you the tips and lessons we have learned from developing our web trading platforms.
Firstly I will give an overview of our web platform and what kind of functionality it has.
Then I will do a deep dive into the challenges we faced around memory leaks that will cover
What memory leaks are
Debugging techniques
How to measure retained memory
Then how to prevent memory leaks getting into production
And finally a demo
I will then cover how these techniques could be used in other areas
So during my 7 years at IG we began building our newest web trading offering, which is a single page application that shows real-time financial data and has users with high performance expectations, who can have the application open for long periods of time.
If the application crashes or runs slowly, it could cost our users money. This is why performance is paramount at IG.
Our application offers a desktop-operating-system-like experience, with users opening “panels” of content.
These panels of content could be used by a client to browse markets for an opportunity to trade while viewing the live prices in our watchlist panel.
Placing a trade by picking a direction; “buy” if they think the market will rise or “sell” if they think the price will fall.
Or managing their current portfolio of trades via the positions panel.
We have had around 40 developers work on the codebase in an agile environment, releasing the product intraday through a continuous delivery pipeline with post-release checks on critical-path health, such as placing a trade and logging in.
Developing this application brought about some challenges and when we released the application to a Beta audience back in 2017; we got some reports from users of the application slowing down over time and even crashing. As many of you may know this can be a warning sign your application has some memory leaks.
This was a blow to the developers and after fixing the existing memory leaks we wanted to come up with a way of preventing them in the future.
OK, let’s have a quick recap of how memory works in JavaScript.
Like many other programming languages, JavaScript has the following memory lifecycle…
Allocation – where memory is allocated by the operating system. JavaScript is a high-level language, so unlike lower-level languages this operation isn’t explicit.
Usage – where the program uses the memory, performing read/write operations such as assigning a variable.
Release – where the allocated memory is no longer needed and is released. Again, this is not explicit.
Memory is collected via the mark and sweep method.
Root nodes are identified by the garbage collector, such as window in the browser environment or global in the Node environment.
Then all child nodes are inspected and marked as active or inactive
The inactive ones are then cleaned up and that memory is then released.
You can see this in the diagram the active nodes are being marked as green and the inactive ones left as blue.
The inactive nodes are then collected when garbage collection takes place.
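To make the mark and sweep idea concrete, here is a toy version over an explicit object graph. This is purely illustrative: real engines work on the VM heap, not plain objects, and the names here are invented for the sketch.

```javascript
// Toy mark-and-sweep over an explicit object graph (illustrative only;
// a real garbage collector operates on the VM heap, not plain objects).
function markAndSweep(roots, heap) {
  const marked = new Set();
  const stack = [...roots];
  while (stack.length > 0) {           // mark phase: walk from the roots
    const node = stack.pop();
    if (marked.has(node)) continue;
    marked.add(node);
    stack.push(...(node.refs || []));
  }
  // sweep phase: anything unmarked is unreachable and gets dropped
  return heap.filter((node) => marked.has(node));
}

const a = { refs: [] };
const b = { refs: [a] };
const c = { refs: [] };                // not reachable from the root
const survivors = markAndSweep([b], [a, b, c]);
console.log(survivors.length);         // a and b survive; c is collected
```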
Memory leaks can occur if retaining paths are accidentally left pointing to objects.
We can see in this simple object graph that the array has a retaining path to the DOM node.
The code below shows the cache array being initialized and the DOM node being pushed into it. Then later the DOM node being removed from the document.
However as long as the cache array still has the retaining path it will not be garbage collected.
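A minimal sketch of that pattern, with a plain object standing in for the DOM node so the snippet runs outside the browser (names are illustrative):

```javascript
// The cache array keeps a retaining path to the node even after the
// "document" has dropped it (plain object standing in for a DOM node).
const cache = [];
let node = { tag: 'div' };   // imagine: document.createElement('div')
cache.push(node);            // retaining path: cache -> node

node = null;                 // imagine: the node is removed from the document
// The node is gone from the DOM, but cache[0] still references it,
// so the garbage collector cannot reclaim its memory.
console.log(cache.length);   // 1 - the object is still retained
```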
This is often how leaks are caused; it is worth considering that leaks are a human error, so can nearly always be prevented.
But why do leaks matter though right?
It could be that your users are on low-end devices, which may be hit particularly hard and will not get the best from your application.
Performance can degrade over time which in turn may end up impacting your heaviest, and possibly most valuable users. This is especially true if you are building a single page application like us and cannot rely on the user navigating away; we have users who potentially can be using the application for hours while monitoring their current portfolio or waiting for the right time to trade.
Lastly your reputation with your users; it can degrade trust that you are offering reliable software if it is seen to be crashing or running at a sluggish pace.
While our application may not be the most common kind, a typical routed single page application can still suffer the same kind of issues if event handlers are not cleaned up at the correct lifecycle hooks.
So let’s talk through the steps for debugging a memory leak. I am going to use a simple example: a small application that adds 100 items to a list and then allows you to clear that list.
We would expect an application such as the one just shown, which creates short-lived objects after an action, to show a sawtooth pattern in the performance timeline: the objects are created and hold memory, and when removed that memory is freed.
If an application has a leak it would be steadily increasing and not dropping back down to the baseline.
To debug a leak you could begin by opening up the task manager in Chrome.
The Task Manager allows you to monitor how much memory is being used by a page in real time.
You would then toggle the Memory and JavaScript memory columns.
The Memory column represents native memory. DOM nodes are stored here.
And the JavaScript memory column, which is the JS heap.
As you can see when we use the application, by adding and removing the items, the memory usage is increasing, so there may be a memory leak and it is worth investigating further.
Once you have identified an action you suspect of leaking, you would typically switch to the Performance tab to investigate further.
Then enable the Memory checkbox.
Force a garbage collection to free up any available memory by clicking the trashcan icon.
Then start a recording
Repeat the action a number of times, clearing the garbage after each cycle.
By repeating the action you can see a potential leak appearing on the timeline.
You can see here that the memory usage is going up and being retained after a GC.
Lastly you might head to the Memory tab and record a heap snapshot before the action and one after, for a comparison of the objects created. This allows a much more detailed view of the problem once you have established where it lies.
You can see here some detached HTML elements that should have been cleared up.
After discussions with the team, the performance timeline steps seemed reproducible and we looked to codify them.
Just like a bug you have reproduced that you would then go on to cover with a test case to ensure it does not resurface.
So first we needed the memory usage of the application. This is available via the memory object in the performance API.
This is a global object that gives access to memory usage information, which contains…
jsHeapSizeLimit – the memory the heap is limited to.
totalJSHeapSize – the memory the heap has allocated, including free space for the page.
usedJSHeapSize – the memory currently being used by the page.
This last one is the one we use, as we need the memory currently in use at points in our application.
All of these are returned in bytes.
Next we needed to find a way to clear the garbage programmatically.
So luckily we found Chrome has a way of doing this. It exposes a JS flag (--expose-gc, passed via --js-flags) you can launch Chrome with to enable a global gc() function that forces a garbage collection when invoked.
Once we had these tools available to us we came up with a recipe to try and get visibility on memory leaks by measuring them…
Visit the application in Chrome with the expose GC to enable garbage collection programmatically.
Perform a GC to release any available memory
Take an initial snapshot, using the memory API
Perform an action a number of times to exaggerate the problem; for us this might be launching a “panel”.
Perform another GC to free up any memory
Then take another snapshot of memory and subtract the initial one.
This gives us the retained memory, if any.
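The recipe above can be sketched as follows. This version is adapted to Node so it runs standalone: in the browser you would launch Chrome with --js-flags=--expose-gc and read window.performance.memory.usedJSHeapSize instead of process.memoryUsage(), and the deliberately leaky action here is invented for illustration.

```javascript
// Force a GC if the runtime exposes one (node --expose-gc, or the Chrome
// flag discussed above); otherwise fall back to a no-op so the sketch runs.
const forceGC = globalThis.gc || (() => {});

// Node stand-in for performance.memory.usedJSHeapSize (both are in bytes).
const usedHeapBytes = () => process.memoryUsage().heapUsed;

function measureRetained(action, times = 10) {
  forceGC();
  const before = usedHeapBytes();           // initial snapshot
  for (let i = 0; i < times; i++) action(); // repeat to exaggerate any leak
  forceGC();
  return usedHeapBytes() - before;          // retained memory, if any
}

// Deliberately leaky action: every call grows a module-level cache.
const cache = [];
const retained = measureRetained(() => cache.push(new Array(100000).fill(0)));
console.log(retained > 0);
```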
Here you can see how we went about fitting this into our release process.
Our current continuous delivery flow runs after every commit to master.
Once the testing, coverage and linting gates have all passed it will be deployed to production
Our application is static, so this only involves deploying an html entry point and associated assets.
Once a successful build has taken place we then kick off our performance build step.
This launches Chrome via selenium
Runs through the performance scenarios, like launching one of our panels a number of times
Measures the retained memory
And then stores the results in InfluxDB.
It is worth mentioning these scenarios are run on a box inside our infrastructure, but this could be achieved with a cloud service
Those results stored in InfluxDB are then charted using Grafana to monitor usage over time and get visibility of memory leaks in the wild.
You can see here that on the Y axis we have retained memory in MB and on the X axis time.
We then have a threshold line of 0.25MB.
We allow this threshold due to browser libraries or frameworks often holding onto memory due to caches.
The memory leak on the chart of 0.6MB was introduced in May last year and present for 4 months before being fixed in September and dropping well below the threshold.
The length of time the leak was open shows us how long it can take to fix an issue like this once it is in production and has to be prioritized against other work.
While monitoring memory retention was a big leap from where we were, it was a very reactive way of dealing with the issue and didn’t actually prevent leaks.
So we decided to try and be more proactive and build these checks into our existing test process, therefore allowing us to gate on them.
So here you can see the updated Pipeline.
Kicking off a build after a commit to master remains the same
However this time the scenarios have been converted into tests that run on headless Chrome.
You can see that if the test passes a deployment takes place and the build passes
Or if the build fails, feedback is given back to the developer and the deployment is prevented.
This allowed us to completely integrate memory leak regression testing into our CD pipeline.
So what does a performance test look like?
We can see it is very much the same as the performance scenarios we ran.
We do a garbage collection
We take an initial snapshot
We run a scenario 10 times, rendering a component, then making sure to destroy it afterwards
We then collect the garbage again, take a snapshot and compute the retained memory.
But this time, instead of reporting, we assert the memory retained is below a threshold.
As shown in the previous slide we can now use this to gate whether a build passes or fails.
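The gate itself can be as simple as an assertion against a memory budget. A framework-agnostic sketch, where the 0.25 MB threshold mirrors the allowance from our dashboards and the function name is illustrative:

```javascript
// Fail the build if a scenario retains more than its memory budget.
const THRESHOLD_BYTES = 0.25 * 1024 * 1024; // 0.25 MB allowance for caches

function assertRetainedBelowThreshold(retainedBytes, threshold = THRESHOLD_BYTES) {
  if (retainedBytes > threshold) {
    // In a real test runner this failure would fail the pipeline step.
    throw new Error(
      `Retained ${retainedBytes} bytes, exceeding the ${threshold} byte budget`
    );
  }
  return true;
}

console.log(assertRetainedBelowThreshold(100 * 1024)); // 100 KB is within budget
```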
So to wrap up we have talked about…
Understanding what a memory leak is and how to debug them.
Executing a UI action in a controlled way and comparing usage before and after.
Monitoring over time to get visibility of any memory leaks
Writing tests to assert retained memory is below a given threshold which allows you to gate them
For our application we still use our controlled monitoring method, of launching a browser with selenium and running a performance scenario, for some of our other metrics such as…
Monitoring the frame rate of intensive actions – like our ticking prices.
And rendering times of our components
Again these run after every successful build of master.
Here we have our price ticking dashboard which shows the frame rate of our ticking prices and the threshold we impose of 30 FPS.
We have Frames per second on the Y axis and time on the X axis.
Displaying for Chrome and Firefox. Chrome in green and Firefox in yellow here.
And here is the positions component rendering time in milliseconds and how it performs across Chrome, Firefox and Internet Explorer. Chrome in green, Firefox in orange and Internet Explorer in blue.
We have milliseconds on the Y axis and datetime on the X axis.
These dashboards could both be moved to tests in the same way to prevent regressions in the future; once an acceptable threshold had been agreed with either the business or the developers depending on the area.
So hopefully this helps you understand how at IG we have gained visibility of our performance and prevented performance regressions from being released to production.
For us this approach has been very successful; it has prevented leaks reaching users on quite a few occasions and given the developers confidence in the code they are deploying.
So thanks for your time and for those interested the code for the demo is available on my Github profile in the link shown.