This document discusses optimizing the performance of generated JavaScript by reducing bundle size. It describes analyzing the Sentry JavaScript SDK bundle with the goal of shrinking it by 30%. Techniques covered include removing unused code produced by down-compiling, using more optimizer-friendly constants, simplifying nested object access with try/catch, aliasing object keys, and converting classes to plain objects and functions. These optimizations reduced the minified SDK from 74.47 kB to 52.67 kB, a 29% reduction, while also improving tree-shaking.
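A hedged sketch of three of the size-reduction techniques the summary names — illustrative re-creations, not the SDK's actual source:

```javascript
// Aliasing object keys: a minifier cannot rename `.tags`, but it can
// shorten a local alias to a single character.
function tagEvent(event) {
  const tags = event.tags; // alias instead of repeating `event.tags`
  tags.browser = 'firefox';
  tags.release = '1.0.0';
  return event;
}

// Nested object access via try/catch instead of chained existence checks:
// shorter after minification than `a && a.b && a.b.c`.
function getIn(obj, path) {
  try {
    return path.reduce((o, key) => o[key], obj);
  } catch (e) {
    return undefined; // any missing link in the chain lands here
  }
}

// Class converted to a plain factory of functions: no prototype machinery,
// and unused helpers can be tree-shaken away.
function createScope() {
  const tags = {};
  return {
    setTag: (key, value) => { tags[key] = value; },
    getTag: (key) => tags[key],
  };
}
```

Function and property names here are invented for illustration; the trade-off in each case is a smaller minified form for the same behavior.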
One of the main hindrances to teams responding rapidly to new feature requests is technical debt: technical problems resulting from poor coding practices. Melissa and Brett will cover Agile tools and practices that help development teams write better code and increase maintainability. Topics covered include:
Pair programming
Automated Unit Testing
Refactoring
Test-Driven Development
Agile Architecture
Truemotion Adventures in Containerization (Ryan Hunter)
This document summarizes Ryan Hunter's experience switching his company's infrastructure from using Ansible to provision Debian-based servers to using Docker containers and ECS on AWS. Some key reasons for the switch included dependency issues with Ansible, inflexible server sizing, and a desire for more portable and standardized application builds. Docker provided containers as a flexible runtime artifact while ECS and CloudFormation helped with scheduling, provisioning, and configuring containers at scale on AWS. Monitoring tools like Consul, Sumo Logic, and custom monitoring libraries were also implemented.
The document describes Guvnor, a business rules management system and application that allows users to define business rules, workflows, and knowledge models. It can be accessed through a web browser or REST API. Guvnor allows domain experts to define and manage changing business rules without relying on programmers. It provides tools for authoring rules through a web editor, decision tables, or DSL. Guvnor also supports testing rules, analyzing them for errors, versioning and releasing packages of rules and assets, and integrating the managed knowledge through a REST API or knowledge agent.
DevOps Days Boston 2017: Real-world Kubernetes for DevOps (Ambassador Labs)
DevOps Days Boston 2017
Microservices is an increasingly popular approach to building cloud-native applications. Dozens of new technologies that streamline microservices development, such as Docker, Kubernetes, and Envoy, have been released over the past few years. But how do you actually use these technologies together to develop, deploy, and run microservices?
In this presentation, we’ll cover the nuances of deploying containerized applications on Kubernetes, including creating a Kubernetes manifest, debugging and logging, and how to build an automated continuous deployment pipeline. Then, we’ll do a brief tour of some of the advanced concepts related to microservices, including service mesh, canary deployments, resilience, and security.
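A hedged sketch of the canary-deployment idea mentioned above: route a fixed fraction of traffic to a new version. Real service meshes (Envoy, for example) do this at the proxy layer; the weights, names, and injectable random source here are illustrative.

```javascript
// Returns a routing function that sends roughly `canaryWeight` of requests
// to the canary. `random` is injectable so the behavior is testable.
function makeCanaryRouter(canaryWeight, random = Math.random) {
  if (canaryWeight < 0 || canaryWeight > 1) {
    throw new RangeError('canaryWeight must be in [0, 1]');
  }
  return () => (random() < canaryWeight ? 'canary' : 'stable');
}

// Deterministic demo: with a fixed draw of 0.05 and a 10% canary weight,
// the request goes to the canary because 0.05 < 0.1.
const route = makeCanaryRouter(0.1, () => 0.05);
route(); // → 'canary'
```

In a real rollout the weight would be raised gradually while error rates on the canary are monitored.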
Phone:+91 970 442 9989 (WhatsApp Also)
Email: info@spiritsofts.com
A DevOps certification training course is designed to teach individuals and organizations how to implement DevOps practices and principles for software development and delivery. The course typically covers topics such as continuous integration, continuous delivery, infrastructure as code, and automated testing.
The course is usually delivered through a combination of instructor-led training, hands-on exercises, and online resources. Participants will learn how to use tools and techniques to automate software development and delivery processes, improve collaboration between development and operations teams, and increase the speed and quality of software delivery.
The course may also cover advanced topics such as containerization, microservices architecture, and DevOps culture and mindset. Upon completion of the course, participants should have a comprehensive understanding of DevOps principles and practices and be able to implement them effectively in their own organizations. They may also receive a certification credential from a recognized DevOps certification authority, such as DevOps Institute or the DevOps Agile Skills Association (DASA).
Enroll in online DevOps training classes and learn DevOps from certified experts online. Attend the free demo and you will find that Spiritsofts is the best online training institute at a reasonable cost.
DevOps is a set of practices and methodologies that emphasize collaboration and communication between development and operations teams to enable continuous delivery and faster deployment of software applications. A DevOps certification training course can help you gain the skills and knowledge needed to succeed as a DevOps professional.
A good DevOps certification training course should cover the following topics:
DevOps Fundamentals: This topic covers the basics of DevOps, including its principles, practices, and benefits.
Continuous Integration and Delivery: This topic covers how to use tools and techniques to automate the building, testing, and deployment of software applications.
Cloud Infrastructure: This topic covers how to set up and manage cloud infrastructure using tools like AWS, Azure, and Google Cloud.
Configuration Management: This topic covers how to manage and automate the configuration of infrastructure and software using tools like Ansible, Chef, and Puppet.
Containerization and Orchestration: This topic covers how to use tools like Docker and Kubernetes to containerize and orchestrate software applications.
Monitoring and Logging: This topic covers how to monitor and analyze system and application logs to identify and resolve issues.
The best DevOps certification training course should provide you with hands-on experience with DevOps tools and techniques and should also offer support and guidance from experienced DevOps professionals. It should be interactive and engaging, with plenty of exercises, quizzes, and projects to help you apply what you learn. Finally, it should be flexible and affordable, allowing you to learn at your own pace and within your budget.
There are many online training platforms that offer DevOps certification courses, such as Udemy, Coursera, and LinkedIn Learning. It's important to choose a course that is up-to-date with the latest DevOps tools and practices and is taught by experienced DevOps professionals. You can check the ratings and reviews of the courses before purchasing them to ensure you find the best course for your needs.
DevOps online training provides you with in-depth practical knowledge of DevOps tools such as Git, Jenkins, Docker, Vagrant, New Relic, the ELK stack, Ansible, Puppet, Nagios, and Kubernetes, and helps you gain practical knowledge of the different aspects of continuous development, continuous integration, continuous testing, and continuous deployment.
Spiritsofts is among the best training institutes for expanding your skills and knowledge. We provide a strong learning environment: all training is delivered by expert professionals with working experience at top IT companies, and everything is explained through real-time scenarios as practiced in those companies.
This document discusses best practices for application architecture, including:
- Using inversion of control and dependency injection to create loosely coupled and testable code
- Applying the single responsibility principle to have focused classes that are easier to maintain
- Employing patterns like extract method and execute around method to reduce code duplication
- Structuring applications using architectures like onion architecture or n-tier to organize code clearly
- Logging executed use cases/events for replay testing before releases to check for bugs
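Two of the patterns in the list above can be sketched briefly — execute around method and constructor-based dependency injection. The connection object and class names are stand-ins, not a real driver or framework API:

```javascript
// Execute around method: setup and teardown live in one place; callers
// supply only the varying middle step, so the open/close pair is never
// duplicated or forgotten.
function withConnection(work) {
  const conn = { open: true, queries: [] }; // stand-in for a real open()
  try {
    return work(conn);
  } finally {
    conn.open = false; // teardown always runs, even if `work` throws
  }
}

// Constructor injection keeps the class loosely coupled and testable:
// any object with a `log` method can be swapped in.
class OrderService {
  constructor(logger) {
    this.logger = logger;
  }
  place(order) {
    this.logger.log(`placing ${order.id}`);
    return { ...order, status: 'placed' };
  }
}
```

In a test, `OrderService` can be constructed with a fake logger that records messages into an array, with no real logging infrastructure involved.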
Here are some tips for breaking down work in an agile way:
- Focus on delivering value to users. Each story and task should provide some value.
- Iterate frequently. Stories and tasks should be small enough that you can complete and release them within a sprint or two.
- Get early feedback. Small slices allow testing work sooner and adjusting based on feedback.
- Prioritize flexibility. Small slices give you options to reorder or drop work as priorities change.
- Estimate costs accurately. Tasks should take 1-5 days; if longer, they may need to be broken down. Consider spikes for technical challenges.
- Refactor when repetitive. If pieces of work are very similar, look for ways to simplify them through refactoring.
The presentation on the Protractor Cucumber BDD approach was delivered at #ATAGTR2017, one of the largest global testing conferences. All copyright belongs to the author.
Author and presenter : Rajat Acharya
Test driven development - Zombie proof your code (Pascal Larocque)
This document discusses test driven development and how to write testable code. It recommends writing tests before writing code to prevent "zombie code" that is hard to maintain and change. Specific tips provided include using dependency injection, following SOLID principles to separate concerns, and writing fast, isolated tests using tools like PHPUnit and PHPSpec. Continuous integration is also recommended to prevent technical debt from accumulating.
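The "fast, isolated tests" advice can be made concrete with a test double. The talk's examples use PHPUnit and PHPSpec; this is a language-neutral version of the same idea, with an invented rate limiter and an injected fake clock so the test never has to sleep:

```javascript
// Time-dependent logic with the clock injected as a function, so tests can
// control time instead of waiting for it.
function makeRateLimiter(maxPerSecond, now = () => Date.now()) {
  let windowStart = now();
  let count = 0;
  return function allow() {
    const t = now();
    if (t - windowStart >= 1000) { // new one-second window
      windowStart = t;
      count = 0;
    }
    return ++count <= maxPerSecond;
  };
}

// Test double: a controllable clock makes the behavior deterministic.
let fakeTime = 0;
const allow = makeRateLimiter(2, () => fakeTime);
// allow() → true, allow() → true, allow() → false within the same second;
// after setting fakeTime = 1000, allow() → true again.
```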
What do you do when 60,000 jobs arrive in the blink of an eye? In the machine learning world it is normal to process a huge load of jobs that arrive almost instantly. We will walk you through our journey of scaling out a Kubernetes cluster to handle them: the tools we used, load testing, how we measured it, and our solution.
Lightning talks on best practices for product and engineering teams to experiment everywhere in their applications.
Originally given at Optimizely's conference: Opticon on October 17th, 2017.
This document outlines an API design and management workshop presented by Matthew McClean, Nicolas Grenié, and Manfred. The workshop covers API design best practices, AWS services like Amazon API Gateway and AWS Lambda, and 3scale API management. It includes sections on API design principles, AWS integration, and a customer case study of Rosette's API implementation with 3scale and AWS. Attendees will build and deploy an API using Amazon API Gateway, AWS Lambda, and 3scale for additional management features.
The document discusses LinkedIn's adoption of the Dust templating language in 2011. Some key points:
- LinkedIn needed a unified view layer as different teams were using different templating technologies like JSP, GSP, ERB.
- They evaluated 26 templating options and selected Dust as it best met their criteria like performance, i18n support, and being logic-less.
- Dust templates are compiled to JavaScript for client-side rendering and to Java for server-side rendering (SSR) through Google's V8 engine, allowing templates to work on both client and server.
- SSR addresses challenges like SEO, supporting clients without JavaScript, and i18n by rendering templates on the server.
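A toy interpolation function illustrates the logic-less idea behind engines like Dust: templates hold no logic, only placeholders filled from a context object. This is a hypothetical mini-renderer, not Dust's actual API.

```javascript
// Replace `{name}`-style placeholders with values from `context`;
// unknown keys are left untouched rather than treated as errors.
function render(template, context) {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in context ? String(context[key]) : match
  );
}

render('Hello {name}, you have {count} invites', { name: 'Ada', count: 3 });
// → 'Hello Ada, you have 3 invites'
```

Because the function touches no DOM or server APIs, the same template string can be rendered in the browser or on the server, which is the portability property the slides highlight.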
This document introduces CloudBridge, a Python library that provides a simple, uniform API for interacting with multiple cloud providers. It aims to allow users to write code once that can run on any supported cloud without specialization for individual providers. CloudBridge focuses on mature cloud APIs and offers a set of conformance tests to ensure compatibility without needing separate testing for each provider. The document outlines the goals, design, and features of CloudBridge, and provides code samples for setting up a provider and launching an instance using the uniform API.
TypeScript and Angular2 (Love at first sight), by Igor Talevski
“We love TypeScript for many things… With TypeScript, several of our team members have said things like ‘I now actually understand most of our own code!’ because they can easily traverse it and understand relationships much better. And we’ve found several bugs via TypeScript’s checks.”
– Brad Green, Engineering Director - AngularJS
Angular JS Institute: NBITS is the best AngularJS online/classroom training institute in Hyderabad. We provide training from the best real-time industry experts in Angular 2, Angular 4, Angular 5, Node.js, and MEAN stack courses, both online and in the classroom with lab facilities.
Ultimate Guide to Microservice Architecture on Kubernetes (kloia)
This document provides an overview of microservice architecture on Kubernetes. It discusses:
1. Benefits of microservice architecture like independent deployability and scalability compared to monolithic applications.
2. Best practices for microservices including RESTful design, distributed configuration, client code generation, and API gateways.
3. Tools for microservices on Kubernetes including Prometheus for monitoring, Elasticsearch (ELK) stack for logging, service meshes, and event sourcing with CQRS.
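The event-sourcing idea mentioned in point 3 can be sketched in a few lines: state is never mutated directly, but rebuilt by folding an append-only event log. The event names are invented for illustration.

```javascript
// Rebuild account state by replaying the event log from scratch.
function replay(events) {
  return events.reduce((state, event) => {
    switch (event.type) {
      case 'Deposited': return { balance: state.balance + event.amount };
      case 'Withdrawn': return { balance: state.balance - event.amount };
      default:          return state; // unknown event types are ignored
    }
  }, { balance: 0 });
}

const log = [
  { type: 'Deposited', amount: 100 },
  { type: 'Withdrawn', amount: 30 },
];
replay(log).balance; // → 70
```

Replaying the same log always yields the same state, which is the property CQRS read models rely on: any number of projections can be derived from one authoritative event stream.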
Bill Cava provides a timeline of significant features and improvements made to Ektron over the past four years, and helps you understand how upgrading can help you get your job done faster, with more control and less effort.
How to Create Your Own Product-Modeling Environment (Tim Geisler)
This document discusses how to create a product modeling environment using domain-specific languages (DSLs) and Eclipse tools. It describes using DSLs to model products tailored to a company's needs, with an Eclipse-based IDE for editing. It provides examples of DSLs used at Nokia Siemens Networks for product modeling and integration with SAP, and discusses the various components involved in creating a customized product modeling environment, including the DSL grammar/metamodel, validation, code generation, and integration with other systems.
This document discusses object-oriented design principles including encapsulation, abstraction, inheritance, polymorphism, and decoupling. It then introduces the SOLID principles of object-oriented design: single responsibility principle, open/closed principle, Liskov substitution principle, interface segregation principle, and dependency inversion principle. Code examples are provided to demonstrate how to apply these principles and improve code maintainability, reusability, and testability.
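One of the SOLID principles in this summary, dependency inversion, can be sketched briefly: the high-level module depends on an abstract "notifier" shape rather than on a concrete email or SMS class. The class names here are illustrative, not from the document's own examples.

```javascript
// High-level policy: depends only on the notifier abstraction
// (any object exposing `send(to, msg)`).
class SignupFlow {
  constructor(notifier) {
    this.notifier = notifier;
  }
  register(user) {
    // ...persist user (omitted)...
    return this.notifier.send(user.email, 'Welcome!');
  }
}

// Low-level detail: one interchangeable implementation of the abstraction.
class ConsoleNotifier {
  send(to, msg) {
    return `to=${to} msg=${msg}`; // test-friendly: returns what it "sent"
  }
}

new SignupFlow(new ConsoleNotifier()).register({ email: 'a@b.c' });
// → 'to=a@b.c msg=Welcome!'
```

Swapping in an email, SMS, or fake notifier requires no change to `SignupFlow`, which is what makes the high-level code both reusable and testable.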
Data Day Seattle 2017: Scaling Data Science at Stitch Fix (Stefan Krawczyk)
At Stitch Fix we have a lot of Data Scientists — around eighty at last count. One reason I think we have so many is that we do things differently. To get their work done, Data Scientists have access to whatever resources they need (within reason), because they’re end-to-end responsible for their work: they collaborate with their business partners on objectives and then prototype, iterate, productionize, monitor, and debug everything and anything required to get the desired output. They’re full data-stack data scientists!
The teams in the organization do a variety of different tasks:
- Clothing recommendations for clients.
- Clothes reordering recommendations.
- Time series analysis & forecasting of inventory, client segments, etc.
- Warehouse worker path routing.
- NLP.
… and more!
They’re also quite prolific at what they do -- we are approaching 4500 job definitions at last count. So one might wonder: how have we enabled them to get their jobs done without getting in each other’s way?
This is where the Data Platform team comes into play. With the goal of lowering the cognitive overhead and engineering effort required on the part of the Data Scientist, the Data Platform team tries to provide abstractions and infrastructure to help the Data Scientists. The relationship is a collaborative partnership, where the Data Scientist is free to make their own decisions and thus choose the way they do their work, and the onus then falls on the Data Platform team to convince Data Scientists to use their tools; the easiest way to do that is by designing the tools well.
In regard to scaling Data Science, the Data Platform team has helped establish some patterns and infrastructure that help alleviate contention. Contention on:
Access to Data
Access to Compute Resources:
Ad-hoc compute (think prototype, iterate, workspace)
Production compute (think where things are executed once they’re needed regularly)
For the talk (and this post) I only focused on how we reduced contention on Access to Data, & Access to Ad-hoc Compute to enable Data Science to scale at Stitch Fix. With that I invite you to take a look through the slides.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
The teams in the organization do a variety of different tasks:
- Clothing recommendations for clients.
- Clothes reordering recommendations.
- Time series analysis & forecasting of inventory, client segments, etc.
- Warehouse worker path routing.
- NLP.
… and more!
They’re also quite prolific at what they do -- we are approaching 4500 job definitions at last count. So one might be wondering now, how have we enabled them to get their jobs done without getting in the way of each other?
This is where the Data Platform teams comes into play. With the goal of lowering the cognitive overhead and engineering effort required on part of the Data Scientist, the Data Platform team tries to provide abstractions and infrastructure to help the Data Scientists. The relationship is a collaborative partnership, where the Data Scientist is free to make their own decisions and thus choose they way they do their work, and the onus then falls on the Data Platform team to convince Data Scientists to use their tools; the easiest way to do that is by designing the tools well.
In regard to scaling Data Science, the Data Platform team has helped establish some patterns and infrastructure that help alleviate contention. Contention on:
Access to Data
Access to Compute Resources:
Ad-hoc compute (think prototype, iterate, workspace)
Production compute (think where things are executed once they’re needed regularly)
For the talk (and this post) I only focused on how we reduced contention on Access to Data, & Access to Ad-hoc Compute to enable Data Science to scale at Stitch Fix. With that I invite you to take a look through the slides.
2. Shana Matthews
- DevRel at Sentry.io
- Fun facts
- Every version of Windows since 2016 has shipped with code I’ve written
- I once delayed a Windows 10 major release
3. Abhijeet Prasad
- SDK engineer at Sentry.io
- Maintains Sentry JavaScript SDKs
- Cool dude
- Largely responsible for making all this happen
7. CONFIDENTIAL
The problem
SDK too big.
● V6 JavaScript SDK was 74.47kb (minified, un-gzipped)
● New performance features would increase package size further
● “Package size is massive” issue filed on the SDK
9. CONFIDENTIAL
The goal
Reduce SDK size by 30% and improve tree shaking
Measure minified CDN bundle, not gzipped
Track bundle size using the size-limit library
26. CONFIDENTIAL
2. Optimizations for minification
Minification = making your JavaScript assets as small as possible
● Remove whitespace, comments, unnecessary tokens
● Shorten variable, function names
Sentry uses terser.
28. CONFIDENTIAL
2. Optimizations for minification
● Using try-catch blocks to simplify code requiring nested object access
● Local aliases for object keys to improve minification
● Converting classes to objects and functions
44. CONFIDENTIAL
Techniques to optimize your JS
1. Optimizations for down-compiling & transpiling
○ Removing usages of optional chaining
○ Using const enums or string constants instead of TypeScript enums
2. Optimizations for minification
○ Using try-catch blocks to simplify code requiring nested object access
○ Local aliases for object keys to improve minification
○ Converting classes to objects and functions
46. CONFIDENTIAL
Our results
29% bundle size decrease out of the box
Plus tree-shaking related improvements, like…
Next.js 30kb reduction in run-time JS
Finally closed: Package size is massive #2707
47. “We have been very impressed with the new Sentry JS SDK. Not only is the bundle size significantly smaller out of the box, but we were able to reduce it further through tree shaking.”
Shu Ding
Software Engineer, Vercel
48. Thank you 💖
Read more in our blog post:
bit.ly/generated-javascript-blog
GitHub discussion for the changes:
bit.ly/js-v7-gh-discussion
Sentry JavaScript docs:
bit.ly/sentry-js-docs
Check out our Sandbox:
bit.ly/sentry-sandbox
Editor's Notes
This talk is mostly about looking at your TypeScript and finding ways to write that TypeScript better so that when it’s converted into JavaScript and run in the browser, it’s smaller and runs faster. If that sounds interesting, you’re in the right spot!
From Iowa, majored in CS because I liked math but wanted to get a job
Before Sentry I mostly worked at Microsoft
Dev at Microsoft until I was lured to the dark side of DevRel, switched into Azure to do that
Fun facts - I suspect my code will live on inside of Windows until the AI singularity happens. Wrote SDKs for maps.
I personally delayed a Windows 10 major release (Windows 10 shipped one every 6 months), Redstone 2 (version 1703), by not testing a senior dev’s suggested simplification to an equation for transforming colors.
This was some work related to accessibility which was a huge priority in that release. So the work was coming in late but we were NOT going to ship without this.
I was rushing, I accepted the change without testing it fully, and there you go!
Literally only worked on things for developers. SDKs, sample apps, tutorials, docs, talks. I’m in too deep, y’all.
I’m here talking about this today, but I have to give credit to my coworker Abhi. He’s one of the main maintainers to our JavaScript SDKs.
Lots of people work on our JS SDKs and lots of people helped with this effort of reducing our SDK size, but Abhi gets a special callout since he really led the charge on the refactor.
This is the super high-level agenda of the talk, basically I’ll intro myself and the problem we were running into at Sentry, then we’ll spend the majority of the talk in the meat of how we actually reduced our JavaScript bundle size, then finally we’ll wrap up at the end with seeing how our efforts paid off.
I’m going to super briefly tell you what Sentry is, for context, but this talk really has very little to do with what our product does, other than that we have a bunch of JavaScript SDKs for the product and they were getting super bloated.
Sentry is an open source tool for application and performance monitoring. That means devs use Sentry to get alerted & help debug when their code crashes or is slow.
We also have awesome artists – we love our creative team.
So, Sentry does this for many, many languages and frameworks, each of which has an SDK that devs have to use in their app.
So we have like a million SDKs (actually like 94 we support + more community-supported).
Our JavaScript SDKs are some of the most important and most complicated
So, what was the problem?
Our JavaScript SDK’s size was getting out of hand.
So the SDK was at 74.47kb (minified, un-gzipped)
On top of that, we were planning to ship some new features around performance and managing release health that we knew would increase package size further if we kept going the way we were.
Not surprisingly our users also noticed, and filed this very nice issue on the repo (we love you, it’s fine, keep filing issues)
For our next major release, v7 we knew we wanted to refactor our SDK so we took the opportunity to also ship a reduced bundle size and expand our tree shaking (dead code elimination) capabilities.
We came up with 30% through some rough analysis. We figured we could get about a 15% improvement from quick wins and 15% would take more substantial refactoring.
This talk is mostly about that first 15%, which we think is applicable to many TypeScript projects!
So, how to measure that 30% improvement?
We decided to measure by tracking the size of our minified CDN bundle.
The minified bundle is most closely representative of what's actually executed at runtime, which has a direct relationship with parse and execution time. We wanted to minimize the amount of time Sentry blocks the main thread.
This talk is scoped down to just minimizing bundle size, so we won’t talk specifically about our tree shaking improvements, but they were good.
Also, I’ve tried to include links to relevant PRs that actually made the changes I describe in this talk, so click on those if you’re watching this later with slide access!
We decided to track this using the size-limit library, which checks your commits as part of CI to calculate the bundle size of every PR.
This worked well to keep devs accountable and motivated when working on bundle size improvements.
Here’s what one of the size-limit reports looked like on a PR to our JavaScript SDK repo
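size-limit is typically configured through a `size-limit` section in package.json; here is a minimal sketch (the bundle path and size budget are illustrative, not Sentry's actual configuration):

```json
{
  "size-limit": [
    {
      "path": "build/bundle.min.js",
      "limit": "60 KB"
    }
  ],
  "scripts": {
    "size": "size-limit"
  }
}
```

Running `npm run size` (or the size-limit GitHub Action) then fails CI whenever a PR pushes the bundle past the budget.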
Our SDK for JavaScript was written in TypeScript and if we wanted to decrease our bundle size, we needed to understand what was happening with our final generated javascript.
As a super brief review, when you use TypeScript, it must be transpiled into JavaScript. And if you want to support older browsers, you also need to down-compile your JavaScript, which means compiling it to a backwards-compatible version of JavaScript.
Sentry uses the Babel compiler to do these steps.
Additionally, you likely want to minify your JavaScript as part of your build step. That’s not shown super well in this diagram, but it’s important to make your JavaScript run as fast as possible in the browser. We minify using the Terser package.
Here we define transpilation as the process of converting source code of one language to another language, and down-compilation to be the process of converting source code to a more backward-compatible version of that source code.
Understanding how your code is being transpiled and downcompiled is important, because your bundle size is affected by your final generated JavaScript.
Now, before we get into specifics about how we reduced our bundle size, I want to give a very very short review of how modern JavaScript runs in the browser so we’re all on the same page about why this mattered.
In the modern web, the JavaScript you write is often down-compiled using a compiler like Babel to make sure your JavaScript is compatible with older browsers or environments. In addition, if you are using TypeScript (like the Sentry SDK’s do) or something similar, you’ll have to transpile your TypeScript to JavaScript.
Both of these processes affect what your final generated JavaScript looks like. And your final generated JavaScript is what makes up your bundle size.
This talk is all about the technical prep work needed to ship a major release with zero reported bugs.
0 confirmed bug reports due to extensive integration testing and not changing public API
The JavaScript SDKs are the largest set of SDKs at Sentry, with thousands of organizations relying on them to instrument their applications. As such, we need to make sure that the changes we make to the SDK do not introduce behavior regressions or crashes in user code.
Before the major release, we completely revamped our integration testing setup. We introduced brand new browser based integration tests that ran on Playwright, allowing us to test on Chrome, Safari and Firefox at the same time. We also introduced brand new node integration tests that ran on a custom framework we built out that used the Node.js Nock library. Having this integration test setup gave us the confidence to make large scale refactors that were required to try to reduce bundle size.
So let’s get into the optimizations we made to our TypeScript and how they affected our generated, minified JavaScript!
We’re going to talk about 2 main types of optimizations we made today
Optimizing our TypeScript so that when it was down-compiled and transpiled to JavaScript, it would take up fewer bytes
Optimizing our TypeScript so that our minifier was able to minify it more effectively, so it took up fewer bytes
First, we’ll dive into our first category: optimizations around down-compiling and transpiling
We were able to work on a couple types of quick wins within the realm of optimizations for down-compiling and transpiling. These were an easy place to get started decreasing bundle size.
Up first, optional chaining
The optional chaining operator (?.) is a newer JavaScript feature introduced with ES2020 in June 2020.
So if “hey” was null here, instead of throwing an error, the expression evaluates to undefined.
Super handy, but since its so new, it must be down-compiled to work with older browsers.
When down-compiled, this particular feature tends to produce a lot of extra bytes.
This is what the down-compiled result of that previous snippet looks like when targeting ES6, which takes up a LOT more bytes than the optional chaining version and even more than the old-fashioned way of doing this, the Boolean short circuit.
This is way more bytes than the equivalent boolean short circuit:
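To make the comparison concrete, here is a rough sketch of what Babel's expansion of `?.` looks like next to the Boolean short circuit. The object and key names (`obj`, `hey`, `there`) are made up for illustration, not taken from Sentry's code:

```javascript
// Optional chaining in the source:
//   const value = obj?.hey?.there;
var obj = { hey: { there: 42 } };

// Babel's ES5-style expansion turns each `?.` into two explicit
// null/undefined comparisons plus a temporary variable:
var _obj$hey;
var downCompiled =
  obj === null || obj === void 0
    ? void 0
    : (_obj$hey = obj.hey) === null || _obj$hey === void 0
    ? void 0
    : _obj$hey.there;

// The equivalent Boolean short circuit is far fewer bytes after minification:
var shortCircuit = obj && obj.hey && obj.hey.there;

console.log(downCompiled, shortCircuit); // 42 42
```

Every `?.` costs a pair of comparisons and a temporary in the generated output, which is why the savings compound quickly across a codebase.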
Like I mentioned, the Sentry SDK is written in TypeScript, so we were able to switch to the Boolean short circuit expression everywhere we were using optional chaining. With TypeScript, we could rely on the type checker to make sure everything was still typed correctly.
That link shows some example PRs of switching optional chaining to Boolean short circuits.
I don’t think anyone did the exact math on how much we saved total by getting rid of optional chaining, but a couple percent, for sure.
Next, changing the way we used enums in the SDK
We were using TypeScript enums everywhere in the SDK. TypeScript enums are great because they provide reverse mapping: the ability to map enum values back to enum names (for non-string enums), which can be very handy if you’re taking advantage of it.
BUT, they take up a lot of bytes.
For example a regular enum like this one showing states:
Would map to something like:
In this case, this was a lot of extra generated code that we wanted to optimize.
One way we optimized these TypeScript enums was to convert them to const enums for any that were only used internally.
Here's showing that same States enum as a const enum.
Const enums automatically inline enum members when they’re used. This means the enum doesn’t generate any extra code!
Unfortunately this only works for internal-only enums: because const enums are erased at transpile time (which is where the space saving comes from), they can’t be imported and used by users of the SDK.
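As a concrete sketch of both shapes (using an illustrative `States` enum, not one from the SDK), here is what a regular TypeScript enum transpiles to, with the const enum behavior noted in comments:

```javascript
// A regular TypeScript enum:
//   enum States { Started, Finished }
// transpiles to an IIFE that builds both forward and reverse mappings:
var States;
(function (States) {
  States[(States["Started"] = 0)] = "Started";
  States[(States["Finished"] = 1)] = "Finished";
})(States || (States = {}));

console.log(States.Started); // 0
console.log(States[0]);      // "Started" (the reverse mapping is why it costs bytes)

// A `const enum` generates no object at all; members are inlined at use sites:
//   const enum States { Started, Finished }
//   send(States.Started)   // emits just: send(0 /* Started */)
```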
For those other public-facing enums, we still wanted to optimize them.
Here’s a public-facing TypeScript string enum for Severity levels and its transpiled JavaScript version.
That’s a lot of extra code
For these public, exported enums, we deprecated them in favor of using string constants, as much as it made sense.
That looked something like changing this to…
To this. Which saved a ton of bytes.
I’ve also linked a PR that shows an example of making this enum to string constant change in the SDK.
This was another easy win that gave us some really good bundle size improvements. The linked PR alone, from deprecating the Severity enum, dropped our non-gzipped bundle size by almost 2%, as measured by the size-limit library we were using.
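A hedged sketch of the enum-to-string-constants change (the `Severity` names here illustrate the pattern described, they are not copied from the PR):

```javascript
// A public string enum:
//   enum Severity { Error = "error", Warning = "warning" }
// transpiles to an object the bundle must carry:
var Severity;
(function (Severity) {
  Severity["Error"] = "error";
  Severity["Warning"] = "warning";
})(Severity || (Severity = {}));

// Plain string constants (typed in TS as a union like
// `type SeverityLevel = "error" | "warning"`) are just literals
// and add nothing extra to the generated output:
var SEVERITY_ERROR = "error";
var SEVERITY_WARNING = "warning";

console.log(Severity.Error === SEVERITY_ERROR); // true
```

The string union type keeps the same compile-time safety, while the runtime object disappears entirely.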
Ok we’ve covered the first category of optimizations that relate to transpilation and down-compiling.
Another important part of getting the bundle as small as possible is minification, which is the second category of improvements we made.
Minification is exactly what it sounds like: making your JavaScript assets as small as possible.
In the minification process, we remove white space, comments, and other unnecessary tokens, and shorten variable and function names.
Sentry minifies our JS SDK by using the terser library.
I’m sure many of you are familiar, but here’s an example of minification. This example shows how terser minified this code specifically.
I’ve tried to fit both the before and after code on one slide so forgive me here.
In this example, Terser reduces the number of bytes by 60% - which is amazing. But we’re already minifying our code, and you probably are too. Modern bundlers like Webpack will minify your code by default in production mode.
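For anyone reading this without slide access, here is a tiny stand-in example of the kind of transformation Terser performs (not the slide's actual snippet):

```javascript
// Source as written:
function addExclamation(inputString) {
  var updatedString = inputString + "!";
  return updatedString;
}

// After Terser, whitespace and comments are gone and locals are renamed,
// but the exported function name survives, roughly:
//   function addExclamation(n){return n+"!"}

console.log(addExclamation("hi")); // "hi!"
```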
To do better than this automatic minification, we need to get a little more complicated and manual with our optimizations for minification.
In this category of minification-related optimizations, we’re going to go through 3 examples of how we rewrote our TypeScript to work better with minification.
Simplifying nested object access by using try-catch blocks to catch undefined objects, instead of chained undefined checks
Using local variables instead of object keys
Minifying private class and method names and moving towards functions and objects
First up, using try-catch blocks instead of chaining undefined checks or using optional chaining.
Let’s return to that previous example function and its minified version.
What is and isn’t being minified? Reserved keywords (like export, function, return) are used by the JavaScript language itself, so they can’t be minified.
In addition, identifiers that are required for code to work properly like object keys or class methods are not minified.
In this example, the veryVeryLongKey property of the bestObject object cannot be minified because users need to be able to access the { nestedKey: arg2 } value using the veryVeryLongKey.
So nested keys cannot get minified because they are needed to index the various nested objects.
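Here's a quick sketch of why, reusing the slide's hypothetical bestObject/veryVeryLongKey names:

```typescript
// Parameters and locals can be renamed safely, but property names
// are part of the object's runtime shape, so the minifier must
// leave them alone.
function getNestedValue(bestObject: { veryVeryLongKey: { nestedKey: string } }): string {
  return bestObject.veryVeryLongKey.nestedKey;
}
// Inside a bundle, this minifies to roughly:
//   function g(n){return n.veryVeryLongKey.nestedKey}
// The long keys survive untouched.
```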
And we had examples of this throughout the SDK codebase, where we would do undefined checks to make sure we didn’t throw any errors.
This is a real example from our SDK code, where these nested keys can never get minified, just based on how minification works, so automatic minification isn’t doing a lot for us here.
In order to reduce some bytes, we’ll have to make a manual simplification.
Instead of preventing type errors with these undefined checks (or with optional chaining, which would read nicely here but, as we’ve already discussed, is size-wasteful because of down-compilation), we can simply take advantage of the existing try-catch block…
We’re already in a try catch block, so we can actually shorten this to a single line and ignore the resulting TypeError that would occur if values were undefined.
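Here's a hedged sketch of the before and after, using a hypothetical event shape rather than our exact SDK code:

```typescript
// Hypothetical event shape, for illustration only.
interface SdkEvent {
  exception?: { values?: Array<{ stacktrace?: { frames?: string[] } }> };
}

// Before: chained undefined checks repeat every unminifiable key.
function getFramesChecked(event: SdkEvent): string[] | undefined {
  if (
    event.exception &&
    event.exception.values &&
    event.exception.values[0] &&
    event.exception.values[0].stacktrace
  ) {
    return event.exception.values[0].stacktrace.frames;
  }
  return undefined;
}

// After: index straight through and let the try-catch treat a
// TypeError as "value not present". Each key now appears once.
function getFramesTryCatch(event: SdkEvent): string[] | undefined {
  try {
    return (event as any).exception.values[0].stacktrace.frames;
  } catch (e) {
    return undefined;
  }
}
```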
Again, I’ve linked to an actual PR in our SDK that shows an example of doing this kind of optimization.
The previous trick of simplifying code with try/catch statements is useful in some situations, but isn’t going to work for everything.
Another strategy we took to deal with these nested object keys that can’t be minified is to create local aliases for object keys to make minification work better.
Here’s an example of another function in our actual SDK. We have all these nested object keys that can’t be minified. What we can do is alias these object keys to local variables that CAN get minified.
So we edited this code into this version. Unfortunately I can’t show you the before and after on one slide.
I added comments above each line to show what it was in the previous slide, which hopefully helps.
You can see that we’re creating these local aliases every time we get a layer deeper in the object.
Ok, so let’s compare the first version (no aliases) and this second version after minifying
After minification, you can see that the second version, which used local aliases, saves quite a few bytes.
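As a sketch of the aliasing pattern (hypothetical names, not the exact SDK function):

```typescript
// Before: each access repeats the unminifiable keys `user`, `id`, `email`.
function describeUser(event: { user?: { id?: string; email?: string } }): string {
  return `${event.user && event.user.id} ${event.user && event.user.email}`;
}

// After: alias each layer to a local variable. Locals CAN be renamed
// by the minifier, so each long key appears only once in the output.
function describeUserAliased(event: { user?: { id?: string; email?: string } }): string {
  const user = event.user;
  const id = user && user.id;
  const email = user && user.email;
  return `${id} ${email}`;
}
```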
We’re on to the third and final minification-related optimization!
This one is mostly here as a bonus for interested parties, because it’s very hard to show with slides.
So, just like object properties, class methods and identifiers generally can’t be minified.
Let’s look at a simplified example from the Sentry codebase, the API class, which the SDK uses to manage how it sends data to a Sentry instance.
We’re just showing a tiny bit of this class.
2 example methods, the getStoreEndpointWithUrlEncodedAuth() method and the _encodedAuth() method.
This SDK uses terser, and we’ve configured terser to minify private field and method names. This is great and definitely saves bytes, but public fields and methods can’t be minified.
So here’s the minified version of that snippet. You can see _encodedAuth() is minified to h() but getStoreEndpointWithUrlEncodedAuth() is still getStoreEndpointWithUrlEncodedAuth().
The fact that we can’t minify public method names becomes a real problem with very long method names, or even kinda long method names that are used very frequently.
On top of that, trying to work around it can cause even more problems, because now you have to start paying attention to how long your method names are.
The way we worked around this was by converting internal classes to objects and functions. Naturally, showing this is basically impossible on a slide, but essentially
the public fields on the class become keys on an object, and then you use functions to operate on those objects.
Since those functions are just top level exports, they can be minified.
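Here's a minimal sketch of the conversion, using a toy memoization helper rather than the SDK's real Memo class:

```typescript
// Class version: `memoize` is a public method name, so terser
// can't rename it.
class MemoClass {
  private _inner: WeakSet<object> = new WeakSet();
  // Returns true if we've seen this object before, else records it.
  public memoize(obj: object): boolean {
    if (this._inner.has(obj)) {
      return true;
    }
    this._inner.add(obj);
    return false;
  }
}

// Functional version: the state is a plain value, and the operations
// are module-level functions whose names can be shortened when the
// bundle is assembled (and tree-shaken away entirely if unused).
type Memo = WeakSet<object>;

function createMemo(): Memo {
  return new WeakSet();
}

function memoize(memo: Memo, obj: object): boolean {
  if (memo.has(obj)) {
    return true;
  }
  memo.add(obj);
  return false;
}
```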
Here’s an example of this in our Memo class - you can see the public fields becoming object keys.
Again, here are some PR examples of doing this. The first one is that Memo class I had the screenshot of on the previous slide and the second is for our Logger class.
Although we ended up converting some more internal classes to use a more functional style to save on bytes, we couldn’t convert the biggest classes in the Sentry SDK, the Client and the Hub
This was because many users were manually importing and using these classes, so converting them would make it difficult for those users to upgrade.
Ok, this has been a lot!
Let’s quickly review the categories of optimizations we were able to make and reveal our final results!
There are major package size benefits to reducing the amount of generated JavaScript your package creates. As part of our larger JavaScript SDK size reduction, we spent considerable effort minifying as much of our code as possible. If you’re looking to do the same, here are 2 categories of improvements to consider:
First, optimizing the way we wrote our TypeScript so that when it was downcompiled and transpiled to JavaScript, it would take up fewer bytes
Which involved removing uses of optional chaining and switching to const enums and string constants over regular TypeScript enums
Second, optimizing the way we wrote our TypeScript so that we were able to minify it more effectively.
This involved using try-catch blocks to simplify nested object access, adding local aliases for object keys, and converting classes to objects and functions.
So what were the results?
When we started making changes with version 6.16.1, our minified un-gzipped browser SDK was 74.47kb.
As of Browser JavaScript version 7.3.1, the bundle size of the minified un-gzipped browser SDK is 52.67kb.
This represents a 29% decrease in bundle size, as measured by the size-limit library.
Our goal was 30%, but we were very, very pleased with 29%
In addition, our v7 SDK also came with some great tree-shaking improvements that users really appreciated.
For example, some of our Next.js SDK users have reported a 30kb reduction in run-time JavaScript size.
Our tests internally have shown similar wins, but the final numbers will vary based on which SDK you’re using and what features you use from it.
In addition, we were finally able to close the “Package size is massive” issue that was filed. Now that’s a win.
Here’s a quote from one of the devs at Vercel who appreciated the effort.
This talk also exists in blog post form - if you want to read more, go there
If you want to dig in further to the types of bundle size improvements, tree shaking improvements, and other refactoring we did on the v7 SDK, the second link is where you can go for that. That links you to a discussion on GitHub where we talk about all of this and link to even more PRs.
Finally, if you want to learn more about Sentry, you can check out our JavaScript docs or we have a pretty nice Sandbox where you can see what Sentry can do in terms of helping you see and debug errors and performance problems.
That’s it! Any questions?