To hit Ruby 3x3, we must first figure out **what** we're going to measure and **how** we're going to measure it, in order to get what we actually want. I'll cover some standard definitions used when benchmarking dynamic languages, as well as the tradeoffs that must be made when benchmarking. I'll look at some of the possible benchmarks that could be considered for Ruby 3x3, and evaluate what each is good at measuring and what it is less good at measuring, in order to help the Ruby community decide what the 3x goal is going to be measured against.
8. Definition
Benchmark:
Comparing the execution time of different interpreters, or options.
Comparing the execution time of algorithms.
Comparing the accuracy of different machine learning algorithms.
11. Microbenchmarks
A very small program written to explore the performance of one aspect of the system under test.
Pros:
- Often easy to set up and run.
- Targeted to a particular aspect.
- Fast acquisition of data.
Cons:
- Exaggerates effects.
- Not typically generalizable.
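As a sketch of what such a microbenchmark can look like, using Ruby's standard Benchmark module (the workload here, comparing two long equal strings, is a made-up stand-in for whatever single aspect you want to isolate):

```ruby
require "benchmark"

# A hypothetical microbenchmark: it exercises exactly one aspect of the
# system (String comparison of long, equal strings) and nothing else.
a = "x" * 1_000
b = "x" * 1_000

elapsed = Benchmark.realtime do
  100_000.times { a == b }
end

puts format("100k comparisons: %.4f s", elapsed)
```

Note how little of the system this touches: that is both the pro (targeted, fast) and the con (exaggerated, hard to generalize).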
12. Full Applications
Benchmarking a whole application.
Pros:
- Immediate and obvious real world impact!
Cons:
- Small effects can be swamped in natural application variance.
- Can be complicated to set up, or slow to run!
13. Application Kernel
A particular part of an application extracted for the express purpose of constructing a benchmark.
Pros:
- Tight connection to real world code.
- Typically more generalizable.
Cons:
- Difficult to know how much of an application should be included vs. mocked.
14. Pitfalls in benchmark design
Un-Ruby-Like Code:
- Code that looks like another language. ("You can write FORTRAN in any language.")
- Code that never produces garbage.
- Code without exceptions.
15. Pitfalls in benchmark design
Input data is a key part of many benchmarks: watch out for weird input data!
Imagine an MP3 compressor benchmark whose inputs are:
1. Silence: weird, because most MP3s are not silence.
2. White noise: weird, because most MP3s have some structure.
Weird inputs reduce the generalizability of the results!
16. The Art of Benchmarking: What do you run? What do you measure?
26. An aside on misleading with speedup.
Speedup: a ratio computed between a baseline and an experimental time measurement.
28. An aside on misleading with speedup.
"He who controls the baseline controls the speedup."
29. An aside on misleading with speedup.
"Our parallelization system shows linear speedup as the number of threads increases."
30. An aside on misleading with speedup.
(Chart: speedup vs. thread count, rising roughly linearly from 1x at 1 thread to 8x at 8 threads.)
31. An aside on misleading with speedup.

Measurement                    Time (s)
Original sequential program        10.0
Parallelized, 1 thread            100.0
Parallelized, 2 threads            50.0
Parallelized, 4 threads            25.0
Parallelized, 8 threads            12.5

The distinction between relative speedup and absolute speedup.
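The table's numbers make the distinction concrete. A quick sketch of both computations:

```ruby
# Times from the table above, in seconds.
sequential = 10.0
parallel = { 1 => 100.0, 2 => 50.0, 4 => 25.0, 8 => 12.5 }

parallel.each do |threads, time|
  relative = parallel[1] / time   # baseline: the 1-thread parallel run
  absolute = sequential / time    # baseline: the original sequential program
  puts format("%d thread(s): relative %.1fx, absolute %.2fx", threads, relative, absolute)
end
```

Relative speedup reaches a flattering 8.0x at 8 threads, while absolute speedup is only 0.8x: still slower than the sequential original.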
33. Both of these are valid benchmarks!

$ cat test.rb
...
puts Benchmark.measure {
  1_000_000.times {
    compute_foo()
  }
}
$ for i in `seq 1 10`; do ruby test.rb; done

vs.

...
10.times {
  puts Benchmark.measure {
    1_000_000.times {
      compute_foo()
    }
  }
}

But they're going to measure (and may encourage the optimization of) two different things!
34. Definition
Warmup: the time from application start until it hits peak performance.
(Chart: time per iteration (s) over 11 iterations: 100, 64, 69, 36, then stabilizing around 25-26.)
35. When has warmup finished?
Warmup is hard to pin down precisely. Despite this, even knowing warmup exists is important: it allows us to choose methodologies that can accommodate the possibility!
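A minimal sketch of one accommodating methodology: report a score per iteration and watch for the times to stabilize (the workload here is an arbitrary stand-in for the code under test):

```ruby
require "benchmark"

# Stand-in workload; real warmup comes from code loading, cache
# warming, JIT compilation, and so on.
def work
  (1..100).reduce(:+)
end

iteration_times = 10.times.map do |i|
  t = Benchmark.realtime { 10_000.times { work } }
  puts format("iteration %2d: %.4f s", i + 1, t)
  t
end
```

With per-iteration scores in hand, you can decide (and state!) which iterations count as "warmed up" in your reported numbers.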
36. Definition
Run-to-run variance: the observed effect that identical runs do not have identical times.

$ for i in `seq 1 5`; do ruby -I../../lib/ string-equal.rb --loopn 1 1000; done
1.347334558
1.348350632
1.30690478
1.314764977
1.323862345
37. Methodology:
An incomplete list of decisions that need to be made when developing a benchmarking methodology:
1. Does your methodology account for warmup?
2. How are you accounting for run-to-run variance?
3. How are you accounting for the effects of the garbage collector?
38. Pitfalls in benchmark design
Accounting for warmup often means producing intermediate scores, so you can see when they stabilize.
If you aren't accounting for warmup, you may find that you miss out on peak performance.
39. Pitfalls in benchmark design
Account for run-to-run variance by running multiple times and presenting confidence intervals!
Be sure your methodology doesn't encourage wild variations in performance, though!
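Using the five times from the run-to-run variance slide, a sketch of reporting a mean with a 95% confidence interval (normal approximation; a t-distribution critical value would be more defensible for only five samples):

```ruby
# The five measured times, in seconds.
times = [1.347334558, 1.348350632, 1.30690478, 1.314764977, 1.323862345]

mean   = times.sum / times.length
stddev = Math.sqrt(times.sum { |t| (t - mean)**2 } / (times.length - 1))
margin = 1.96 * stddev / Math.sqrt(times.length)   # 1.96 ~ 95% normal quantile

puts format("%.4f s +/- %.4f s (95%% CI)", mean, margin)
```

Reporting the interval rather than a single number is what lets a reader judge whether a claimed improvement exceeds the noise.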
42. Garbage Collector Impact
Garbage collector impact can make benchmarks incredibly difficult to compare:
The Ruby+OMR Preview uses the OMR GC technology, including a change to move off-heap data onto the heap.
A side effect of this is that it's crazy difficult to compare against the default Ruby: there's an entirely different set of data on the heap!
If heap size adapts to machine memory, you'll need to figure out how to lock it to give good comparisons across machines.
(Diagram: a string in a malloc'd buffer vs. a string in an on-heap OMRBuffer.)
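One mitigation sketch: record GC statistics alongside your scores, so heap-driven differences are at least visible in a comparison. (The GC.stat keys shown are CRuby's; exact key names vary by Ruby version.)

```ruby
# Snapshot GC state around the measured region so GC activity can be
# reported next to the benchmark score.
before = GC.stat(:count)
100_000.times { "garbage string" * 8 }   # allocation-heavy stand-in workload
after = GC.stat(:count)

puts "GC runs during the measured region: #{after - before}"
puts "heap live slots now: #{GC.stat(:heap_live_slots)}"
```

Two runtimes with wildly different GC counts for the same workload are not telling you much about interpreter speed until the heaps are made comparable.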
45. User Error
$ time ruby their_implementation.rb 100000
real 0m10.003s
user 0m8.001s
sys  0m2.007s
$ time ruby my_implementation.rb 10000
real 0m1.003s
user 0m0.801s
sys  0m0.206s
10x speedup!
46. User Error
(Same measurements as above; note the mismatched input sizes: 100000 vs. 10000.)
Pro Tip: Use a harness that keeps you out of the benchmarking process. Aim for reproducibility!
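A sketch of what "keeping yourself out of the process" can mean: a harness that supplies the same input size to every candidate, so the mistake above (comparing a 100000-element run against a 10000-element run) can't happen. The two implementations here are made-up stand-ins:

```ruby
require "benchmark"

# Single source of truth for the input size: every candidate gets it.
N = 100_000

IMPLEMENTATIONS = {
  "theirs" => ->(n) { n.times.sum { |i| i * i } },
  "mine"   => ->(n) { (0...n).sum { |i| i * i } },
}

results = IMPLEMENTATIONS.map do |name, impl|
  t = Benchmark.realtime { impl.call(N) }
  puts format("%-6s n=%d: %.4f s", name, N, t)
  [name, t]
end.to_h
```

Because the harness owns N and the timing calls, a typo in one command line can no longer manufacture a 10x "speedup".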
48-51. Other Hardware Effects to watch for!
TurboBoost (and similar): frequency scaling based on the season... location... rack... CPU temperature.
Even in the cloud! [1]
[1]: http://www.brendangregg.com/blog/2014-09-15/the-msrs-of-ec2.html
52. Software Pitfalls
What about your backup service?
Long sequence of benchmarks… do you have automatic software updates installed?
Do your system administrators know you are benchmarking?
54. Paranoia is a matter of Effect Sizes
Hardware Changes:
- Disable turbo boost.
- Disable hyperthreading.
Krun tool:
- Set ulimit for heap and stack.
- Reboot the machine before execution.
- Monitor dmesg for unexpected output.
- Monitor the temperature of the machine.
- Disable P-states.
- Set the CPU governor to performance mode.
- Control the perf sample rate.
- Disable ASLR.
- Create a new user account for each run.
http://arxiv.org/pdf/1602.00602v1.pdf
59. Squeezing a Water Balloon
Be sure to measure associated metrics to have a clear-headed view of tradeoffs.
For example, JIT compilation:
- Trades startup speed for peak speed.
- Trades footprint for speed.
60. Benchmarks age!
Benchmarks can be wrung of all their possible performance at some point.
Using the same benchmarks for too long can lead to shortsighted decisions driven by old benchmarks.
Idiomatic code evolves in a language.
Benchmark use of language features can help drive adoption!
- Be sure to benchmark desirable new language features!
64. Recall: Benchmarks drive change
Thought: choose 9 application kernels that represent what we want from a future CRuby!
Why 9?
- Too many benchmarks can diffuse effort.
- Also: 3x3 = 9! ¯\_(ツ)_/¯
65. Brainstorming on the nine?
1. Some CPU-intensive applications: OptCarrot, neural nets, Monte Carlo tree search, a PSD filter pipeline?
2. Some memory-intensive application: a large tree mutation benchmark?
3. A startup benchmark: time ruby -e "def foo; '100'; end; puts foo"?
4. Some web application framework benchmarks.
66. Choose a methodology that drives the change we want in CRuby.
Want great performance, but not huge warmup times?
- Only run 5 iterations, and score the last one?
Don't want to deal with warmup?
- Don't run iterations: score the first run!
69. Use the ecosystem!
Add a standard performance harness to RubyGems.
This would allow VM developers to sample popular gems and run a perf suite written by gem authors.
With effort, time, and $$$, we could make broad statements about performance impact on the gem ecosystem.
70. Use the ecosystem!
This doesn't just help VM developers. Gem authors get:
1. Enabled for performance tracking!
2. Easier performance reporting to VM developers.
OMR is a project trying to create reusable components for building or augmenting language runtimes.
There should be some news soon, so follow us on Twitter.
Please, come talk to me about OMR! But I'm not here to talk about OMR right now.
That purple circle hides a big concept! Let’s dig into it.
Benchmarking is this weird combination of art and science, that drives me mad. The problem is that benchmarks seem so objective and scientific, but are filled with judgement calls, and the science is hard!
The art of benchmarking ends up being a long list of questions and decisions you have to ask yourself, filled with judgement calls.
First off, what do you run?
Sometimes this involves mocking up parts of the normal application flow in such a way to keep the code isolated.
Imagine how this perturbs the code paths that your interpreter is going to take.
Lots of questions have to be asked when you are benchmarking.
This is equally true of both application developers and those who are developing language runtimes!
CPU time can be pretty misleading in a lot of circumstances: Notice that sleep used almost no CPU time, because it didn’t do anything! But it spent a long time running!
Can be important though if you’re on a platform that charges by CPU usage!
For example, in a web server, latency would be how long it takes a request to be processed after the request is received.
Typically, speedup is talking about a measurement on the same machine with a software change of some kind, though one can also compute speedups by changing hardware.
I used to be an academic, and I learned while I was there that it’s terribly easy to lie with speedup.
To abuse a quote from Dune,
You’ll note even at 8 threads, the parallel program is slower than the original.
Relative: relative to 1 thread.
Absolute: relative to the fastest sequential version!
This point isn’t obvious to everyone.
The first will tend to encourage faster startup: if compute_foo runs quickly, startup costs will dominate the run on the left side.
Warmup can occur as code loading happens, caches warm up, JIT compilation occurs, operating system thread scheduling settles, etc.
Warmup is a really awkward term: many people understand what you mean, but it doesn't have a great scientific definition.
Reporting the minimum time for example.
When trying to measure performance, be aware that benchmarks can act weird! You’ll have to report with a methodology that can handle it!
3x degradation of performance by having too small a heap.
Imagine you do your benchmark baseline on your couch at home, but then you get to work and find your change has made everything 3x faster!
You benchmark 10 rubies…
But we would like to be able to measure small changes….
Faster code can come at the cost of increased warmup time, increased footprint, etc.
Just because you’re the fastest C89 compiler today doesn’t matter if people are writing C11 code that looks different!
At this point, we go to the wise tenderlove, who reminds us!
Please… whatever you do though, account for some variance.