Each environment is different; this presentation, being about .NET, focuses on the server side. The front end has its own set of tools and challenges. A performance issue usually has multiple possible solutions, and you can't choose one without context.
Memory footprint, latency, time to see a result, amount of hardware required
Any small detail can make all the difference.
Total: 10 seconds. Even if a query taking 100 ms were removed completely, the maximum improvement is 1%. Without measurements, there is no way to know whether an optimization is worthwhile.
MiniProfiler and Glimpse -> configuration prefix -> once installed, they work with the profiler APIs
ConFoo 2017: Introduction to performance optimization of .NET web apps
Introduction to Performance Optimization of .NET Web Apps
Developer/DevOps at Amilia
Optimization, scaling, monitoring, SQL Server, Elastic Search
Editor at InfoQ.com, writing about .NET and F#
In this talk
What does “performance optimization” mean?
Tools of the trade
Focused on the server side
What does “performance” mean?
Depends on how it is measured.
What interests us is response time (measured in ms) and
resource utilization (CPU, memory, disk, bandwidth)
The method everybody knows:
Tempting to try to optimize code on a gut feeling
Unlikely to get it right more than 5-10% of the time
The possible causes are endless: code, third-party library, web
server, CDN, DNS, bandwidth, ISP, hardware, etc.
Regardless of the magnitude of an improvement, the theoretical speedup of a task is
always limited by the part of the task that cannot benefit from the improvement (Amdahl's law).
TL;DR: Optimize the right thing.
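Written out (the textbook form of Amdahl's law, where p is the fraction of total time the improvement applies to and s is the speedup of that part):

    speedup = 1 / ((1 - p) + p / s)

For the 10-second example above, a 100 ms query means p = 0.01; even with s approaching infinity, the overall speedup tops out at 1 / 0.99 ≈ 1.01, i.e. about 1%.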
Time to optimize
Measure everything, all the time. Knowing precisely what is slow is key to
efficient performance optimization.
Stopping profiling too early and fixing the wrong problem is an easy trap
to fall into.
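As a minimal sketch of what measuring before and after a change looks like in .NET, Stopwatch gives millisecond timings; DoWork here is a hypothetical stand-in for the code path under investigation:

    using System;
    using System.Diagnostics;

    class TimingSketch
    {
        static void Main()
        {
            // Capture a baseline before optimizing, then re-run after the change.
            var sw = Stopwatch.StartNew();
            DoWork();
            sw.Stop();
            Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds} ms");
        }

        // Hypothetical stand-in for the operation being measured.
        static void DoWork() => System.Threading.Thread.Sleep(50);
    }

Run it before and after a change and compare the numbers instead of trusting a gut feeling.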
Optimizing the right thing: trickier than it sounds
Libraries often have overhead on the first call (Razor view compilation,
scanning an object with reflection, data caching, library caching).
SQL queries: Parametrized queries, locks, data not in cache
Tip: Run your request a few times to get consistent timings (see the sketch below).
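A sketch of that tip, assuming a hypothetical local endpoint at http://localhost:5000/: request the same URL several times and expect the first run to be slower because of those one-time costs.

    using System;
    using System.Diagnostics;
    using System.Net.Http;

    class WarmupSketch
    {
        static void Main()
        {
            using (var client = new HttpClient())
            {
                // The first run pays one-time costs (JIT, view compilation,
                // caches); later runs give the consistent timings you want.
                for (int i = 1; i <= 5; i++)
                {
                    var sw = Stopwatch.StartNew();
                    client.GetAsync("http://localhost:5000/").GetAwaiter().GetResult();
                    sw.Stop();
                    Console.WriteLine($"Run {i}: {sw.ElapsedMilliseconds} ms");
                }
            }
        }
    }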
Not all bottlenecks are created equal
Different application types lead to different bottlenecks
Uncommon performance issues in unoptimized code:
Hitting the concurrent request limit; the default in ASP.NET is 12 * cores (async helps; see the sketch below)
Minor inefficiencies like an extra if condition
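A common mitigation for the request limit is async actions: while awaiting I/O, the request thread goes back to the pool instead of counting against the limit. A sketch for ASP.NET MVC, with a made-up controller and URL:

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class ReportController : Controller
    {
        private static readonly HttpClient Client = new HttpClient();

        // While the slow call is awaited, no request thread is blocked,
        // so other requests can be served in the meantime.
        public async Task<ActionResult> Index()
        {
            string data = await Client.GetStringAsync("http://localhost:5000/api/report");
            return Content(data);
        }
    }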
Different issues at different scales.
1 s: Upload a file to a third-party cloud service
10 ms to 100 ms: SQL queries
1 ms: Regex match
0.001 ms (1 µs): Get a property through reflection
0.000001 ms (1 ns): Multiplying two numbers together
A file upload may look slow, but a thousand SQL queries are slower: 1,000 × 10 ms adds up to 10 seconds, versus 1 second for the upload.
Some numbers: Amilia
2 million pageviews per month
Between 10 and 20 million requests per month
Usual throughput: 300 to 500 requests per minute
During registrations: Anywhere from 1,000 to 12,000 rpm
SQL: 100,000+ queries per minute at peak
Avg. execution time: 100 ms
Note: Static files (JS, CSS, HTML) are not included; they are served directly from a CDN
Application Performance Monitoring
Provides a clear picture of what’s happening in production.
Data can be used as a starting point to reproduce an issue
New Relic APM
Lightweight code profilers
Provide live performance data on both dev and production systems.
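As an illustration, this is roughly what instrumenting a code path with MiniProfiler's Step API looks like (the class and step names are made up; setup and UI wiring depend on your MiniProfiler version):

    using StackExchange.Profiling;

    public class CheckoutService
    {
        public void Checkout()
        {
            var profiler = MiniProfiler.Current; // null when profiling is off

            // Step() is a null-safe extension method; each step shows up
            // as a timed block in the profiler's UI.
            using (profiler.Step("Load cart"))
            {
                // ... load the cart from the database ...
            }
            using (profiler.Step("Charge payment"))
            {
                // ... call the payment processor ...
            }
        }
    }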
I/O: Third-party services and systems must be monitored, even if control over them is limited.
Databases, CDN, payment processor, etc.
Service Map and Database tabs in New Relic
Follow best practices
Issues can often be avoided by taking a quick look at the best practices for a given technology.
They can be difficult and time-consuming to find.
IIS and HTTPS -> Offload to a load balancer
NHibernate -> Avoid implicit transactions (see the sketch below)
SQL Server -> Too many to name
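To make the NHibernate item concrete, a sketch of wrapping work in an explicit transaction, assuming a hypothetical mapped Order entity and a session obtained from your own session factory:

    using NHibernate;

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual string Status { get; set; }
    }

    public static class OrderRepository
    {
        public static void MarkPaid(ISession session, int orderId)
        {
            // One explicit transaction around all the work, instead of an
            // implicit transaction per statement.
            using (var tx = session.BeginTransaction())
            {
                var order = session.Get<Order>(orderId);
                order.Status = "Paid";
                tx.Commit();
            }
        }
    }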
To sum it up
Measure before, during and after changing code
Optimizing without measuring is like fixing a bug without testing
Various tools are at your disposal, each giving visibility at a different level
Article with links to the tools: https://www.infoq.com/articles/dotnet-