In software engineering, response time has always been one of the main metrics of overall system quality. For web applications, performance determines how smoothly users can navigate a site. Any delays or interruptions degrade the user experience and, as studies show, increase the site abandonment rate.
These numbers are even more meaningful in a mobile context. Smartphones and tablets are widely used devices with more limited hardware (and often worse Internet connections) than desktops but, according to gomez.com research, users' expectations remain the same. This post shows how to measure web app performance and explains how to fix the most common issues.
Apdex (Application Performance Index) is an open standard that defines a method to measure and track user satisfaction. Simply put, it transforms an app's response times into a single value that represents how satisfied users are.
How is it calculated? All measurements are divided into three zones: satisfied, tolerated and frustrated. Then, to place each response into the appropriate bucket, we set a threshold value T (target time) and assign times as follows: from 0 to T - satisfied; from T to 4T - tolerated; above 4T - frustrated. Now, having all values assigned, we evaluate the Apdex index with the following formula:

Apdex(T) = (SatisfiedCount + ToleratedCount / 2) / TotalSamples
The final value falls between 0 and 1, where 0 means that no users were satisfied and 1 indicates that all users experienced excellent performance.
Example: let's say we set the threshold value to 0.5s and measure 1000 response times; 750 were between 0s and 0.5s (satisfied), 230 between 0.5s and 2s (4T) (tolerated) and 20 were greater than 2s (frustrated). Applying the formula, we get Apdex(0.5) = (750 + (230 / 2)) / 1000 = 0.865.
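The calculation can be sketched in plain Ruby, reproducing the bucket counts from the example:

```ruby
# Apdex sketch: t is the target (threshold) time in seconds.
def apdex(response_times, t)
  satisfied = response_times.count { |r| r <= t }
  tolerated = response_times.count { |r| r > t && r <= 4 * t }
  (satisfied + tolerated / 2.0) / response_times.size
end

# 750 satisfied, 230 tolerated, 20 frustrated samples
times = [0.3] * 750 + [1.0] * 230 + [3.0] * 20
puts apdex(times, 0.5)  # => 0.865
```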
There are many tools on the market that provide real-time Apdex measurements (and many more features) for your app (e.g. New Relic, Skylight). Once you’ve measured, it’s time to optimise. In the next sections, I will cover the most common performance killers and explain techniques that can make your app even faster.
N+1 queries occur when you try to access ActiveRecord data that was not initially fetched from the database. This sounds trivial, but making additional SQL queries may significantly affect an app’s performance. Let’s consider the following snippet:
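A minimal sketch of such a snippet, assuming hypothetical `Post` and `Comment` models where `Post has_many :comments`:

```ruby
# app/controllers/posts_controller.rb
class PostsController < ApplicationController
  def index
    @posts = Post.all  # fetches posts only; comments are loaded lazily
  end
end
```

```erb
<%# app/views/posts/index.html.erb %>
<% @posts.each do |post| %>
  <%= post.title %>
  <%= post.comments.size %> <%# triggers one extra query per post %>
<% end %>
```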
This will produce the following queries stack:
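Assuming the hypothetical `Post`/`Comment` models, one query fetches the posts and then one extra query runs per post:

```sql
SELECT "posts".* FROM "posts"
SELECT "comments".* FROM "comments" WHERE "comments"."post_id" = 1
SELECT "comments".* FROM "comments" WHERE "comments"."post_id" = 2
SELECT "comments".* FROM "comments" WHERE "comments"."post_id" = 3
-- ...one SELECT per post...
```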
But it could be just two queries:
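For the hypothetical `Post`/`Comment` example, eager loading fetches the same data like this:

```sql
SELECT "posts".* FROM "posts"
SELECT "comments".* FROM "comments" WHERE "comments"."post_id" IN (1, 2, 3)
```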
To decrease the number of queries, we should use eager loading. ActiveRecord has several methods that support this (#includes, #preload, #eager_load). These ensure that all specified relations are prefetched using the fewest possible queries. All we need to do is to modify the controller:
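A sketch of the fix for the hypothetical `Post`/`Comment` example:

```ruby
# app/controllers/posts_controller.rb
class PostsController < ApplicationController
  def index
    @posts = Post.includes(:comments)  # comments are prefetched up front
  end
end
```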
Bear in mind that N+1 queries don’t only occur with fetching data. Your app may also suffer from them when removing an object. Consider the following scenario:
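A sketch, assuming a hypothetical `User` model that destroys its posts along with itself:

```ruby
class User < ApplicationRecord
  has_many :posts, dependent: :destroy
end

User.find(1).destroy  # loads every post and destroys each one separately
```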
This may produce:
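Assuming hypothetical `User`/`Post` models with `dependent: :destroy`, the log might look like:

```sql
SELECT "posts".* FROM "posts" WHERE "posts"."user_id" = 1
DELETE FROM "posts" WHERE "posts"."id" = 1
DELETE FROM "posts" WHERE "posts"."id" = 2
-- ...one DELETE per associated post...
DELETE FROM "users" WHERE "users"."id" = 1
```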
That is because `dependent: :destroy` calls #destroy on every associated object (running its callbacks). If the callbacks are not needed, use `dependent: :delete_all` instead, which removes the rows in a single statement. The generated query is:
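Assuming hypothetical `User`/`Post` models, the per-row deletes collapse into one statement:

```sql
DELETE FROM "posts" WHERE "posts"."user_id" = 1
```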
The Bullet gem helps you spot N+1 queries by showing a JS alert each time one occurs. For API-only apps, you can configure it to output issues to the Rails logs (serializers can cause N+1 queries just as views do!).
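A minimal development configuration sketch for Bullet:

```ruby
# config/environments/development.rb
config.after_initialize do
  Bullet.enable       = true
  Bullet.alert        = true  # JS alert in the browser
  Bullet.rails_logger = true  # write warnings to the Rails log (handy for API-only apps)
end
```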
A database index is a separate data structure that records where and how the actual data resides in the data blocks on disk. That information improves the performance of data retrieval operations. But remember the cost of indexing: for every index on a table, there is a penalty on both inserting and updating rows. Indexes also take space on disk and in memory. Moreover, having too many indexes on the same table forces the database to choose between them, which can harm performance rather than improve it.
You should be sure to index the following: foreign keys; columns frequently used in WHERE, ORDER BY or GROUP BY clauses; columns backing uniqueness validations; and both columns of polymorphic associations (the type and the id).
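A migration sketch adding an index on a hypothetical foreign key:

```ruby
class AddIndexToCommentsPostId < ActiveRecord::Migration[7.0]
  def change
    add_index :comments, :post_id  # speeds up comment lookups per post
  end
end
```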
The lol_dba gem is a tool that scans your models and checks whether any columns (such as foreign keys) are missing indexes in the database.
When your app imports CSV files or fetches batches of data from external APIs, you may experience slowness caused by creating many AR objects and issuing one SQL INSERT per record. The activerecord-import gem allows you to insert a batch of records in (possibly) far fewer queries.
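A bulk-insert sketch using activerecord-import's `#import`, assuming a hypothetical `Book` model and pre-parsed CSV rows:

```ruby
books = csv_rows.map { |row| Book.new(title: row[0], author: row[1]) }
Book.import(books)  # one multi-row INSERT instead of one INSERT per record
```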
You don't have to worry too much about memory management while programming in Ruby. When there is a need to store something in memory, you simply create a variable, modify it, read from it, and that's it. In this cycle there is no need to remove variables (deallocate memory), because Ruby does it for you using the garbage collector (GC). It's super-handy and less error-prone, but the downside is that each GC session pauses your application's execution, which obviously affects performance. The rule is simple: the more objects you create, the more memory you allocate, the more GC sessions are required, and thus the slower your app is. Many of Ruby's built-in methods are slow because they create an object copy in memory that must then be released by GC. Make sure you use the proper method for a given problem; there is a nice repository with a list of method idioms and benchmarks included.
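The effect is easy to see with allocation counters. A minimal sketch comparing `String#+=` (which builds a new string on every iteration) with the in-place `String#<<`:

```ruby
def build_with_plus(parts)
  out = +""
  parts.each { |p| out += p }  # += allocates a brand-new string each time
  out
end

def build_with_shovel(parts)
  out = +""
  parts.each { |p| out << p }  # << mutates the same string, no copies
  out
end

parts = ["a"] * 1_000

base = GC.stat(:total_allocated_objects)
build_with_plus(parts)
plus_allocs = GC.stat(:total_allocated_objects) - base

base = GC.stat(:total_allocated_objects)
build_with_shovel(parts)
shovel_allocs = GC.stat(:total_allocated_objects) - base

puts "+=: #{plus_allocs} objects, <<: #{shovel_allocs} objects"
```

Both produce the same string, but the `+=` version allocates roughly one throwaway object per iteration, all of which the GC must later reclaim.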
You can measure how much time your app spends on GC pauses using the gc_tracer gem.
Caching allows serving static content that is already stored in memory, bypassing all the operations that would be executed during a typical request, such as database queries, app logic and view rendering.
As the old joke goes, there are only two hard things in programming: cache invalidation, naming things and off-by-one errors. While the third is a joke, the first is a true headache. Fortunately, Rails, backed by Memcached or Redis, comes with a really powerful built-in mechanism for making computation reusable. To implement caching in your app, make sure you are familiar with the very nice official Rails guide (especially fragment caching, Russian doll caching and low-level caching). When building an API app you can use caching too; just make sure that the serializer you use supports it (e.g. active_model_serializers).
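A low-level caching sketch, assuming a hypothetical `Product` model and a slow external price lookup:

```ruby
class Product < ApplicationRecord
  def competing_price
    # Computed once, then served from the cache store for 12 hours.
    Rails.cache.fetch("products/#{id}/competing_price", expires_in: 12.hours) do
      ExternalPriceApi.fetch(id)  # hypothetical expensive call
    end
  end
end
```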