Ruby on Rails optimization techniques

Ruby on Rails apps have a reputation for being slow. However, thanks to the framework’s simplicity, readability, and the many good techniques and tools available out of the box, it’s often easy to make ROR apps perform better than apps built on other frameworks.

So unless your app’s business model depends heavily on how much each transaction costs (e.g. WhatsApp, Twitter, and other social networks), ROR can be the right tool for you.

Let’s begin!

No Optimization

No optimization article would be complete without mentioning premature optimization. So the first rule of optimization: don’t optimize unless you have a problem. Unless you know for sure there’s going to be a bottleneck, focus on readability and business value instead of solving a problem that doesn’t exist.


The second rule is: measure! How do you know you are making improvements unless you have numbers to back you up?

There are different tools available based on what you want to measure.

For optimizing specific methods you can use the built-in Benchmark module:

require 'benchmark'
puts Benchmark.measure { 10_000.times { your_code } }
#        user     system      total        real
# => 1.500000   0.000000   1.500000 (  1.500000)
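If you want to compare two candidate implementations rather than time one in isolation, Benchmark.bm prints a small side-by-side report. The labels and bodies below are purely illustrative:

```ruby
require 'benchmark'

# Compare two ways of building the same string; 15 is the label column width.
n = 100_000
Benchmark.bm(15) do |x|
  x.report('concatenation:') { n.times { 'a' + 'b' + 'c' } }
  x.report('interpolation:') { n.times { "ab#{'c'}" } }
end
```

Each report line shows user, system, total, and real time for its block, so you can pick the faster variant with actual numbers instead of intuition.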

For measuring web requests you can use a handy gem called rack-mini-profiler.

Or you could use a very primitive combination of time and curl:

time curl http://localhost:3000/ > /dev/null 2>&1

There are a few important details related to measuring execution time.

After you change your code, make sure you don’t measure the first request to your server, because the code needs to be reloaded first. The same goes for micro-optimizations: your processor caches need to be warmed up first.

Once you have measured the execution time of your code you can begin thinking of different techniques to speed up your app.

Database Optimizations

We are going to start with database optimizations: the more data you have, the easier it is to shoot yourself in the foot with suboptimal queries. Thankfully, by following a few simple techniques, we can get 80% of the way to fast DB access.

We are going to talk about:

  • Getting rid of N+1 requests
  • Adding Indexes
  • Rewriting ORM queries in plain SQL
  • Denormalizing the database
  • Doing inserts in a transaction

Getting rid of N+1 requests

Usually, the first thing I look at when trying to optimize a web request is getting rid of N+1 SQL queries.

The easiest way to spot them is to open your logs, make a new HTTP request and see if there are many identical SQL queries being logged.

You can also use the handy bullet gem.
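Bullet warns you, in the log or via a browser alert, whenever it detects an N+1 query. A minimal development-only configuration might look like this sketch (see the gem’s README for the full list of notifier options):

```ruby
# config/environments/development.rb
config.after_initialize do
  Bullet.enable        = true  # detect N+1 queries and unused eager loading
  Bullet.alert         = true  # pop a JavaScript alert in the browser
  Bullet.bullet_logger = true  # write findings to log/bullet.log
  Bullet.rails_logger  = true  # also write them to the Rails logger
end
```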

An example of N+1 is an application with many articles, where each article has an author and many comments.

Say you want to display all articles, and each article shows its author and comments.
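Assuming a conventional schema for this example, the models might be defined like this (the names are illustrative):

```ruby
class Author < ApplicationRecord
  has_many :articles
end

class Article < ApplicationRecord
  belongs_to :author   # articles table has an author_id column
  has_many :comments
end

class Comment < ApplicationRecord
  belongs_to :article  # comments table has an article_id column
end
```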

It takes 1 SQL request to load the articles; however, when your code iterates over them and tries to access each article’s author or comments, it makes an additional SQL request for each one.

articles = Article.all
articles.each do |a|
  puts a.author.name    # one extra query per article
  puts a.comments.size  # and another per article
end

Given you have 100 articles, this code is going to make 201 SQL requests.

You could turn this into 1 SQL request by adding just a bit of code.

articles = Article.eager_load(:author, :comments)

However, this makes the database perform a huge join. A better alternative is often "includes":

articles = Article.includes(:author, :comments)

"includes" is going to make 3 SQL queries:

SELECT * FROM articles
SELECT * FROM authors WHERE authors.id IN (...)
SELECT * FROM comments WHERE comments.article_id IN (...)

In practice, this is often faster than a huge join.

When serializing complex nested data structures you could use a dedicated serializer gem.


Adding Indexes

If your table has more than 1,000 records, it is vital to consider whether it needs indexes.

An index can be created for a column to make SELECT queries that filter by this column faster.

Imagine a phone book: if you are searching for a friend’s number by name and the names are not sorted, it’s fine to scan through maybe 50 contacts, but with more it becomes a nightmare. It is much easier to find your friend when the contacts are sorted by name.

This is very close to what the database does when you create an index: it builds a data structure in which references to records are sorted by the specified column.

You need to analyze which columns your SELECTs filter by and create indexes for them.

For most apps, it is better to err on the side of too many indexes than too few.

The downsides of indexes are extra storage usage and slightly slower UPDATE and INSERT queries on that table, since indexes need to be updated each time values in the indexed column change.
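In Rails, indexes are added in a migration. A sketch for the articles example above (the migration version tag depends on your Rails version):

```ruby
class AddIndexesForArticleLookups < ActiveRecord::Migration[6.0]
  def change
    # Speeds up lookups filtered by these foreign keys,
    # including the IN (...) queries generated by "includes".
    add_index :articles, :author_id
    add_index :comments, :article_id
  end
end
```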

Rewriting ORM queries in plain SQL

ORMs are a controversial topic: some love them, some hate them. Personally, I like them for the productivity boost, but it comes at a cost, especially in ROR, where every model object is very smart and complex.

When you need to load many records at once, it might be a better idea to fetch the data you need via plain SQL. Since instantiating ActiveRecord objects is costly, this can speed up your code several times over.

For most of us, writing business logic in our programming language is more natural and convenient than writing it in SQL. However, when you are loading a lot of data and then crunching it, and that code is slow, rewriting it in SQL can make it significantly faster.
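As a sketch, bypassing ActiveRecord instantiation for a read-heavy page might look like this (the published column is an assumption, not part of the example schema above):

```ruby
# select_all returns an ActiveRecord::Result; to_a gives plain hashes
# instead of Article objects, skipping model instantiation and callbacks.
rows = ActiveRecord::Base.connection.select_all(
  "SELECT id, title, author_id FROM articles WHERE published = TRUE"
).to_a

# When you only need a few columns, pluck also avoids model objects:
titles = Article.where(published: true).pluck(:title)
```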

Denormalizing the database

Denormalizing comes at the cost of maintainability, but sometimes it is inevitable to reach the necessary performance.

The main idea: instead of calculating information on the fly with joins on every request, you store the precomputed result in a database column.

E.g. when you want to show the user rating of some shop, instead of loading all its ratings and taking the average on every request, you can calculate that average once a day and store it in a column on the shops table.

The same can be done with counts: instead of running queries to calculate how many comments each blog post has, you can store the count in a column on the post.
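Rails supports the comment-count case directly via counter caches. Assuming a comments_count integer column exists on the posts table:

```ruby
class Comment < ApplicationRecord
  # Keeps posts.comments_count up to date automatically on create/destroy,
  # so displaying the count never needs a COUNT(*) query.
  belongs_to :post, counter_cache: true
end

# Later: post.comments_count reads the stored value with no extra query.
```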

Doing inserts in a transaction

When you are inserting many records at once there’s a trick to make it faster: put all inserts inside a transaction. Besides helping with consistency, it can make inserts several times faster.
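A sketch of the trick, wrapping a loop of inserts in a single transaction (csv_rows is a hypothetical source of data):

```ruby
# One transaction means one commit instead of one per row,
# which is what makes bulk inserts several times faster.
ActiveRecord::Base.transaction do
  csv_rows.each do |row|
    Article.create!(title: row[:title], author_id: row[:author_id])
  end
end
```

On Rails 6+ you can go further with insert_all, which builds a single multi-row INSERT statement, at the cost of skipping model validations and callbacks.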


Caching

Caching is perhaps the most powerful optimization technique, but it can also be hard to get right and can cause many bugs (right away, and later when you change your code).

Caching views

One of the highest levels of caching (ignoring HTTP) is caching HTML in your views.

You can either rely on Rails "magic":

<% @articles.each do |article| %>
  <% cache article do %>
    <your-html />
  <% end %>
<% end %>

Or generate cache keys yourself

<% @articles.each do |article| %>
  <% Rails.cache.fetch(
     "articles/#{article.id}-#{article.updated_at}/details", expires_in: 1.hour) do %>
    <your-html />
  <% end %>
<% end %>

On the one hand, you can avoid a whole lot of queries and HTML generation; on the other, you need to be very careful about what data you include in your HTML and make sure the data in the cache key uniquely identifies the content. E.g. if your HTML contains data specific to a user, you need to include that user’s id in the cache key, otherwise one user is going to see another user’s content, which is not only confusing but can be a gaping hole in your security.

It’s also important to consider whether your content always needs to be up to date.

E.g. if a new article was added and you are showing the article list, how soon does it have to appear?

One way to deal with this is expiring the cache every time a new article is created, but it can be very hard to track all the places where an article can be updated.

A common technique to deal with having user-specific content on cached pages is caching the whole fragment without user information, and then loading user info via AJAX upon page load.

Caching DB queries

One level below view caching lies caching DB queries. Often they are slow and could use some speeding up.

Rails.cache.fetch("user_articles_#{current_user.id}", expires_in: 1.hour) do
  Article.where(owner: current_user)
end

There’s actually a bug in the code above: nothing gets cached, because the block returns only the relation, not the actual records. It can be fixed by adding .load or .to_a, which actually loads the records.

Rails.cache.fetch("user_articles_#{current_user.id}", expires_in: 1.hour) do
  Article.where(owner: current_user).load
end

Caching DB queries has the same pitfalls as caching views. You need to be very careful!

Not caching

That sounds like a weird technique, but having too much caching can in fact slow your app down, in some cases by a lot. Usually your cache store (e.g. Redis) lives on a remote server, and making requests to it incurs a real cost. To write to the cache you need to compute a key, serialize the content, and send it over the network; to read from the cache you need to compute the key, fetch the content over the network, and deserialize it.

You should never cache things like:

<% Rails.cache.fetch("long_piece_of_html") do %>
  This is a title
  <!-- more plain html -->
<% end %>

Unless they have some DB queries inside or a lot of data crunching.

It’s much easier to generate a string locally than to make a network request for it.

HTTP optimizations

You can squeeze out more juice out of your app by correctly configuring Nginx and your assets.

Turning on HTTP/2 is fairly easy and provides a good boost to download speed, especially if many files live on your server, since they can be downloaded in parallel.

It is very important to enable Gzip compression for your files; it can be done in Nginx.
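A minimal Nginx snippet might look like this; the compression level and MIME types are a reasonable starting point, not a prescription:

```nginx
gzip on;
gzip_comp_level 5;
gzip_min_length 256;  # skip tiny responses where gzip overhead isn't worth it
gzip_types text/css application/javascript application/json image/svg+xml;
```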

Better yet, use Brotli compression. It’s a newer compression algorithm developed by Google, optimized specifically for the web. It can reduce your asset size by 14-20% compared to Gzip. Sadly, configuring it in Nginx is not very easy, but if you are using Cloudflare, you can enable it by simply toggling a checkbox.

Another optimization you can add to your pages is resource hints for the web browser.

There are prefetch, preconnect, preload, and prerender.

They can be used like this:

<link rel="prefetch" href="//example.com/next-page.js">

This technique requires you to think carefully about your assets, and it helps the browser load relevant content sooner.

Caching assets

By assets I mean images, JS and CSS files.

It’s a huge topic in and of itself, but too important to skip.

Usually, during deployment, Rails generates asset files with unique names based on their content.

It runs a hash function over the file content and appends the digest to the file name, so a file can become my_js_2086A9193BE7DD4E916989BFFACDB767.js. This way, if the file ever changes, its name changes as well, so you can safely cache it.

You should never serve these files via your ROR server; it’s just not meant for that.

A much better alternative is placing your ROR server behind a reverse proxy like Nginx, which reads these files from the filesystem and serves them for you. In this case, you need to make sure that Nginx adds the correct headers when serving these files:

location ^~ /assets/ {
    gzip_static on;
    expires max;
    add_header 'Cache-Control' 'public';
}

An even better alternative is placing your server behind a proxy like Cloudflare, which looks at response headers, caches your content on its own servers, and delivers it to your clients via a global CDN.

Optimize your images

Images are often the heaviest assets on your site. You battle to save 50 KB in your JS bundle, only to add a 1 MB image without giving it a second thought.

You should serve images at matching resolutions: if your designer gives you an image in 4K resolution, you shouldn’t use it as is.

Better yet, use "srcset" to specify images of different resolutions for different screen sizes.

Also, remember that two images with the same resolution can have drastically different file sizes depending on how they were encoded. Most of the time you can sacrifice a bit of quality to get an image that is 10 times lighter.

Another easy optimization technique is lazy loading images: only load them when the user scrolls and they appear on the screen.

Also, the Chrome team has promised native lazy loading functionality starting in Chrome 75.

It's going to be as easy as adding a "loading" attribute to your img tags:

<img src="example.jpg" loading="lazy" />

Split your JavaScript

This one requires a bit of thinking if you don’t have a SPA with chunk splitting, but can drastically reduce your JS bundle size. For example, you could split everything that requires interactivity from your main JS bundle and exclude it from your home page, which often has only marketing details.

Move logic to background workers

Much of the work that happens during a web request doesn’t have to complete before we show the user a response.

The classic example is emails.

Imagine that a user is buying a product. The code would look something like:

def buy_product
  charge_user(user, product)
  send_success_email(user, product)
  user.money_left
end

Sending the email could be slow; instead of waiting for it and degrading the user’s experience, we can move it to a background worker.
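With ActiveJob this could look like the sketch below; the job and method names are illustrative, and a queue backend like Sidekiq still has to be configured:

```ruby
class SuccessEmailJob < ApplicationJob
  queue_as :default

  def perform(user_id, product_id)
    user    = User.find(user_id)
    product = Product.find(product_id)
    send_success_email(user, product)  # the slow part now runs off-request
  end
end

def buy_product
  charge_user(user, product)
  SuccessEmailJob.perform_later(user.id, product.id)  # enqueues and returns
  user.money_left
end
```

Note that the job receives ids rather than objects, so the records are re-fetched fresh when the worker runs.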

There are other benefits to this technique besides speed:

  • Worker code can be retried, which is especially useful for transient errors
  • Your app is much more resilient to sudden spikes of requests
  • It is easier to separate your app into different services later

DRY your code

DRY - don’t repeat yourself.

By following this core software development practice, not only can you be more productive and reduce bugs related to changing code, but your code can be faster as well.

By making sure your code doesn’t repeat itself, you can spend time optimizing it in one place and anywhere it’s used is going to benefit from it.

The story continues

There are infinite ways your code can be optimized; I only mentioned the ones I deem most impactful for an average web app.

Follow these practices, but make sure you don’t go overboard: your end goal as a developer is delivering business value, not chasing numbers.
