Turbocharge HTTP requests in Ruby


The problem

The slowest part of many applications is I/O, especially network I/O. We spend a lot of time trying to reduce the number of calls we make and to cache the results of API calls to third-party services and resources.

Imagine we want to fetch data from the Rick and Morty API. In this article, we are going to speed up subsequent requests to this API by almost 4x.

The solution

And yet there's a trick, built right into HTTP, that even very senior developers and popular API clients/libraries forget about, and it can shave precious time off your network calls.

Establishing an HTTP connection is very costly, especially over TLS. This is a fixed price added to every HTTP call, and it can be avoided by using keep-alive, a mechanism built into HTTP.

According to Wikipedia:
HTTP keep-alive, or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair.
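Ruby's Net::HTTP already gives you this within a single session: every request issued inside a start block reuses the socket opened at block entry, since HTTP/1.1 connections are persistent by default. A quick sketch (example.com stands in for any HTTP endpoint):

```ruby
require "net/http"
require "uri"

uri = URI("http://example.com")

codes = []
# All three requests share the single TCP connection opened by `start`;
# the socket is closed only when the block exits.
Net::HTTP.start(uri.host, uri.port) do |http|
  3.times do
    codes << http.request(Net::HTTP::Get.new(uri.request_uri)).code
  end
end

puts codes.inspect
```

The limitation is that the connection lives only as long as the block, which is what the client below works around.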


Desired usage

We want our solution to resemble Ruby's standard library as much as possible, namely Net::HTTP.

Using Ruby's standard library we could use:

uri = URI("http://example.com")
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)

Our PersistentHttpClient usage would look like this:

uri = URI("http://example.com")
http = PersistentHttpClient.get(uri)
request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)
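The full implementation isn't shown here, but a minimal sketch of what such a PersistentHttpClient could look like follows. The option names and defaults are assumptions; the key idea is a mutex-guarded cache of started Net::HTTP clients keyed by scheme, host and port:

```ruby
require "net/http"
require "uri"

# Sketch: keeps one started Net::HTTP connection per scheme://host:port
# and reuses it across calls, so the TCP/TLS handshake is paid only once.
module PersistentHttpClient
  # Assumed defaults; tune for your workload.
  DEFAULT_OPTIONS = { read_timeout: 10, open_timeout: 5, keep_alive_timeout: 30 }.freeze

  @clients = {}
  @mutex = Mutex.new

  class << self
    # Returns a started client for the URI's origin, creating and caching
    # it on first use.
    def get(uri, options = {})
      @mutex.synchronize do
        @clients[client_key(uri)] ||= build_client(uri, options)
      end
    end

    private

    def client_key(uri)
      "#{uri.scheme}://#{uri.host}:#{uri.port}"
    end

    def build_client(uri, options)
      opts = DEFAULT_OPTIONS.merge(options)
      http = Net::HTTP.new(uri.host, uri.port)
      http.use_ssl = uri.scheme == "https"
      http.read_timeout = opts[:read_timeout]
      http.open_timeout = opts[:open_timeout]
      http.keep_alive_timeout = opts[:keep_alive_timeout]
      http.start # open the connection once; it stays alive between requests
      http
    end
  end
end
```

A production version would also evict clients whose keep-alive window has lapsed and guard against sharing one connection across threads mid-request.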



As we can see, for this simple HTTP call the speed gain is a whopping 3.9x!
Of course, for API calls where the server takes longer to process a request the gain won't be as big, but for simple calls the difference can't be ignored.
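The exact numbers depend on your network and the server, but a measurement along these lines is easy to reproduce with the standard Benchmark module (the request count and endpoint here are arbitrary):

```ruby
require "benchmark"
require "net/http"
require "uri"

uri = URI("http://example.com")
n = 5

# Cold: a fresh connection (and handshake) for every request.
cold = Benchmark.realtime do
  n.times do
    Net::HTTP.new(uri.host, uri.port).request(Net::HTTP::Get.new(uri.request_uri))
  end
end

# Warm: one shared keep-alive connection for all requests.
warm = Benchmark.realtime do
  Net::HTTP.start(uri.host, uri.port) do |http|
    n.times { http.request(Net::HTTP::Get.new(uri.request_uri)) }
  end
end

puts format("cold: %.2fs  warm: %.2fs  speedup: %.1fx", cold, warm, cold / warm)
```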

The advantage of this approach over something like github.com/drbrain/net-http-persistent is that the solution in this article doesn't keep connections open indefinitely, draining resources and leaking memory. It also manages a cache of clients automatically instead of leaving you to deal with that ad hoc.

It's important to remember that the keep-alive timeout you set on the Ruby side must also be honored by the server that receives it; some servers may choose to close a connection despite this setting.

What's on the server?

It's also important to mention that while keep-alive connections provide utility to clients, they can overload your servers if not configured properly. Misbehaving clients can hog your memory, so it's important to place your precious application servers behind a reverse proxy such as Nginx.

Thank you for reading!
