09 April 2019

Running ActiveJob after transaction

Transactions!

If you know how to cook them, they can be a real lifesaver when it comes to dealing with inconsistent data.

But...

There's always a "but", am I right?

Unfortunately, at some point you add code to your system that has side effects outside of your database, and obviously transactions won't help you with that.

One such thing is Active Job (unless it's backed by Delayed Job, which stores your jobs in the same database).

Imagine the following scenario:
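Something along these lines (a sketch rather than the exact original snippet; order_params and the shape of the ChargeCustomer call are just for illustration):

class OrdersController < ApplicationController
  def create
    ActiveRecord::Base.transaction do
      order = Order.create!(order_params)

      # talks to the payment gateway and raises on failure,
      # rolling the whole transaction back
      ChargeCustomer.new(order).call

      # side effects outside of the database
      SendSmsJob.perform_later(order)
      PrepareForShippingJob.perform_later(order)
    end
  end
end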

What is wrong here? A lot, actually!

First of all, SendSmsJob: the SMS is sent regardless of whether the charge via ChargeCustomer succeeds. If the charge fails, the customer still gets the SMS, even though the whole transaction was rolled back. A confusing experience.

Second, PrepareForShippingJob (it could be any other job referencing the order). What happens if the job is executed before the transaction commits? Payment gateways are not the fastest kids on the block, and queues like Sidekiq are pretty fast. So even though the order exists in the session that opened the transaction, it doesn't exist yet in the session your Active Job worker is using, and you get an annoying ActiveRecord::RecordNotFound error. Yuck!

Rails magic to the rescue

Turns out ActiveRecord keeps a list of records that were part of a transaction and calls committed! on them once the transaction commits. If we inspect active_record/transactions.rb, we can pretend to be one of those records.
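A minimal pretender could look roughly like this (the class name is made up, and the methods mirror what the Rails 5 transaction machinery calls on registered records, so the exact set may vary between versions):

class AfterCommitEnqueue
  def initialize(&block)
    @block = block
  end

  # ActiveRecord calls this on every registered record once the transaction commits
  def committed!(*)
    @block.call
  end

  # called right before the commit callbacks; nothing for us to do here
  def before_committed!(*)
  end

  # called when the transaction rolls back; the block simply never runs
  def rolledback!(*)
  end
end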

Now all that is left is to add ourselves to the list of records in a transaction.
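That list lives on the connection, so a tiny helper can register our pretender there via the internal add_transaction_record method (again, a sketch):

module AfterTransaction
  # runs the block after the current transaction commits,
  # or right away if there is no open transaction
  def self.run(&block)
    connection = ActiveRecord::Base.connection
    if connection.transaction_open?
      connection.add_transaction_record(AfterCommitEnqueue.new(&block))
    else
      block.call
    end
  end
end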

Let's see how our code is going to change.
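With a helper like the sketch above, the job enqueues get wrapped and everything else stays the same:

ActiveRecord::Base.transaction do
  order = Order.create!(order_params)
  ChargeCustomer.new(order).call

  AfterTransaction.run do
    SendSmsJob.perform_later(order)
    PrepareForShippingJob.perform_later(order)
  end
end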

Not bad, not bad at all. With this change, if the code is running within a transaction, it waits until the transaction is over and only then performs the actions with uncontrollable side effects.

Having said that, I have to note that this is not a silver bullet, and the problem described above could also be mitigated by restructuring the code. But nevertheless it's a very handy tool to have. Cheers!

02 April 2019

Importing Compose.IO MongoDB dump into local docker instance

Sometimes, in order to easily test performance on your local machine or to test data migrations on real data, you need to import a production database. To make things a bit more complicated, we are going to use Docker. In general I avoid writing such tutorials, but I was surprised to find out there are no tutorials for this particular operation.
If you inspect the contents of the dump that Compose provides, it looks something like this:
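Roughly like this (my_db stands in for your actual database name, and the exact set of files depends on the MongoDB version that produced the dump):

backup/
  my_db.0        # my_db is a placeholder for your database name
  my_db.1
  my_db.ns
  local.0
  local.ns
  journal/
  mongod.lock
  storage.bson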

What this folder contains is actually raw database data, so there is no need for mongorestore and similar tools. Also keep in mind that even if your local Mongo uses the WiredTiger storage engine, this older format is still going to work.

Just show me the code


Fine!

# tab 1
# extract the backup into a fresh directory
mkdir -p backup
tar -xvf backup.tar -C backup

# stop running mongo instance
docker-compose stop mongo

# spin up mongo container and run a bash command in it
docker-compose run mongo /bin/bash

# executing code inside docker now!

# Assuming you had /data/db persisted in a mounted volume, let's back up the old data in case we screw up:
cp -r /data/db /data/db_backup
rm -rf /data/db/*
Now let's open a new tab and copy the backup data into our container:

# adjust container name if it's different
CONTAINER_ID=$(docker ps -qf "name=mongo")

# copy the backup data into your docker instance
# (docker cp takes a single source path, so copy the directory contents with backup/.)
docker cp backup/. $CONTAINER_ID:/data/db
Back to tab #1:

# tab #1, inside docker
mongo

// now inside the mongo shell

// copy the production data into whatever database name you use locally
db.copyDatabase('my_db_production', 'my_db_development')

use my_db_production

// drop the db we copied from
db.dropDatabase()

Well, that was it. Even if you don't find it useful, I'm sure my future self will be glad I wrote it down.