
Phoenix Models and Migrations Compared to Rails Framework

In my last article I outlined the structure of a new Phoenix project. Today I will show you how Phoenix migrations and models compare to Rails' ActiveRecord and ActiveRecord::Migration.

Given that the last blog post dealt with creating a simple blog app, here we will get cracking on a Post model. We can use the ready-made generators, as we did before, to generate the structure for our application. To create a new post resource we use the following command:

mix phoenix.gen.html Post posts name:string content:text published:boolean

This command is similar to its Rails equivalent:

rails generate scaffold Post name:string content:text published:boolean

In case you haven’t already noticed, the Phoenix creators took a certain amount of inspiration from the Rails framework. This command, as in Rails, creates the model, CRUD views, controller and migration. The difference is that it doesn’t automatically add routes for those actions to our router (although the generator notifies you about this). Let’s add them right now by appending the resources macro to our router:


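A minimal sketch of that change, assuming the application is named Blog and uses the default browser pipeline that Phoenix generates:

```elixir
# web/router.ex -- inside the existing browser scope
scope "/", Blog do
  pipe_through :browser

  get "/", PageController, :index
  resources "/posts", PostController
end
```

The resources macro expands into the full set of RESTful routes (index, show, new, create, edit, update, delete), much like `resources :posts` does in a Rails routes file.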
Similar to Rails, we also need to migrate our changes to the database to create the new table, but first let’s look into the migration file created by the generator:


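The generated file looks roughly like this (the module name follows the app name, here assumed to be Blog; the timestamp prefix in the filename will differ):

```elixir
# priv/repo/migrations/<timestamp>_create_post.exs
defmodule Blog.Repo.Migrations.CreatePost do
  use Ecto.Migration

  def change do
    create table(:posts) do
      add :name, :string
      add :content, :text
      add :published, :boolean, default: false, null: false

      # adds inserted_at and updated_at columns
      timestamps()
    end
  end
end
```

Running `mix ecto.migrate` applies it, the same way `rails db:migrate` would in a Rails project.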
Migrations in Phoenix are really similar to those in Rails. The biggest point of divergence is the notation, which stems from a different paradigm: in Ruby we call methods on objects with dot notation, whereas in Elixir the migration is built from plain functions and macros. If you look at this migration and wonder, “Well, yeah, but what if I have to tackle a more complex migration and the rollback can’t be derived automatically from the change function?”, then I’ve got another boring answer for you: it’s the same as in Rails, where you write two separate “up” and “down” functions instead. This makes me a happy bunny because Rails migrations are really top notch, in my humble opinion, and I’ve never had any problems with them as long as they were used properly.

Now let’s head back to check out our post model. Models in Phoenix are actually just Elixir structs, which in essence are good old maps. They merely provide a more convenient way to address fields, using dot notation instead of the Map.get/2 function. Below you can see our post model:


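Roughly what the generator produces (again assuming the app is named Blog; this follows the Phoenix 1.x `web/models` layout):

```elixir
# web/models/post.ex
defmodule Blog.Post do
  use Blog.Web, :model

  schema "posts" do
    field :name, :string
    field :content, :string
    field :published, :boolean, default: false

    # adds inserted_at and updated_at fields
    timestamps()
  end

  def changeset(struct, params \\ %{}) do
    struct
    |> cast(params, [:name, :content, :published])
    |> validate_required([:name, :content, :published])
  end
end
```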
The first line in our model is just an import of functions specific to models in our application. By default, it imports functions from Ecto, which is a sort-of-ORM library (it’s actually just a database wrapper) for Elixir. These functions are used later, so let’s move on. In the next few lines we see the definition of the schema for the table used by this model. It contains the database table name, the fields, their types, default values and a timestamps macro which, similar to Rails, adds two timestamp fields: inserted_at and updated_at (Ecto’s names for Rails’ created_at and updated_at). The id field is added by the schema itself by default.

Normally this is also where we would declare any relationships this model has. I really like the fact that the schema lives in the model – I don’t need to use libraries like annotate as I would with ActiveRecord models. The next function is called changeset, and this is where we can see some really nice architecture coming into play. A changeset is really a function which takes a pre-existing model (the first parameter, called struct) and new params (for example from an HTML form) and checks whether the model is valid after the fresh changes. Why is this so awesome, you may well ask, since Rails has been doing this for years already? The difference is in the scope of the validation.

Whereas the regular “Rails way” would involve adding validations in the global scope of the model, here we just do it in the function. If business logic requires a different validation depending, for example, on the role of the user, we can simply create another changeset function. Of course, in our Ruby world we already have this kind of thing at our disposal via dry-validation or form objects, but in practice not that many projects in the Rails ecosystem use them – here, however, it’s the default. In this function we can also see the pipe operator in use – if you’ve ever written a more complex bash pipeline, you’ll feel right at home. What we really do here is take our struct, which is data, cast the new params onto it (we don’t change the struct itself – immutability!) and finally validate the presence of the required fields in the new struct. There are, of course, more validate functions we can use, and you can find them in the Ecto.Changeset docs.
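To illustrate the role-based idea, here is a hypothetical pair of changesets (the function names user_changeset and admin_changeset are my own, not generated code): regular users may edit the name and content, while only the admin changeset additionally allows flipping the published flag.

```elixir
# Hypothetical example: two changesets with different validation scopes.
def user_changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:name, :content])
  |> validate_required([:name, :content])
end

def admin_changeset(struct, params \\ %{}) do
  struct
  |> user_changeset(params)
  |> cast(params, [:published])
end
```

The controller then simply picks the changeset matching the current user’s role – no conditional validation logic hidden inside a single global model scope.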

That pretty much wraps things up for today. To summarise, we learned how Phoenix migrations and models compare to their Rails counterparts. The migrations are really similar, while the models are quite different – they propose an approach unlike Rails’ ActiveRecord. By default they say, “Hey, maybe you shouldn’t add global model validations – scope your validations instead,” which is totally awesome. During my time with Rails I learned that a thinner model is a better model: the less state I have to worry about, the better.

In the next post we will get down to the request-response cycle and how we can influence it via pipelines.
