Ready to scale: The 12-factor app methodology
You might have heard the term 12-factor app already. It’s a methodology that applies to any web application, regardless of language or framework, aimed at making it easily deployable and scalable. In this post, we’ll walk you through its main concepts to help you understand why it’s such a big deal and how you can apply the methodology to your next web application.
1. Codebase
A twelve-factor app is always tracked in a version control system, such as Git, Mercurial, or Subversion. A codebase is a single repository (as in a Git repository), which can then be deployed to multiple environments (for example Production, Staging, or other development environments). The app consists of a single codebase that is the same across all deployments, even though each deployment may run a different version of it.
Real life example: The production website is a running instance of the master version of the codebase.
2. Dependencies
This is where you take advantage of your programming language’s dependency manager: Ruby offers Bundler, Python offers pip, Node.js offers npm, PHP offers Composer, and so on. A twelve-factor app relies on local installations of app-specific packages, rather than on system-wide dependencies.
Real life example: A Ruby application that uses Bundler to manage its Gem dependencies
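As a sketch, such a Ruby app declares every dependency explicitly in its Gemfile, which Bundler then installs locally for the app (the gem names and version constraints below are illustrative):

```ruby
# Gemfile — every dependency is declared explicitly and installed
# app-locally by Bundler; nothing is assumed to exist system-wide.
# Gem names and versions are illustrative.
source "https://rubygems.org"

gem "rails",  "~> 7.0"
gem "mysql2", "~> 0.5"
gem "redis",  "~> 5.0"
```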
3. Config
Twelve-factor apps strictly separate configuration from code. In practice, this means the codebase is identical across deployments and contains no hard-coded, environment-specific configuration. Environment-specific configuration is stored in environment variables, so we can differentiate based on whether we’re deploying, for example, a Production or a Staging instance.
Real life example: A Rails application separates databases used in config/database.yml by environment and uses ENV variables to fetch this information.
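A minimal sketch of the ENV side of this: the app fetches environment-specific values at boot instead of hard-coding them (the variable name and fallback value here are illustrative assumptions):

```ruby
# Read environment-specific configuration from ENV rather than
# hard-coding it in the codebase. The variable name and the
# development fallback are illustrative.
database_url = ENV.fetch("DATABASE_URL", "mysql2://localhost/app_development")

puts "using database: #{database_url}"
```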
4. Backing services
The code for a twelve-factor app makes no distinction between local and third-party services: we can swap a local MySQL database for a managed database on Amazon Web Services RDS, or a local Redis key-value store for ElastiCache, since a backing service can be either local or remote.
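One way to sketch this: the app only knows a backing service by its URL (taken from config), so a local Redis and a managed ElastiCache cluster are interchangeable. The `REDIS_URL` variable and its default are illustrative assumptions:

```ruby
require "uri"

# Sketch: the app addresses its backing service only through a URL
# from config, so local and remote services are interchangeable.
redis_url = ENV.fetch("REDIS_URL", "redis://localhost:6379")
uri = URI.parse(redis_url)

puts "connecting to #{uri.host}:#{uri.port}"
```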
5. Build, release, run
Each app goes through three distinct stages: the build stage, where dependencies are installed and the app is prepared for release; the release stage, where the built application is combined with the environment’s configuration; and the run stage, usually referred to as runtime, where the app runs and serves its users.
All stages should be strictly separated from one another, meaning it should be impossible to apply changes to the code at runtime, since those changes wouldn’t be propagated back to the build stage.
6. Processes
This is where the app gets executed in the execution environment: as one or more stateless, share-nothing processes. Any data that should be shared between the app’s processes must go through a backing service, such as a database or file store.
Real life example: The application’s sessions are not stored in files on the server, since in a multi-server environment that wouldn’t make sense and the app would break. Instead, sessions are stored in a remote cache cluster (for example ElastiCache) accessible by all of the app’s servers.
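A sketch of the idea: session data goes through a shared backend rather than living in process-local state. The in-memory Hash below stands in for a remote cache client, and the class and identifiers are illustrative:

```ruby
# Sketch: processes share nothing directly; shared state lives in a
# backing store. The Hash stands in for a remote cache client.
class SessionStore
  def initialize(backend)
    @backend = backend # anything responding to []= and []
  end

  def write(session_id, data)
    @backend[session_id] = data
  end

  def read(session_id)
    @backend[session_id]
  end
end

shared_cache = {} # in real life: a Redis/ElastiCache client
store = SessionStore.new(shared_cache)
store.write("abc123", user_id: 42)
puts store.read("abc123") # any process using the same backend sees it
```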
7. Port binding
A twelve-factor app is completely self-contained: it doesn’t rely on an external web server in order to be served. A web server can be injected through a dependency declaration and bound to a port, but serving web requests is not the only example of port binding: practically any software can run on a dedicated port, and any such software can act as a backing service for another app.
Real life example: A local development server running on port 3000, or Redis running on port 6379
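A minimal sketch with Ruby’s standard socket library: the app itself binds a port instead of relying on an external web server. Port 0 asks the OS for any free port; a real app would bind a fixed one such as 3000:

```ruby
require "socket"

# Sketch: the app binds its own port and is self-contained.
# Port 0 lets the OS pick a free port for this demonstration.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]

puts "app listening on port #{port}"
server.close
```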
8. Concurrency
A twelve-factor app should be able to scale out across multiple concurrent processes in a reliable manner. In practice, this means we can run as many processes of the application as required, since they are all stateless and standalone.
Real life example: Multiple instances of the app can co-exist inside a server cluster without affecting each other
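The process model can be sketched with fork (Unix-only; a real deployment would use a process manager or the platform’s scaling controls). Because each worker is stateless and shares nothing, adding more of them is how the app scales out:

```ruby
# Sketch: scale out by running more identical, stateless processes.
# fork is Unix-only; worker count and behavior are illustrative.
pids = 3.times.map do
  fork do
    # each worker shares nothing with its siblings
    exit 0
  end
end

pids.each { |pid| Process.wait(pid) }
puts "#{pids.size} workers ran and exited cleanly"
```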
9. Disposability
Disposability is about application startup and shutdown: the application should start up fast, shut down gracefully, and be robust against sudden death.
Real life example: Handling SIGINT and SIGTERM inside the application
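A sketch of graceful shutdown: trap SIGINT and SIGTERM, then finish in-flight work before exiting. The flag-based loop in the comments is illustrative:

```ruby
# Sketch: trap termination signals so the app can shut down
# gracefully instead of dying mid-request.
shutting_down = false
%w[INT TERM].each do |signal|
  Signal.trap(signal) { shutting_down = true }
end

# Illustrative main loop: finish the current unit of work, then
# exit once a shutdown signal has been received:
#   until shutting_down
#     handle_one_request
#   end
puts "traps installed for: #{%w[INT TERM].join(', ')}"
```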
10. Dev/prod parity
The main principle here is that the Development and Production environments should be as similar as possible. In other words, the same backing services and dependencies should be used across development and production, enabling the developer to continuously deploy to both environments. Modern tools like Docker help in this direction.
Real life example: The development & staging environments should use the same type of database as the production one, for example MySQL. Also, the developer is able to deploy to staging or production within hours or even minutes.
11. Logs
Logs in a twelve-factor app are treated as event streams, written unbuffered to an output such as stdout. The app shouldn’t be concerned with managing or rotating log files.
Real life example: The application process logs to stdout, while a separate system process collects those logs
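A minimal sketch with Ruby’s standard Logger: the app writes events to stdout and leaves collection, routing, and rotation to the execution environment:

```ruby
require "logger"

# Sketch: the app emits its log as an event stream on stdout;
# collecting and rotating those logs is the environment's job.
logger = Logger.new($stdout)
logger.info("request handled")
```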
12. Admin processes
One-off admin processes, such as command-line scripts or cron jobs, should run in an environment identical to that of the app’s regular long-running processes. They run against a release, using the same codebase and config as any other process of that release. Admin code must ship with application code to avoid synchronization issues.
Real life example: A rake task inside a Rails application
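A sketch of such a one-off task defined with Rake; the task name and body are illustrative. In a Rails app it would live under lib/tasks and depend on :environment, so it boots the same code and config as the web processes:

```ruby
require "rake"

# Sketch: a one-off admin task, shipped with the app's code and run
# against the same release as the long-running processes.
# Task name and behavior are illustrative.
Rake::Task.define_task(:cleanup_sessions) do
  puts "removing stale sessions"
end

Rake::Task[:cleanup_sessions].invoke
```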
Why is the 12-factor app such a big deal?
The 12-factor app methodology is indeed a big deal: it provides a rock-solid framework that allows developers to consistently ship code that scales reliably. Reliability is the key metric here, since we might need to scale horizontally in order to distribute load across the application’s processes.