What is the Twelve-Factor App?

The twelve-factor app is a set of 12 principles, or best practices, for building web applications, which nowadays are most commonly delivered as software-as-a-service (SaaS) applications. It was published by Adam Wiggins, co-founder of Heroku, in 2011. These principles distill the experience and observations that the people working at Heroku, a platform-as-a-service, gathered across the wide variety of SaaS applications deployed on it.

When a developer uses the twelve-factor methodology, applications share certain characteristics that address a variety of scenarios as an app scales. For example, the methodology recommends that apps use declarative formats for setup automation to assist new developers who join the project later.

Apps should also be written for maximum portability between execution environments, and should scale easily without significant reworking. Twelve-factor apps can be written in any programming language and combined with any backing service, such as a database.

The goal of the twelve-factor framework is to help developers build apps on an architecture that delivers speed, agility, and portability, and ultimately results in a robust and reliable application.

These principles help us create applications that use declarative setup automation, which reduces development time and cost when a new developer joins a project; that don't rely on any particular OS and are easily portable; that are well suited to cloud deployment and continuous deployment; and that scale horizontally without requiring many changes in the codebase. The factors are as follows:

1. Codebase

Single codebase per application tracked in version control with many deploys. Let us break down the official statement and understand its two main terms. In simple terms, a codebase is all the human-written source code, excluding the class or object files generated by tools. A deploy refers to a single running instance of that particular application. Each application must have exactly one codebase, and that codebase must be managed in a version control system; popular VCSs (Version Control Systems) include Git, SVN, and Mercurial. If there are multiple codebases, it's not an application, it's a distributed system. Each component of that system is then an application that should itself follow the twelve-factor principles.


  • There is always a one-to-one correlation between the app and its codebase. If there are multiple codebases, it’s not an app – it’s a distributed system. Each component in a distributed system is a twelve-factor app

  • There may be many deploys (running instances) of the app


  • Multiple apps sharing the same code is a violation of twelve-factor

2. Dependencies

Explicitly declare and isolate dependencies. Most of us have, at least once in our programming careers, downloaded a Python library for some purpose only to find out we had the wrong version of the library, or the wrong version of Python installed on our system. This generally happens because the task of managing dependencies is left to the developer. Hence, this factor states that we must always declare dependencies in a manifest file: a file containing metadata about each dependency, such as its name and version. This speeds up development because the developer is freed from tracking down the correct versions of libraries by hand; there is no need to download the required JARs explicitly anymore.


  • Declare all dependencies, completely and exactly, via a dependency declaration manifest

  • Use a dependency isolation tool to ensure that no implicit dependencies leak in


  • Never rely on the implicit existence of system-wide packages
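As a sketch, a Python project's dependency manifest (a requirements.txt with pinned versions; the package names and versions below are purely illustrative) might look like:

```text
requests==2.31.0
psycopg2-binary==2.9.9
gunicorn==21.2.0
```

A tool such as pip then installs exactly these versions into an isolated virtual environment, so the app never depends on whatever happens to be installed system-wide.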

3. Config

Store config in the environment. Source code and configuration must be completely separated from each other. We must store all configuration, such as DB credentials, paths, and URIs, in environment variables, since in industry practice an application's configuration varies from environment to environment (dev, test, prod, etc.). Also, no configuration should be stored in Git in plain text. There is a very simple litmus test for whether your application follows this principle: ask yourself whether you could make your application open source right now, without making any changes and without compromising any of your credentials.


  • Keep everything that is likely to vary between deploys in config

  • Resource handles to the database, Memcached, and other backing services

  • Credentials to external services such as Amazon S3 or Twitter

  • Per-deploy values such as the canonical host name for the deploy

  • Environment variables shouldn’t be checked into the repository


  • Storing config as constants in code is a violation of twelve-factor
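The practice above can be sketched in a few lines of Python. This is a minimal example, not a prescribed API: the variable names `DATABASE_URL` and `LOG_LEVEL` and the fallback values are hypothetical, with the fallbacks intended for local development only.

```python
import os

def load_config():
    # Configuration comes from the environment, never from constants in code.
    # The second argument is a local-development fallback only.
    return {
        "database_url": os.environ.get("DATABASE_URL", "postgres://localhost:5432/devdb"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }
```

Each deploy then sets its own environment variables, so the same codebase runs unchanged in dev, test, and prod.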

4. Backing Services

Treat backing services as attached resources. In very simple terms, any service that your application consumes over the network is a backing service. Your application must treat these services as attached resources that it consumes over the network. This gives us the advantage that services become easily interchangeable, offering great portability to the application. A simple example: suppose your application currently uses a local PostgreSQL database for its operations; it can later be replaced with one hosted on your company's servers just by changing the URL and the database credentials.


  • Make no distinction between local and third-party services

  • Both should be accessed via a URL or other locator/credentials stored in the config

  • Resources can be attached and detached to deploys at will


  • Having to change the app’s code to swap a resource is a violation of twelve-factor
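The database-swap example above can be sketched as follows. This is an illustration, assuming the connection details live entirely in a URL (the hostnames are hypothetical); swapping the local database for a hosted one is then just a different URL in config, with no code change.

```python
from urllib.parse import urlparse

def connection_params(resource_url):
    """Split a backing-service URL into connection parameters."""
    url = urlparse(resource_url)
    return {"host": url.hostname, "port": url.port, "database": url.path.lstrip("/")}

# Same code, different attached resource:
local = connection_params("postgres://localhost:5432/appdb")
hosted = connection_params("postgres://db.internal.example.com:5432/appdb")
```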

5. Build, Release and Run

Strictly separate build and run stages. The deployment of your application must be properly separated into three non-overlapping, independent phases: build, release, and run. The build phase compiles the code and produces an artifact such as a JAR or WAR file. The second phase, release, takes the artifact generated by the build phase and combines it with the configuration for a particular environment. The last phase, run, launches an instance of the application from that release.
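The three phases can be sketched as data flowing forward: an immutable artifact from build, paired with one environment's config to form a release. The artifact name and config values below are hypothetical examples.

```python
def make_release(artifact, config):
    # A release is an immutable pairing of a build artifact with config;
    # the run stage launches exactly this combination.
    return {
        "release_id": f"{artifact}+{config['env']}",
        "artifact": artifact,
        "config": config,
    }

prod_release = make_release("app-1.4.2.jar", {"env": "prod", "db_url": "postgres://db.example.com/appdb"})
```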

6. Processes

Execute the application as one or more stateless processes. An application follows this principle if its instances can be created and destroyed at any time without affecting the overall functionality of the application. To achieve this, the application must store any data it generates in a persistent (stateful) datastore. That doesn't mean we can't use the in-process memory of the application; we can use it as temporary storage. In simple terms, sticky sessions must be avoided. A sticky session caches a logged-in user's session data in the local memory of one process and then directs each subsequent request from that user to the same process. The problem with sticky sessions is that they break statelessness and cause uneven load balancing among the instances of the application.


  • Execute the app as stateless processes that share nothing with each other

  • Complex apps may have many stateless processes

  • Data that has to be persisted should be stored in stateful backing service


  • Never assume that anything cached in memory or on disk will be available on a future request or job

  • Sticky sessions are a violation of twelve-factor and should never be used or relied upon
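The sticky-session alternative above can be sketched as session state living in a shared backing store rather than in any one process's memory. A plain dict stands in here for a real store such as Redis; the function names are hypothetical.

```python
# Stand-in for a shared backing service (Redis or a database in practice).
backing_store = {}

def save_session(session_id, data):
    # Session state goes to the shared store, never to process-local memory.
    backing_store[session_id] = data

def handle_request(session_id):
    # Any process instance can serve any request: nothing is sticky.
    return backing_store.get(session_id, {})
```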

7. Port Binding

Export services via port binding. Any application that follows this principle is completely self-contained and standalone. It exports HTTP as a service and doesn't require an external server such as Tomcat to listen for requests: it binds itself to a particular port and listens for all requests arriving on that port. You have likely seen this while building an application or microservice that runs on a particular port, hitting that port with Postman to get a response. Which port the service listens on is itself stored in the configuration.


  • Similar to all the backing services you are consuming, your application must also interface via a simple URL

  • You can create a separate URL for your API that your own website uses, which doesn’t go through the same security layers (firewall and authentication), so it’s a bit faster for you than for untrusted clients


  • It can’t rely on the runtime injection of a web container such as Tomcat or Unicorn; instead it must embed a server such as Jetty or Thin
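Self-contained port binding can be sketched with Python's standard-library HTTP server: the app itself binds the socket, and the port number comes from config. `PORT` is a common platform convention (Heroku uses it); the `"0"` fallback asks the OS for any free port, which is handy for local experiments.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# The app binds its own port; no external web container is involved.
port = int(os.environ.get("PORT", "0"))
server = HTTPServer(("127.0.0.1", port), Handler)
bound_port = server.server_address[1]
# server.serve_forever() would start handling requests here.
```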

8. Concurrency

Scale out via the process model. An application that follows this principle is divided into several smaller processes instead of running as a single large application. Each such process must be able to start, terminate, and replicate itself independently at any time. This makes scaling the application very easy. Scaling out refers to horizontal scaling, in which we run multiple instances of our processes. Independent, horizontally scalable processes add concurrency to the application in a very simple way.


  • Because a twelve-factor app exclusively uses stateless processes, it can scale out by adding processes

  • Ensure that a single instance runs in its own process. This independence will help your app scale out gracefully as real-world usage demands


  • It is not desirable to scale out clock (scheduler) processes, though, as they often generate events that you want to run as scheduled singletons within your infrastructure

9. Disposability

Maximize robustness with fast startup and graceful shutdown. Robustness here refers to the graceful starting and termination of an application's processes without affecting the application's overall functionality. For example, suppose one of our application's processes is storing the details of a newly added employee in a database when an unexpected error causes it to terminate mid-operation. The state of our application and database must not be corrupted by this; the process must fail safely. It must also start quickly whenever required.


  • Use components that are ready and waiting to serve jobs once your app is ready

  • Technologies to consider are queues, caches, and databases like RabbitMQ, Redis, Memcached, and CenturyLink’s own Orchestrate. These services are optimized for speed and performance, and help accelerate startup times


  • You should never do any mandatory “cleanup” tasks when the app shuts down that might cause problems if they failed to run in a crash scenario
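One way to make crashes safe, consistent with the bullet above, is to make jobs idempotent, so a process killed mid-run can simply be retried without corrupting state. This is a sketch under assumed names: `processed` and `balance` stand in for durable stores, and `apply_credit` is a hypothetical job.

```python
processed = set()          # stand-in for a durable record of completed jobs
balance = {"alice": 0}     # stand-in for the database

def apply_credit(job_id, account, amount):
    # Idempotent: a job redelivered after a crash or restart is a no-op.
    if job_id in processed:
        return
    balance[account] += amount
    processed.add(job_id)

apply_credit("job-1", "alice", 50)
apply_credit("job-1", "alice", 50)  # duplicate delivery changes nothing
```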

10. Development/Production Parity

Keep development, staging, and production as similar as possible. It simply means that the development and production environments must be as alike as possible: the same processes, technologies, and infrastructure. This helps because errors that would eventually occur will surface at the development stage instead of appearing as surprises in production. This enables continuous deployment of our application and also reduces development time and effort.


  • Adopt continuous integration and continuous deployment models

  • Public cloud services make the infrastructure side easy, at least between QA/test, staging and production. This is one of Docker’s primary benefits (CF)


  • Using different backing services (databases, caches, queues, etc.) in development and production increases the number of bugs that arise from inconsistencies between technologies or integrations

11. Logs

Treat logs as event streams. Logs are essential for understanding the internal workings of an application; they come in different levels and are traditionally written to a file on disk. Ideally, a twelve-factor application should not be concerned with the storage of its logs. Whenever a request enters the system, corresponding log entries are written, and they are treated as a sequence of events that can be used to debug when a problem occurs.


  • View logs in the local console during development; in production, view the automatically captured stream of events, which is pushed into a real-time consolidated system for long-term archival

  • Stream the logs to two places: view them in real time on your dev box, and store them in tools like Elasticsearch, Logstash, and Kafka. SaaS products like Splunk are also popular for this purpose


  • The app itself should never be responsible for routing or storing its own output stream
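The stream-to-stdout approach above can be sketched with Python's standard logging module: the app writes events to stdout and leaves capture, routing, and archival to the execution environment. The logger name and format are illustrative.

```python
import logging
import sys

# The app emits log events to stdout as a stream; it does not manage files,
# rotation, or routing -- the environment does that.
log = logging.getLogger("app")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("request received")
```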

12. Admin Processes

Run admin/management tasks as one-off processes. Most applications require a few one-off tasks to be executed outside the application's normal flow, such as database migrations or one-time scripts. Because these tasks are not needed very often, we generally write a script for them and run it from some other environment. The twelve-factor methodology instead says that such scripts must be part of the codebase itself, managed in the version control system, and that these tasks should also follow the twelve-factor principles.


  • Admin/management tasks described in this final factor should be run in production

  • Perform these activities from a machine in the production environment that’s running the latest production code

  • Provide access to a production console for running one-off tasks


  • Updates should not be run directly against a database or from a local terminal window, because that gives an incomplete and inaccurate picture of actual production behavior

Twelve Factor App Benefits:

  • Use declarative formats for setup automation. This minimizes the time and cost for new developers joining the project

  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments

  • Are suitable for deployment on modern cloud platforms, thus removing the need for servers and systems administration

  • Limits differences between development and production, enabling continuous deployment for maximum agility

  • Can scale up without major changes to tooling, architecture, or development practices, making performance a priority

The Tech Platform


