
How to Choose a Database for Microservices — CAP Theorem

Updated: Jul 5, 2023

Choosing the right database for microservices is a critical decision that can significantly impact the success and performance of your architecture. In this article, we will explore the considerations and best practices for selecting a suitable database for microservices, particularly in the context of designing an e-commerce microservice architecture.


How to Choose a Database for Microservices?

When considering database requirements for microservices, there are several important factors to take into account. Let's explore these key points that can help us understand the database requirements in the context of microservices architecture.

  1. Consistency Level: One crucial consideration is the required level of consistency. Do we need strict consistency or eventual consistency? For example, in the banking industry, strict consistency is vital for operations like debiting or withdrawing funds from a bank account. In such cases, relational databases that support ACID transactions are typically chosen. However, in many microservices architectures, eventual consistency is preferred as it offers greater scalability and high availability.

  2. High Scalability: If our application needs to handle a high volume of requests, it is essential to ensure fast and easy scalability. However, achieving high scalability often involves sacrificing strict consistency. Distributing data across multiple servers introduces network partitioning, making it challenging to maintain strict consistency. Therefore, in highly scalable systems, eventual consistency is typically favored over strict consistency.

  3. High Availability: To achieve high availability, it is common to separate data centers, splitting them into different nodes and partitions. However, this approach often comes at the cost of consistency. Maintaining strict consistency across distributed systems with high availability can be challenging due to network latency and potential data synchronization issues.

Considering all these key points, the CAP Theorem provides valuable insights. The CAP Theorem states that in a distributed system, it is impossible to simultaneously guarantee consistency (C), availability (A), and partition tolerance (P). Therefore, when deciding on databases in a microservices architecture, it is crucial to weigh the trade-offs and make informed decisions based on the specific requirements of the system.

CAP Theorem

CAP Theorem, also known as Brewer's Theorem, is a fundamental concept to consider when choosing a database for microservices. Introduced by Professor Eric Brewer in the late 1990s, and later formally proven by Seth Gilbert and Nancy Lynch, the theorem states that a distributed system cannot simultaneously guarantee Consistency, Availability, and Partition Tolerance.

According to the CAP Theorem, when designing a distributed system, you have to make trade-offs among consistency, availability, and partition tolerance: a system can guarantee at most two of the three. In practice, network partitions cannot be ruled out in any distributed system, so the real choice is between consistency and availability when a partition occurs. The specific trade-off depends on the requirements and characteristics of the system.


Below are the concepts of the CAP Theorem.


Consistency

Consistency refers to the requirement that a distributed system should provide the most recent and up-to-date data to all read requests. When a read request is made, the system must return the latest updated value from the database. If the data cannot be retrieved or is not up-to-date, an error should be thrown. In order to maintain consistency, the system may need to block requests until all replicas are updated with the latest data.
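This behavior can be sketched with a toy in-memory model (the class and method names are hypothetical, not a real database client): a write is acknowledged only after every replica applies it, and a read errors out rather than return data when the replicas disagree.

```python
class ConsistentStore:
    """Toy model of a strictly consistent replicated store (illustrative only)."""

    def __init__(self, replica_count=3):
        self.replicas = [{} for _ in range(replica_count)]

    def write(self, key, value):
        # Block until every replica acknowledges the update.
        for replica in self.replicas:
            replica[key] = value

    def read(self, key):
        values = [replica.get(key) for replica in self.replicas]
        if len(set(values)) != 1:
            # Replicas disagree: a consistent system throws an error
            # instead of returning possibly stale data.
            raise RuntimeError("replicas out of sync")
        return values[0]
```

The cost of this guarantee is visible in the model: every write waits for all replicas, and a single lagging replica makes reads fail.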


Availability

Availability measures the system's ability to respond to requests at any time, ensuring uninterrupted access to the system's services. A highly available distributed system can respond to requests even if some nodes or clusters are down. Fault-tolerant mechanisms are often employed to ensure high availability, enabling the system to accommodate requests even under partial failures.

Partition Tolerance

Partition Tolerance deals with network partitioning, where different parts of the system are located in separate networks or suffer from communication issues. Partition Tolerance refers to the system's ability to continue functioning despite communication failures or network partitions. Even if one or more nodes are isolated or disconnected, the system should still be able to operate and provide its services.

Understanding the implications of the CAP Theorem is important when selecting a database for microservices architecture. It helps inform the decision-making process by guiding you to prioritize the desired properties based on the specific needs of your distributed system. By carefully considering the trade-offs, you can design a resilient and effective architecture that aligns with your system's requirements.

Consistency and Availability at the same time

In a distributed system, achieving both consistency and availability simultaneously can be challenging due to the inherent trade-off between these two properties. However, there are certain techniques and approaches that aim to strike a balance between consistency and availability.

One way to achieve consistency and availability at the same time is by adopting a "strong consistency" model in a distributed system. Strong consistency guarantees that all nodes in the system have a consistent view of the data at all times. This can be achieved through techniques such as distributed transactions, where all updates are performed atomically across multiple nodes.
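A common protocol for such distributed transactions is two-phase commit. The sketch below is a simplified, in-memory illustration (all names are hypothetical): phase one asks every participant to vote on the update, and phase two applies it only if every vote was yes.

```python
class Participant:
    """Toy transaction participant (illustrative only)."""

    def __init__(self):
        self.staged = None
        self.committed = {}
        self.healthy = True

    def prepare(self, key, value):
        # Phase 1: stage the update and vote yes only if we can commit it.
        if not self.healthy:
            return False
        self.staged = (key, value)
        return True

    def commit(self):
        # Phase 2: make the staged update durable.
        key, value = self.staged
        self.committed[key] = value
        self.staged = None

    def abort(self):
        self.staged = None


def two_phase_commit(participants, key, value):
    # The update is applied atomically: everywhere, or nowhere.
    if all(p.prepare(key, value) for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False
```

Note how one unhealthy participant forces every node to abort; this all-or-nothing coordination is exactly what makes strong consistency expensive.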

However, strong consistency often comes at the cost of availability. In a distributed system, ensuring strong consistency may require synchronization and coordination among the nodes, which can introduce delays and potential bottlenecks. If a node becomes unavailable or experiences a failure, it may impact the availability of the entire system.

Alternatively, another approach is to embrace an "eventual consistency" model, which prioritizes availability over strict consistency. Eventual consistency allows temporary inconsistencies in the system, but guarantees that the system will eventually converge to a consistent state. In this model, nodes in the distributed system may have locally inconsistent views of the data, but over time, through background processes or reconciliation mechanisms, these inconsistencies are resolved, and the system achieves eventual consistency.

By adopting an eventual consistency model, a distributed system can provide higher availability since operations can continue even in the presence of network partitions or temporary failures. However, it's important to note that eventual consistency may introduce challenges in scenarios where real-time or immediate consistency is critical, such as in financial transactions or certain business domains.
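One simple reconciliation mechanism is last-write-wins anti-entropy: replicas periodically exchange entries and keep the version with the newest timestamp. The sketch below uses hypothetical names and explicit timestamps; real systems typically use vector clocks or CRDTs to cope with clock skew and concurrent updates.

```python
import time


class EventualReplica:
    """Toy eventually consistent replica (illustrative only)."""

    def __init__(self):
        self.data = {}  # key -> (timestamp, value)

    def write(self, key, value, ts=None):
        self.data[key] = (ts if ts is not None else time.time(), value)

    def read(self, key):
        entry = self.data.get(key)
        return entry[1] if entry else None


def anti_entropy(a, b):
    """Merge two replicas with last-write-wins; both converge on the same state."""
    for key in set(a.data) | set(b.data):
        winner = max(
            (replica.data[key] for replica in (a, b) if key in replica.data),
            key=lambda entry: entry[0],
        )
        a.data[key] = b.data[key] = winner
```

Between writes and the next anti-entropy pass, the two replicas can disagree, which is precisely the "temporary inconsistency" the model permits in exchange for availability.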

Consistency and Partitioning at the same time

When considering both consistency and partitioning, the challenge arises from maintaining data consistency across different partitions. In a distributed system with partitioning, data may be spread across multiple nodes that are not always in perfect sync. Ensuring consistency in such scenarios requires careful coordination and synchronization mechanisms between the nodes. Techniques like distributed transactions, distributed locks, or consensus algorithms (e.g., Paxos or Raft) may be employed to achieve consistency across partitions. However, these techniques often introduce additional complexity and can impact system performance.

Availability and Partitioning at the same time

When considering availability and partitioning together, the goal is to design the system in a way that ensures availability despite the presence of network partitions or failures in individual nodes. By partitioning the data and workload across multiple nodes, the system can continue to operate and serve requests even if some nodes become unreachable or isolated.

Achieving availability in the face of partitioning typically involves replication and redundancy. Each partition may have multiple replicas spread across different nodes or data centers. In the event of a network partition or node failure, the replicas in other partitions can continue to serve requests and maintain system availability. Techniques like quorum-based replication or leader-follower replication can be employed to ensure that a sufficient number of replicas are available to maintain system availability even in the presence of partitions.
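Quorum-based replication rests on the classic rule W + R > N: as long as write and read quorums overlap, every read contacts at least one replica that holds the latest write. A toy sketch with hypothetical names and in-memory replicas:

```python
class QuorumStore:
    """Toy quorum-replicated store (illustrative only)."""

    def __init__(self, n=3, w=2, r=2):
        assert w + r > n, "read and write quorums must overlap"
        self.replicas = [{} for _ in range(n)]
        self.down = set()   # indices of unreachable replicas
        self.w, self.r = w, r
        self.clock = 0      # monotonically increasing version number

    def write(self, key, value):
        self.clock += 1
        acks = 0
        for i, replica in enumerate(self.replicas):
            if i in self.down:
                continue    # a partitioned-off replica misses the write
            replica[key] = (self.clock, value)
            acks += 1
        if acks < self.w:
            raise RuntimeError("write quorum not reached")

    def read(self, key):
        versions = [replica[key]
                    for i, replica in enumerate(self.replicas)
                    if i not in self.down and key in replica]
        if len(versions) < self.r:
            raise RuntimeError("read quorum not reached")
        return max(versions)[1]   # the highest version number wins
```

With N=3, W=2, R=2 the store tolerates one unreachable replica while still returning the latest value; lose two replicas and operations fail rather than serve stale data.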

Scale Database in Microservices

Scaling databases is a crucial aspect of microservices architecture, and one of the key considerations is data partitioning. Data partitioning involves dividing a database into smaller subsets or partitions to distribute the workload and enable efficient scaling. In this article, we will explore three types of data partitioning: horizontal, vertical, and functional.


Horizontal Data Partitioning

Horizontal data partitioning, also known as sharding, involves splitting the data across multiple servers or nodes based on a specific criterion, such as ranges, hash values, or key values. Each partition contains a subset of the data, and different partitions can be stored on different servers. This approach allows for distributing the data and workload across multiple nodes, enabling better scalability and improved performance. It is particularly useful when dealing with large datasets or high write/read loads.
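A hash-based routing function for sharding might look like the following sketch (the function name is illustrative). Using a stable hash such as SHA-256, rather than Python's process-seeded built-in `hash`, keeps the key-to-shard mapping consistent across processes and restarts.

```python
import hashlib


def shard_for(key, num_shards):
    """Route a key to a shard index by hashing it (stable across processes)."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards
```

Note that with plain modulo routing, changing `num_shards` remaps most keys; production systems often use consistent hashing to limit that data movement.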

Vertical Data Partitioning

Vertical data partitioning involves splitting the data based on different attributes or columns within a database table. In this approach, each partition contains a subset of columns, allowing for better optimization and specialization of data storage. For example, frequently accessed or critical columns can be placed in one partition, while less frequently accessed columns can be placed in another. Vertical partitioning can improve performance by reducing the amount of data retrieved and accessed in each operation.
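As a sketch, vertical partitioning of a product row could split frequently read columns from rarely read ones (the column names are illustrative, not from any real schema):

```python
# Hypothetical example: columns that are read together on every product-list
# request go to one partition; bulky, rarely read columns go to another.
HOT_COLUMNS = {"id", "name", "price"}


def vertical_split(row):
    """Split one row's columns into a hot partition and a cold partition."""
    hot = {col: val for col, val in row.items() if col in HOT_COLUMNS}
    cold = {col: val for col, val in row.items() if col not in HOT_COLUMNS}
    return hot, cold
```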

Functional Data Partitioning

Functional data partitioning involves separating the data based on different functional requirements or business domains. Each partition represents a specific functionality or domain within the system. This approach allows for better isolation and autonomy of different microservices, as each microservice can be responsible for its own partition of data. Functional partitioning simplifies development and maintenance efforts, as each microservice can focus on a specific area without impacting others.
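In practice, functional partitioning often reduces to an ownership map: each business domain resolves to the database owned by the corresponding microservice, and access to a domain no service owns fails fast. A minimal sketch with hypothetical service and database names:

```python
# Hypothetical ownership map for an e-commerce system: each microservice
# owns exactly one partition of the data.
SERVICE_DATABASES = {
    "orders": "orders_db",
    "catalog": "catalog_db",
    "customers": "customers_db",
}


def database_for(domain):
    """Resolve a business domain to the database its microservice owns."""
    if domain not in SERVICE_DATABASES:
        raise KeyError(f"no service owns domain {domain!r}")
    return SERVICE_DATABASES[domain]
```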

By employing these data partitioning techniques, microservices can achieve better scalability and handle larger workloads. However, it is important to consider the trade-offs and challenges associated with data partitioning. These include maintaining data consistency across partitions, ensuring proper data distribution, and handling joins or queries that span multiple partitions.


