
What is a Distributed Cache? What are Distributed Cache Strategies?

A distributed cache is a system that pools together the random-access memory (RAM) of multiple networked computers into a single in-memory data store, used as a data cache to provide fast access to data. While a traditional cache lives on one physical server or hardware component, a distributed cache can grow beyond the memory limits of a single machine by linking together multiple computers (referred to as a distributed architecture or a distributed cluster) for larger capacity and increased processing power.


Distributed caches are especially useful in environments with high data volume and load. The distributed architecture allows incremental expansion and scaling by adding more computers to the cluster, so the cache can grow in step with the data.



Distributed Cache Strategies

There are different caching strategies, each serving specific use cases: Cache Aside, Read-Through, Write-Through, and Write-Back.


Cache Aside

This is the most common caching strategy. In this approach, the cache sits alongside the database and tries to reduce hits on it as much as possible. Data is lazy-loaded into the cache.

When the user sends a request for particular data, the system first looks for it in the cache. If it is present, it is simply returned from the cache. If not, the data is fetched from the database, the cache is updated, and the data is returned to the user.

This strategy works best with read-heavy workloads and data that is not updated very frequently, for instance, user profile data in a portal: name, account number, and so on.

In this strategy, writes go directly to the database, which means the cache and the database can become inconsistent.

To mitigate this, data in the cache is given a TTL (Time to Live). Once that period expires, the entry is invalidated and evicted from the cache.
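As a rough illustration, here is a minimal cache-aside read path in Python. The `cache` dictionary, `db_get_user` function, and TTL value are hypothetical stand-ins for a real distributed cache and database, not a specific library.

```python
import time

# Hypothetical stand-ins: a TTL-aware in-process cache and a database lookup.
cache = {}                 # key -> (value, expires_at)
TTL_SECONDS = 300          # entries are invalidated after 5 minutes

def db_get_user(user_id):
    # Placeholder for a real database query.
    return {"id": user_id, "name": "Alice", "account_number": "12345"}

def get_user(user_id):
    key = f"user:{user_id}"
    entry = cache.get(key)
    # Cache hit: return the value if it has not expired yet.
    if entry and entry[1] > time.time():
        return entry[0]
    # Cache miss (or expired entry): load from the database, then update the cache.
    value = db_get_user(user_id)
    cache[key] = (value, time.time() + TTL_SECONDS)
    return value
```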

Read-Through

This strategy is quite similar to Cache Aside, with a subtle difference: in Cache Aside the application itself has to fetch the data from the database when it is not found in the cache, whereas in Read-Through the cache always stays consistent with the database, and the cache library takes on the responsibility of maintaining that consistency with the backend.

In this strategy too, information is lazy-loaded into the cache, only when the user requests it.

So the first time a piece of information is requested, it results in a cache miss, and the backend has to update the cache while returning the response to the user.

Developers can, however, pre-load the cache with the information that is expected to be requested most by users.
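A minimal sketch of the read-through idea: the application only talks to the cache, and the cache object itself calls a loader on a miss. The class, loader, and pre-load method names here are illustrative, not from any particular caching library.

```python
class ReadThroughCache:
    """Cache that loads missing entries from the backend by itself."""
    def __init__(self, loader):
        self._loader = loader        # function that fetches a value from the database
        self._store = {}

    def get(self, key):
        if key not in self._store:                 # cache miss
            self._store[key] = self._loader(key)   # the cache loads from the backend
        return self._store[key]

    def preload(self, keys):
        # Optional warm-up for data expected to be requested most often.
        for key in keys:
            self._store[key] = self._loader(key)

def load_product_from_db(product_id):
    return {"id": product_id, "price": 9.99}       # placeholder DB query

products = ReadThroughCache(load_product_from_db)
products.preload(["p1", "p2"])                     # optional pre-loading
print(products.get("p3"))                          # first request: miss, loaded by the cache
```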

Write-Through

In this strategy, every piece of information written to the database goes through the cache: before the data is written to the DB, the cache is updated with it.

This maintains high consistency between the cache and the database, though it adds a little latency to write operations, since the data must additionally be updated in the cache. This works well for write-heavy workloads such as massively multiplayer online games.

This strategy is usually used together with other caching strategies to achieve optimized performance.
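The sketch below shows the write-through ordering with hypothetical `cache` and `db_write` stand-ins: the cache is updated as part of the same write path as the database.

```python
cache = {}

def db_write(key, value):
    # Placeholder for a real database write.
    pass

def write_through(key, value):
    # Update the cache first, then persist to the database in the same operation,
    # so the cache never lags behind the database.
    cache[key] = value
    db_write(key, value)

def read(key):
    # Reads are served from the cache, which stays consistent with the database.
    return cache.get(key)
```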

Write-Back

This approach helps optimize costs significantly.

In the Write-Back caching strategy, data is written directly to the cache instead of the database. The cache then, after some delay determined by the business logic, writes the data to the database.

If the application has a heavy volume of writes, developers can reduce the frequency of database writes to cut down the load and the associated costs.

A risk in this approach is that if the cache fails before the DB is updated, the data may be lost. Again, this strategy is used together with other caching strategies to make the most of them.
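A minimal write-back sketch: writes go only to the cache and are queued, and a separate flush step batches them to the database later. The flush trigger (a batch-size threshold) and the helper names are assumptions for illustration.

```python
cache = {}
dirty_keys = set()          # keys written to the cache but not yet persisted
FLUSH_THRESHOLD = 100       # assumed batch size before flushing to the database

def db_bulk_write(items):
    # Placeholder for a batched database write.
    pass

def write_back(key, value):
    cache[key] = value       # the write goes to the cache only
    dirty_keys.add(key)
    if len(dirty_keys) >= FLUSH_THRESHOLD:
        flush()

def flush():
    # Persist all pending writes in one batch; if the cache node dies
    # before this runs, the un-flushed writes are lost.
    db_bulk_write({k: cache[k] for k in dirty_keys})
    dirty_keys.clear()
```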



Uses of Distributed Cache

Distributed caches have several use cases stated below:


1. Database Caching

A cache layer in front of a database saves frequently accessed data in memory to cut down latency and unnecessary load on the database, reducing the likelihood of the database becoming a bottleneck.


2. Storing User Sessions

User sessions are stored in the cache so that user state is not lost if any of the application instances goes down.

If an instance goes down, a new instance spins up, reads the user state from the cache, and continues the session without the user noticing anything amiss.
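As a rough sketch of session storage in a shared cache, the session data below is keyed by a session ID with an expiry, so any instance can pick it up. The `session_store` dict and helper names stand in for a real distributed cache and are illustrative.

```python
import json
import time

session_store = {}          # stand-in for a shared distributed cache
SESSION_TTL = 1800          # assumed: sessions expire after 30 minutes

def save_session(session_id, state):
    # Any application instance can write the user's state to the shared cache.
    session_store[f"session:{session_id}"] = (json.dumps(state), time.time() + SESSION_TTL)

def load_session(session_id):
    # A freshly started instance can restore the state and continue the session.
    entry = session_store.get(f"session:{session_id}")
    if entry and entry[1] > time.time():
        return json.loads(entry[0])
    return None
```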


3. Cross-Module Communication & Shared Storage

In-memory distributed caching is also used for message communication between the different micro-services running in conjunction with each other.

It saves the shared data which is commonly accessed by all the services. It acts as a backbone for micro-service communication. Distributed caching in specific use cases is often used as a NoSQL datastore.


4. In-memory Data Stream Processing & Analytics

As opposed to the traditional way of storing data in batches and then running analytics on it, in-memory data stream processing involves processing data and running analytics on it as it streams in, in real time.

This is helpful in many situations, such as anomaly detection, fraud monitoring, real-time online gaming stats, real-time recommendations, payments processing, etc.



Advantages:

  • No single point of failure – A distributed cache runs across many nodes, so the failure of a single node does not result in a complete failure of the cache.

  • Data Consistency – The cache tracks the modification timestamps of cached files and ensures they are not changed while a job is executing. Using a hashing algorithm, the cache engine can always determine the node on which a particular key-value pair resides, so there is a single, authoritative state for the cache cluster and it stays consistent.

  • Store complex data – It distributes simple, read-only text files, and it also stores complex types such as jars and archives. These archives are then un-archived at the slave node.


Disadvantages:

a) Object serialization – A distributed cache must serialize objects, and the serialization mechanism has two main problems:

  • Very bulky – Serialization stores the complete class name, culture, and assembly details, and it also stores references to other instances held in member variables. All this makes the serialized output very bulky.

  • Very slow – Serialization uses reflection to inspect type information at runtime, and reflection is a very slow process compared to pre-compiled code.


