
Spring Webflux: EventLoop vs Thread per Request Model

Spring Webflux is a reactive web framework that allows developers to build high-performance, scalable applications that can handle large volumes of traffic. One of the key features of Spring Webflux is the choice between two different execution models: the EventLoop model and the Thread per Request model. In this article, we will discuss these two models in detail and provide code examples to illustrate their differences.


EventLoop Model

The EventLoop model is based on the Reactor library and, by default, the Netty server, which handles all incoming requests with a small, fixed pool of event-loop threads (roughly one per CPU core). When a request comes in, it is assigned to an event loop, which processes it and writes the response. While one request is waiting on I/O, the event loop can handle other requests, improving overall performance and scalability.


The following steps show how an EventLoop works. Here, the EventLoop is described on the server side, but it works the same way on the client side, when it sends I/O requests to another server (for example via WebClient).

  1. All requests are received on a unique socket, associated with a channel, known as SocketChannel.

  2. There is always a single EventLoop thread associated with a given set of SocketChannels, so all requests arriving on those sockets/SocketChannels are handed over to the same EventLoop.

  3. Requests on the EventLoop go through a Channel Pipeline, where a number of inbound channel handlers or WebFilters are configured for the required processing (a minimal WebFilter sketch follows this list).

  4. After this, EventLoop executes the application-specific code.

  5. On its completion, the EventLoop passes the response through a number of outbound channel handlers for the configured processing.

  6. In the end, the EventLoop hands the response back to the same SocketChannel/Socket.

  7. Repeat Step 1 to Step 6 in a loop.
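
To make step 3 concrete, here is a minimal sketch of a WebFilter; the filter name and the response header it adds are assumptions made for this illustration, not part of WebFlux itself. The filter is invoked on the event-loop thread for every request, so it must stay non-blocking:

import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

// Runs on the event loop for every request before the controller code (step 3 above).
@Component
public class ThreadTaggingFilter implements WebFilter {

  @Override
  public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
    // Record which event-loop thread is handling this request, e.g. "reactor-http-nio-2".
    exchange.getResponse().getHeaders()
            .add("X-Handled-By", Thread.currentThread().getName());
    // Hand the request on to the next filter / the controller without blocking.
    return chain.filter(exchange);
  }
}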


Let's take a look at an example of using the EventLoop model in a Spring Webflux application:

@GetMapping("/users/{id}")public Mono<User> getUser(@PathVariable Long id) {
  return userRepository.findById(id);
}

In this example, we are defining a GET endpoint that returns a single user by ID. The findById() method is asynchronous and returns a Mono object, which represents a single result or an error signal. When this endpoint is called, the request is handled on an event loop, which subscribes to the Mono and immediately moves on to other requests; once the result arrives, the response is written back, again on the event loop.
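
The userRepository used above is assumed to be a Spring Data reactive repository; a minimal sketch of what that could look like (the User entity itself is a placeholder defined elsewhere) is:

import org.springframework.data.repository.reactive.ReactiveCrudRepository;

// findById(id) returns a Mono<User> immediately; the database work is non-blocking,
// so no event-loop thread waits for it. findAll() likewise returns a Flux<User>.
public interface UserRepository extends ReactiveCrudRepository<User, Long> {
}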


Advantages:

  1. Low resource usage: The EventLoop model handles all incoming requests with a small, fixed pool of event-loop threads, which greatly reduces the resources needed to serve a large number of requests.

  2. Non-blocking I/O: The model is designed to work with non-blocking I/O, which allows it to handle a large number of requests without blocking the thread.

  3. High concurrency: The event loop can handle a large number of requests in a highly concurrent manner, which can improve the performance of I/O-bound applications.

  4. Reactive programming: The model is well-suited to reactive programming paradigms, such as those provided by the Reactor library, which can simplify the development of reactive applications.


Disadvantages:

  1. Unsuitable for CPU-bound tasks: Since the model is designed for I/O-bound tasks, it may not be suitable for CPU-bound tasks that require a lot of processing power.

  2. Blocking operations: Blocking calls can still be made, but they must be moved to a separate thread pool to avoid stalling the event loop (see the sketch after this list).

  3. Limited parallelism: Each event loop is single-threaded, so CPU-heavy work left on it cannot take full advantage of modern multi-core processors and must be offloaded explicitly.

  4. Limited tooling support: The reactive model is newer than the traditional servlet stack, so tooling for debugging and profiling reactive pipelines is less mature.
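
For disadvantage 2, the usual pattern is to wrap the blocking call so that it is subscribed on Reactor's bounded elastic scheduler instead of the event loop. A minimal, self-contained sketch (the blocking call here is simulated with a sleep):

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class BlockingCallExample {

  // Simulated blocking call, standing in for e.g. a legacy JDBC or HTTP client.
  static String fetchReportBlocking() {
    try {
      Thread.sleep(200); // pretend this is blocking I/O
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return "report";
  }

  // Wrap the blocking work so it runs on the bounded elastic pool,
  // keeping the Netty event loop free.
  static Mono<String> fetchReport() {
    return Mono.fromCallable(BlockingCallExample::fetchReportBlocking)
               .subscribeOn(Schedulers.boundedElastic());
  }

  public static void main(String[] args) {
    System.out.println(fetchReport().block()); // prints "report"
  }
}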


Thread per Request Model

The Thread per Request model is based on the traditional servlet model, in which each incoming request is handled by a separate thread. When a request comes in, a new thread is created to handle the request, and that thread is responsible for processing the request and returning a response.
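
Stripped of any framework, the idea can be sketched in plain Java: every accepted connection is handed to its own thread from a pool, and that thread does all the work for the request. This is only a conceptual sketch, not how a servlet container is actually implemented:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestSketch {

  public static void main(String[] args) throws IOException {
    ExecutorService pool = Executors.newFixedThreadPool(200); // one worker per in-flight request
    try (ServerSocket server = new ServerSocket(8080)) {
      while (true) {
        Socket socket = server.accept();    // wait for the next request
        pool.submit(() -> handle(socket));  // the whole request lives on one worker thread
      }
    }
  }

  private static void handle(Socket socket) {
    try (socket) {
      String body = "hello";
      String response = "HTTP/1.1 200 OK\r\nContent-Length: " + body.length() + "\r\n\r\n" + body;
      socket.getOutputStream().write(response.getBytes(StandardCharsets.US_ASCII));
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}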


The following steps are executed while handling a request in this model:

  1. All requests are received on a unique socket, associated with a channel known as SocketChannel.

  2. The request is assigned to a Thread from the Thread Pool.

  3. The request, on its thread, goes through handlers (such as filters and servlets) for the necessary pre-processing.

  4. While executing any blocking code in a controller, the request thread can delegate work to a worker thread or to a reactive WebClient (see the WebClient sketch after this list).

  5. On completion, the worker thread or the WebClient's EventLoop is responsible for writing the response back to the corresponding socket.
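
Step 4 mentions delegating to a reactive web client; a minimal sketch of such a delegation with WebClient follows (the base URL, path, and response type are assumptions for illustration only):

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class DownstreamCall {

  // Assumed downstream service address, for illustration only.
  private final WebClient webClient = WebClient.create("http://localhost:8081");

  // Instead of blocking the request thread on the downstream call, the call is
  // delegated to WebClient, which performs it on its own event loop and
  // completes the Mono when the response arrives.
  public Mono<String> fetchProfile(Long userId) {
    return webClient.get()
                    .uri("/profiles/{id}", userId)
                    .retrieve()
                    .bodyToMono(String.class);
  }
}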


Let's take a look at an example of using the Thread per Request model in a Spring Webflux application:

@GetMapping("/users")public Flux<User> getUsers() 
{
  return userRepository.findAll().subscribeOn(Schedulers.elastic());
}

In this example, we are defining a GET endpoint that returns all users. The findAll() method is asynchronous and returns a Flux object, which represents a stream of results or an error signal. We are using the subscribeOn() method to specify that findAll() should be subscribed on the bounded elastic scheduler, which is designed for offloading blocking I/O work.


When this endpoint is called, the request is accepted as usual, but the findAll() work is shifted onto a worker thread from the bounded elastic pool. That worker thread carries out the (potentially blocking) call, so the thread that accepted the request is not tied up while the I/O completes.
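
A small standalone Reactor sketch (no Spring required) shows what subscribeOn() does here: the whole subscription, and therefore the work of producing the elements, is moved onto a bounded elastic worker thread:

import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SubscribeOnDemo {

  public static void main(String[] args) throws InterruptedException {
    Flux.range(1, 3)
        // Prints a worker thread name such as "boundedElastic-1" for each element,
        // because subscribeOn moves the subscription (and this synchronous source)
        // onto that scheduler.
        .doOnNext(i -> System.out.println(i + " on " + Thread.currentThread().getName()))
        .subscribeOn(Schedulers.boundedElastic())
        .subscribe();

    Thread.sleep(500); // give the asynchronous pipeline time to finish
  }
}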


Advantages:

  1. High performance: The Thread per Request model can take full advantage of modern multi-core processors, which can lead to better performance for CPU-bound tasks.

  2. Independent execution: Each request is handled in its own thread, which means that requests are executed independently of each other. This can improve the overall responsiveness of the application.

  3. Familiarity: The Thread per Request model is more similar to traditional synchronous programming models, which may make it easier for developers to understand and work with.

  4. Tooling support: Since the model is more widely used, there is more tooling support available, which can make it easier to debug and profile the application.


Disadvantages:

  1. High resource usage: Creating a new thread for each request can be resource-intensive, especially for applications that handle a large number of requests.

  2. Scalability: Creating a large number of threads can lead to scalability issues, especially if the application needs to handle a large number of concurrent requests.

  3. Synchronization issues: Since each request is handled in its own thread, there may be synchronization issues that need to be handled carefully.

  4. Complexity: The Thread per Request model can be more complex than the EventLoop model, especially when dealing with complex synchronization or thread management scenarios.


The Difference:

EventLoop: Handles all incoming requests with a small, fixed pool of event-loop threads.
Thread Per Request: Creates a new thread (or takes one from a pool) for each incoming request.

EventLoop: Ideal for I/O-bound applications with low resource usage.
Thread Per Request: Ideal for CPU-bound applications with high performance requirements.

EventLoop: Can handle a large number of requests with very few threads.
Thread Per Request: Needs roughly one thread per concurrent request.

EventLoop: Non-blocking; blocking operations are handed off to a separate scheduler with Project Reactor.
Thread Per Request: Can block the request thread if work is not moved to a separate thread pool.

EventLoop: Works with the Mono and Flux types from the Reactor library.
Thread Per Request: Uses subscribeOn(Schedulers.boundedElastic()) to shift work onto a separate thread pool.

The EventLoop model is more suited for I/O-bound applications that require low resource usage, while the Thread per Request model is better for CPU-bound applications that require high performance under heavy load. Both models have their tradeoffs, and the choice between them depends on the specific requirements of your application.


Which Model to Choose?

When deciding between the EventLoop model and the Thread per Request model, it is important to consider the specific requirements of your application. If your application is I/O-bound and requires low resource usage, the EventLoop model may be the better choice. However, if your application is CPU-bound and requires high performance under heavy load, the Thread per Request model may be more appropriate.


Both models can be used together in a hybrid model, in which requests are handled by the EventLoop model when they are I/O-bound, and by the Thread per Request model when they are CPU-bound. This can provide the best of both worlds, allowing for high performance and low resource usage.
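
A hedged sketch of this hybrid idea within a single WebFlux controller: the I/O part stays non-blocking on the event loop, while a hypothetical CPU-heavy step (scoreUser(), invented for this example) is shifted onto Reactor's parallel scheduler:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

@RestController
public class UserScoreController {

  private final UserRepository userRepository; // the reactive repository assumed earlier

  public UserScoreController(UserRepository userRepository) {
    this.userRepository = userRepository;
  }

  @GetMapping("/users/{id}/score")
  public Mono<Integer> getUserScore(@PathVariable Long id) {
    return userRepository.findById(id)        // non-blocking I/O, handled by the event loop
        .publishOn(Schedulers.parallel())     // move the downstream CPU work off the event loop
        .map(UserScoreController::scoreUser); // CPU-bound step runs on the parallel scheduler
  }

  // Hypothetical CPU-bound computation, used only to illustrate the hand-off.
  private static int scoreUser(User user) {
    int score = 0;
    for (int i = 0; i < 1_000_000; i++) {
      score = (score * 31 + user.hashCode() + i) % 97;
    }
    return score;
  }
}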


Conclusion

Spring Webflux provides developers with the flexibility to choose between two different execution models: the EventLoop model and the Thread per Request model. While both models have their advantages and disadvantages, the choice between them ultimately depends on the specific requirements of your application.
