Async vs Sync Benchmark (.NET)

One of my favorite interview questions is “What do the words async and await mean to you?”, because it opens up an opportunity for an interesting discussion with the interviewee… or it doesn’t, because they turn out to be shaky on the topic. In my opinion, it is vitally important to understand why we use this technique. I have a feeling that many developers simply rely on “it is a best practice” and use asynchronous methods blindly.

This article shows the difference between asynchronous and synchronous methods in practice.

Tools

  • .NET 5 Web API application (test target)

  • Azure SQL Database

  • Azure App Service (hosts the application)

  • Azure Application Insights (to gather metrics)

  • Locust framework (to simulate user load)



Configuration

Experiment schema


I will run the benchmark in the following way. There are two independent Locust instances running on two machines*. Each Locust instance simulates a user that does the following:

  • a user from Locust host 1 hits the synchronous endpoint of App Service 1, gets the response, and stays idle for 0.5–1 seconds (the exact delay is random). This repeats until the end of the experiment

  • a user from Locust host 2 behaves in exactly the same way, with one difference: it hits the asynchronous endpoint of App Service 2 (a sketch of this user behavior is shown right after the list).
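
For illustration, here is a minimal Locust user sketch that matches the behavior described above. Only the 0.5–1 second idle time comes from the setup described in this article; the class name, the task name, and the /sync route are my assumptions.

from locust import HttpUser, between, task

class SyncEndpointUser(HttpUser):
    # Idle for 0.5-1 s (picked at random) between requests
    wait_time = between(0.5, 1)

    @task
    def call_endpoint(self):
        # "/sync" is an assumed route name; the second Locust host
        # would call the asynchronous endpoint of App Service 2 instead
        self.client.get("/sync")

The target host, the number of users, and the spawn rate are supplied when the test is started, via the Locust web UI or the command line.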

Under the hood, each App Service connects to its own database and executes a SELECT query that takes five seconds and returns a few rows of data. See the controller code below for reference. I use Dapper to call the database. Note that the asynchronous endpoint also calls the database asynchronously (QueryAsync<T>).

App Services code


It is worth adding that I deploy the same code to both App Services.
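
Since the controller code appears as an image in the original post, here is a minimal sketch of what such a controller could look like. The route names, the Product type, the Products table, the connection string name, and the WAITFOR DELAY trick used to make the SELECT take five seconds are my assumptions; the essential part is the contrast between the blocking Query<T> call and the awaited QueryAsync<T> call.

using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("[controller]")]
public class BenchmarkController : ControllerBase
{
    // WAITFOR DELAY simulates a SELECT that takes five seconds (assumed query and table)
    private const string SlowQuery =
        "WAITFOR DELAY '00:00:05'; SELECT TOP 10 * FROM Products";

    private readonly string _connectionString;

    public BenchmarkController(IConfiguration configuration)
    {
        _connectionString = configuration.GetConnectionString("Sql");
    }

    // Synchronous endpoint: the request thread is blocked for the whole five seconds
    [HttpGet("sync")]
    public IEnumerable<Product> GetSync()
    {
        using var connection = new SqlConnection(_connectionString);
        return connection.Query<Product>(SlowQuery);
    }

    // Asynchronous endpoint: the thread goes back to the pool while the query runs
    [HttpGet("async")]
    public async Task<IEnumerable<Product>> GetAsync()
    {
        using var connection = new SqlConnection(_connectionString);
        return await connection.QueryAsync<Product>(SlowQuery);
    }
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

The synchronous version holds a thread-pool thread for the full duration of the query, while the asynchronous version releases it, which is exactly the difference this benchmark is meant to expose.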

During the test, the number of users grows evenly up to the target number (Number of Users). The speed of growth is controlled by the Spawn Rate parameter (the number of new users added per second): the higher the value, the faster users are added. The spawn rate is set to 10 users/s for all experiments.

All experiments are limited to 15 minutes. You can find the machine configuration details in the Technical details section of the article.

Metrics

  • requests per minute: the number of requests that the application actually processed and returned a status code for

  • thread count: the number of threads the App Service consumes

  • median response time (ms)

The red lines refer to the asynchronous endpoint and the blue lines to the synchronous one. That’s it for the theory; let’s start.