

Comparing “gRPC + Protocol Buffers (Protobuf)” and “REST + JSON”


Designing your API with gRPC and Protocol Buffers (Protobuf) yields better performance than designing it with REST and JSON.

The main reasons are:

  • gRPC uses HTTP/2 as its transfer protocol, which provides better connection management.

  • Protobuf transports data in a binary format, which makes serialization and deserialization more efficient.


The benchmarking is conducted on my local machine, so the results only demonstrate the two approaches' performance relative to each other. The sole purpose of this blog post is to share my experience with the process and provide a bit of theory on these topics.

As the world moves towards microservices architecture, gRPC's popularity is on the rise. This is because it is said to be more performant than REST, and its drawbacks are largely negligible when it is used to design internal APIs.

So, I wanted to experiment with the implementation of such an API, its interaction with other frameworks, and its performance compared to REST APIs, measured using JMeter.

This blog post presents the results of a benchmark between REST and gRPC and explains the reasoning behind them. We only briefly cover the characteristics of these two API design models; a more detailed explanation of these topics could easily be a blog post of its own.



REST and JSON

The first API design model we are going to look at is Representational State Transfer (REST). REST follows the principles that HTTP and the World Wide Web are based on; it is essentially a set of conventions built on top of what the HTTP protocol allows.

For example, REST suggests that HTTP methods (GET, POST, PUT, PATCH…) are used accordingly to interact with resources, and this brings the advantage of a predictable design once you know the resources you are dealing with. It is important for an API to be intuitive and predictable. If your API sends and receives “Person” objects, then your resource would be “person”.

A sample REST API looks like this:

POST /persons -> To create new "Person"
PUT /persons/{personId} -> To update a "Person"
PATCH /persons/{personId} -> To partially update a "Person"
DELETE /persons/{personId} -> To delete a "Person"
GET /persons -> To get all "Person"s
GET /persons/{personId} ->  To get a specific Person


Although in REST data can be transferred in many formats, JSON is the most commonly used one. The main reasons are that it is human-readable and generally performs well. Its format is quite simple: it mainly consists of key-value pairs.

In JSON format, a person object can be represented as:

{
  "name": "Recep",
  "age": 25,
  "city": "Istanbul"
}
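As a quick illustration, serializing and parsing such an object is a one-liner in most languages. Here is a minimal sketch using Python's standard json module:

```python
import json

# A person object as a plain dictionary.
person = {"name": "Recep", "age": 25, "city": "Istanbul"}

# Serialize to a JSON string (human-readable text).
encoded = json.dumps(person)
print(encoded)  # {"name": "Recep", "age": 25, "city": "Istanbul"}

# Parse it back; the field types (str vs int) are only known at runtime.
decoded = json.loads(encoded)
print(decoded["age"] + 1)  # 26
```

This readability and ubiquity is exactly why JSON is the default choice for REST APIs.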

gRPC and Protocol Buffers

Remote Procedure Call (RPC)

Before getting into gRPC, let’s first talk a little bit about Remote Procedure Call (RPC). RPC is the way a piece of software calls a procedure (call it a function or a method) of another piece of software as if it were a local procedure.

There is an insightful comparison between REST APIs and RPC APIs in terms of their usability: “Learning an RPC API is like learning a library while learning a REST API is like learning a database schema”; compared to a typical programming library, there is much less detail to learn in a database schema.

RPC implementations typically transmit data in a binary format, which makes data transportation more efficient.

A sample RPC API looks like this:

POST /createPerson -> To create new "Person" + {personId}
PUT /updatePerson -> To update a "Person" + {personId}
PATCH /patchPerson -> To partially update a "Person" + {personId}
DELETE /removePerson -> To delete a "Person" + {personId}
GET /listAllPersons -> To get all "Person"s
GET /loadPerson ->  To get a specific Person + {personId}


gRPC was developed by Google on top of the RPC model.

gRPC uses HTTP/2 under the covers as its transfer protocol and makes use of the benefits of HTTP/2.

gRPC uses Protocol Buffers (Protobuf), Google’s mature open-source mechanism for serializing structured data (although it can be used with other data formats such as JSON). Since HTTP/2 relies on the transferred data being binary-encoded, Protobuf plays a very important role for gRPC.

gRPC supports code generation for many of the most popular programming languages. We can define services using Protobuf and generate client-side and server-side code from that definition. In a microservices setup we may have one service written in Java (the server) and another in Python (the client); with gRPC’s code generation, we define the service once in Protobuf and easily generate the stubs for both Java and Python. We do not have to worry about the contract or the communication between the two applications: we can just use the generated code and call the methods on the server side as if they were defined in the client’s codebase. This feature makes gRPC a great option for microservices architectures, which are usually polyglot.
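For illustration, such a service definition might look like the following sketch (the service, method, and message names here are hypothetical, invented for this example rather than taken from a real project):

```proto
syntax = "proto3";

// Hypothetical service definition for the Person example.
service PersonService {
  rpc CreatePerson (Person) returns (PersonId);
  rpc GetPerson (PersonId) returns (Person);
}

message Person {
  string name = 1;
  int32 age = 2;
  string city = 3;
}

message PersonId {
  int64 id = 1;
}
```

Running protoc with the gRPC plugins for Java and Python over this one file would generate the client stubs and server skeletons for both languages.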

gRPC does not expose HTTP to the API designer or the API user. gRPC has made all the decisions about mapping the RPC layer onto HTTP, and the gRPC-generated stubs and skeletons hide HTTP from the client and server as well. Nobody has to worry about how the RPC concepts are mapped to HTTP; they just have to learn gRPC (I am not quite sure whether that is a good thing, though).

With gRPC, you can describe your API in terms of methods or procedures. However, if a lot of methods are added over time, the result can be a complex and confusing API because developers must understand what each method does individually.

Of course, you can use binary payloads and HTTP/2 without gRPC, but that requires more work and more mastery in other technologies.

The way a client uses a gRPC API is by following these three steps:

  1. Decide which procedure to call

  2. Calculate the parameter values to use (if any)

  3. Use a code-generated stub to make the call, passing the parameter values

gRPC generates code using protoc, Protocol Buffers’ modular code generator; support for each language is implemented as a separate plugin. Code generation has been part of the gRPC project from its inception.

With the help of a gRPC plugin for protoc, you create server and client stubs that work as your local interface to the remote procedures. Under the hood, gRPC transports requests made on the client’s stub to the server’s stub; on the server side, the request is read, the related method is invoked, and the result is sent back along a similar route.

Protocol Buffers (Protobuf)

Google focused on simplicity and efficiency when designing Protocol Buffers. Protobuf has its own syntax and a compiler that generates code in the supported languages from the Protobuf files (.proto) you define.

Protocol Buffers offer typed fields where JSON does not. This simply tells programmers the type of the field they are reading from the data; in JSON we sometimes have to do extra checks to decide whether a field is a number or not.

Protocol Buffers represent data in a binary format while JSON is a text-based format. This means compromising on human-readability for the sake of better encoding/decoding performance. Since the payload takes much less space in binary format, this also helps with bandwidth.

JSON (9 Bytes)


Protobuf (2 Bytes)

0x08 0x2a
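Those two Protobuf bytes can be reproduced by hand. The wire format prefixes each field with a tag byte combining the field number and wire type, and encodes small integers as varints. A minimal sketch in pure Python (standard library only, not the real Protobuf library):

```python
def encode_varint_field(field_number: int, value: int) -> bytes:
    """Encode a non-negative integer field in Protobuf wire format."""
    tag = (field_number << 3) | 0  # wire type 0 = varint
    out = bytes([tag])
    # Varint encoding: 7 bits per byte, high bit set on all but the last byte.
    while value > 0x7F:
        out += bytes([(value & 0x7F) | 0x80])
        value >>= 7
    out += bytes([value])
    return out

# Field number 1 holding the value 42 encodes to exactly two bytes.
print(encode_varint_field(1, 42).hex(" "))  # 08 2a
```

So 0x08 is the tag (field 1, varint) and 0x2a is the value 42 itself; no field names or quotes travel over the wire, which is where the size savings come from.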

In Proto format, a person object can be represented as:

message Person {
  required string name = 1;
  optional int32 age = 2;
  optional string city = 3;
}
The above is the format of the message, not an actual instance of a Person object. Instances are built in the language you choose: for example, the Protobuf compiler generates a Person class from the Person.proto file above, and you can create an instance with the help of that class’s builders.


Disclaimer: These benchmarking tests ignore throughput comparisons, for now, and focus on the difference in latency.

1. Setup


Tools used: JMeter, Spring Boot, Maven, Java, gRPC, Protocol Buffers, JSON


How the components work with each other in this benchmarking.

To clearly see the effects of using Protocol Buffers, I created a really big object in both Proto and Java form: LargeObject (generated from LargeObject.proto) and LargeObjectPOJO (a Java object used for JSON serialization/deserialization). I tested the APIs by fetching instances of this object at different sizes, which can be set for each endpoint with the count parameter.


To be able to focus directly on the performances of data transportation and serialization/deserialization, the benchmarking setup has the following constraints:

  • No Database Connection

  • No Business Logic

  • No Logging

To remove the effects of generating the LargeObjectResponse (the Proto object) and LargeObjectPOJO (the Java object) from the measurements, I first run the “SetUp Thread Group” and let the servers generate the objects and cache them.

This way I can focus only on the performance aspect of both gRPC and REST approaches during data transportation.

2. Scenario

Test Scenarios can be examined under two categories: SetUp and Actual Tests.

SetUp Thread Group

The SetUp Thread Group’s main purpose is to trigger all endpoints individually so that the data the other test scenarios will ask for is generated, letting the servers cache the responses in advance.

Actual Test Thread Groups

  • The test plan’s scenarios start from 1 user and ramp up to 100 users over 10 seconds (10 new users are added every second).

  • The same test plan is run for both REST and gRPC.

  • There are 6 different thread groups in total, 3 for REST and 3 for gRPC.

  • Each protocol is tested against 1, 100, and 1000 LargeObjects to measure the performance differences with respect to input size.

  • Thread Groups are executed sequentially (1 Thread Group runs at a time).

3. Result

With higher loads of data, it becomes even clearer that gRPC and Protobuf really outperform REST and JSON.


To conclude, we can clearly see that gRPC with Protobuf beats REST with JSON. gRPC brings two powerful technologies together to increase performance: HTTP/2 and Protocol Buffers. HTTP/2 offers more performant data transfer models than HTTP/1.1 by relying on the transferred data being in binary format, which is enabled by using Protocol Buffers. In addition, using Protocol Buffers rather than JSON (or any other text-based data format) plays an important role, as it is more performant to encode and decode binary data.

The ability to generate client-side and server-side code for many supported languages without any trouble is one of gRPC’s game-changing features. As the world moves towards microservices architecture, where there are many services written in different languages (a polyglot system), stable and efficient code generation is a great advantage that we should not overlook.

If we aim to design an internal API with a defined set of features, performance is a major concern, and we accept the loss of human-readability in the data format, then we should consider working with gRPC and Protobuf.

However, if we are planning to build a public-facing API, want to be more flexible and intuitive, and care about the human-readability of the data being transferred, then a REST API could be the better choice.

Source: Medium

The Tech Platform


