
DynamoDB: Set Up, Secondary Indexes and Read/Write Capacity.



AWS DynamoDB is the fully managed NoSQL database offering from Amazon. It is an excellent choice for organizations that do not want to manage their own NoSQL databases. DynamoDB provides fast performance and improved scalability, and the biggest advantage is that organizations don’t need to worry about provisioning the database or maintaining it in terms of patching or scaling.



Set Up DynamoDB


Step 1: Sign into AWS using your user account.


Step 2: Navigate to the DynamoDB Management Console. You see a Welcome page that contains interesting information about DynamoDB and what it can do for you. However, you don’t see the actual console at this point. Notice the Getting Started Guide link, which you can use to obtain access to tutorials and introductory videos.


Step 3: Click Create Table.

You see the Create DynamoDB Table page. Amazon assumes that most people have worked with an RDBMS database, so the instructions for working with RDS are fewer and less detailed. Notice the level of detail provided for DynamoDB. The wizard explains each part of the table creation process carefully to reduce the likelihood that you will make mistakes.

Start defining the characteristics of the table you want to create.


Step 4: Type TestDB in the Table Name field. Pick a descriptive name for your table. In this case, remember that your entire database could consist of a single, large table.


Step 5: Type EmployeeID in the Primary Key field and choose Number for its type. When working with a NoSQL database, you must define a unique value as the key in the key-value pair. An employee ID is likely to provide a unique value across all employees. Duplicated keys will cause problems because you can’t uniquely identify a particular piece of data after the key is duplicated. A key must also provide a simple value. When working with DynamoDB, you have a choice of making the key a number, string, or binary value. You can’t use a Boolean value because you would have only a choice between true and false. Likewise, other data types won’t work because they are either too complex or don’t offer enough choices.

The default settings create a NoSQL table that lacks a secondary index, uses a specific provisioned capacity, and sets alarms for occasions when applications exceed the provisioned capacity. Provisioned capacity essentially determines the number of reads and writes that you expect per second. Given that this is a test setup, a setting of 5 reads and 5 writes should work well. You can read more about provisioned capacity in the AWS documentation.


Step 6: Select Add Sort Key.

You see another field added for entering the sort key. Notice that this second field is connected to the first, so the two fields are essentially used together.

Choose a sort key that people will understand well.


Step 7: Type EmployeeName in the sort key field and set its type to String.


Step 8: Click Create. You see the Tables page of the DynamoDB Management Console. Each of the tabs tells you something about the table. The More link on the right of the list of tabs tells you that more tabs are available for you to access.

Click the right-pointing arrow to show the Navigation pane, where you can choose other DynamoDB views (Dashboard and Reserved Capacity).

The table you created appears in the Tables page.
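
If you prefer to script this setup rather than click through the console, the same table can be created with the AWS SDK. Here is a minimal sketch using Python and boto3 (it assumes your AWS credentials and default region are already configured); the table name, key names, and 5 read / 5 write capacity values mirror the walkthrough above.

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Create the TestDB table: EmployeeID (Number) as the partition key,
    # EmployeeName (String) as the sort key, 5 read / 5 write capacity units.
    dynamodb.create_table(
        TableName="TestDB",
        AttributeDefinitions=[
            {"AttributeName": "EmployeeID", "AttributeType": "N"},
            {"AttributeName": "EmployeeName", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "EmployeeID", "KeyType": "HASH"},     # partition key
            {"AttributeName": "EmployeeName", "KeyType": "RANGE"},  # sort key
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )

    # Wait until the table becomes ACTIVE before using it.
    dynamodb.get_waiter("table_exists").wait(TableName="TestDB")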



Read and Write Capacity

Provisioned throughput is based on the read and write capacity configured for the table. Let’s look at each type of capacity in more detail.

  • Read Capacity – This is the number of 4KB items that can be read from the table in one second. Say, for example, you have items that are 20KB in size and need to read one item per second from the table. To calculate the read capacity, divide the item size by 4KB. In our case, 20KB divided by 4KB gives 5, so we need to provision 5 read capacity units for our table. We then need to go a step further, because there are two read consistency models: eventually consistent reads and strongly consistent reads.

    • Eventual consistency means that when data is read shortly after a write, it may not reflect that write. Over a brief period of time, the data becomes consistent and a read returns the correct data.

    • Strong consistency means that when data is read after a write, it always returns the most up-to-date data.


A few more things need to be considered for read capacity. Strongly consistent reads are more expensive than eventually consistent reads. The default model for read capacity is eventual consistency. The next point to note is that one strongly consistent read is equal to two eventually consistent reads.


So in our case above, the 5 read capacity units we calculated cover one strongly consistent read of the 20KB item per second. If eventually consistent reads are sufficient, those same 5 units cover two such reads per second; equivalently, a single eventually consistent read per second needs only about half that capacity (3 units, rounding up).

  • Write Capacity – This is the number of 1KB items that can be written to the table in one second. Say, for example, you have items that are 20KB in size and need to write one item per second to the table. To calculate the write capacity, divide the item size by 1KB. In our case, 20KB divided by 1KB gives 20, so we need to provision 20 write capacity units for our table.


Calculating Reads (RCU)

A read capacity unit represents:

  • one strongly consistent read per second,

  • or two eventually consistent reads per second,

  • for an item up to 4 KB in size.


How to calculate RCUs for strong

  1. Round the item size up to the nearest 4 KB.

  2. Divide by 4.

  3. Multiply by the number of reads per second.


Here's an example:

  • 50 reads at 40KB per item. (40/4) x 50 = 500 RCUs

  • 10 reads at 6KB per item. (8/4) x 10 = 20 RCUs

  • 33 reads at 17KB per item. (20/4) x 33 = 165 RCUs


How to calculate RCUs for eventual

  1. Round the item size up to the nearest 4 KB.

  2. Divide by 4.

  3. Multiply by the number of reads per second.

  4. Divide the result by 2.

  5. Round up to the nearest whole number.


Here's an example:

  • 50 reads at 40KB per item. (40/4) x 50 / 2 = 250 RCUs

  • 11 reads at 9KB per item. (12/4) x 11 / 2 = 17 RCUs

  • 14 reads at 24KB per item. (24/4) x 14 / 2 = 42 RCUs
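
The arithmetic above is mechanical enough to put in a small helper. The snippet below is just an illustrative sketch (the function name is made up, not part of any AWS SDK); it reproduces the strongly consistent and eventually consistent figures from the examples.

    import math

    def read_capacity_units(reads_per_second, item_size_kb, strongly_consistent=True):
        # Round the item size up to the nearest 4 KB, then charge 1 unit per 4 KB.
        units_per_read = math.ceil(item_size_kb / 4)
        total = units_per_read * reads_per_second
        if not strongly_consistent:
            # Eventually consistent reads cost half, rounded up to a whole unit.
            total = math.ceil(total / 2)
        return total

    print(read_capacity_units(50, 40))                             # 500 RCUs (strong)
    print(read_capacity_units(33, 17))                             # 165 RCUs (strong)
    print(read_capacity_units(50, 40, strongly_consistent=False))  # 250 RCUs (eventual)
    print(read_capacity_units(11, 9, strongly_consistent=False))   # 17 RCUs (eventual)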


Calculating Writes (WCU)

A write capacity unit represents:

  • one write per second,

  • for an item up to 1 KB


How to calculate Writes

  1. Round the item size up to the nearest 1 KB.

  2. Multiply by the number of writes per second.


Here's an example:

  • 50 writes at 40KB per item. 40 x 50 = 2000 WCUs

  • 11 writes at 1KB per item. 1 x 11 = 11 WCUs

  • 18 writes at 500 BYTES per item. 1 x 18 = 18 WCUs
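
The write-side calculation can be sketched the same way (again, an illustrative helper rather than an AWS API):

    import math

    def write_capacity_units(writes_per_second, item_size_kb):
        # Round the item size up to the nearest 1 KB; 1 WCU covers one 1 KB write per second.
        return math.ceil(item_size_kb) * writes_per_second

    print(write_capacity_units(50, 40))    # 2000 WCUs
    print(write_capacity_units(11, 1))     # 11 WCUs
    print(write_capacity_units(18, 0.5))   # 18 WCUs (500 bytes rounds up to 1 KB)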


Secondary Indexes

There are a few basics of secondary indexes that are worth knowing:

  • No uniqueness requirement. Recall that every item in a table is uniquely identified by its primary key, so you can't have two Items with the same key in a table. This requirement does not apply to secondary indexes: you may have multiple Items in a secondary index with the exact same key values.

  • Secondary index attributes aren't required. When writing an Item, you must specify the primary key elements. This isn't true for secondary indexes -- you may write an Item that doesn't include the attributes for a secondary index. If you do this, the Item won't be written to that secondary index. This is known as a sparse index and can be a very useful pattern (see the sketch after this list).

  • Index limits per table. You may create 20 global secondary indexes and 5 local secondary indexes per table.
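
To make the sparse index idea concrete, here is a small sketch against the TestDB table from earlier. It imagines a global secondary index keyed on a Department attribute (a made-up attribute; a sketch of creating such an index appears at the end of this article). An item written without Department lands in the base table but never appears in that index.

    import boto3

    table = boto3.resource("dynamodb").Table("TestDB")

    # No Department attribute, so this item is NOT written to the hypothetical
    # Department index -- the index stays sparse.
    table.put_item(Item={"EmployeeID": 1001, "EmployeeName": "Alice"})

    # This item includes Department, so it appears in that index as well.
    table.put_item(Item={"EmployeeID": 1002, "EmployeeName": "Bob", "Department": "Sales"})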


Local and Global Secondary Indexes


Local secondary indexes can be used on a table with a composite primary key to specify an index with the same HASH key but a different RANGE key. This is useful when, for example, we still want to partition our data by Username but want to retrieve Items by a different attribute (Amount).


The features are:

  • Strongly-consistent reads

  • Reuse of base table capacity
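
As a sketch of the Username/Amount scenario just described, the snippet below creates a table partitioned by Username with a local secondary index that re-sorts each partition by Amount. The table name, the OrderId sort key, and the index name are made up for illustration; note that local secondary indexes can only be defined at table creation time.

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="Orders",
        AttributeDefinitions=[
            {"AttributeName": "Username", "AttributeType": "S"},
            {"AttributeName": "OrderId", "AttributeType": "S"},
            {"AttributeName": "Amount", "AttributeType": "N"},
        ],
        KeySchema=[
            {"AttributeName": "Username", "KeyType": "HASH"},
            {"AttributeName": "OrderId", "KeyType": "RANGE"},
        ],
        # Same HASH key as the table, but Amount instead of OrderId as the RANGE key.
        LocalSecondaryIndexes=[
            {
                "IndexName": "UsernameAmountIndex",
                "KeySchema": [
                    {"AttributeName": "Username", "KeyType": "HASH"},
                    {"AttributeName": "Amount", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )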


Global secondary indexes can be used to specify a completely different key structure for a table. If you had a table with a composite primary key, you could have a global secondary index with a simple key. Or, you could add a global secondary index with a completely different HASH key and RANGE key. If your table has a simple primary key, you could add a global secondary index with a composite key structure.


Your options for attribute projections are:

  • KEYS_ONLY: Your index will include only the keys for the index and the table's underlying partition and sort key values, but no other attributes.

  • ALL: The full Item is available in the secondary index with all attributes.

  • INCLUDE: You may choose to name certain attributes that are projected into the secondary index.
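
As an illustration of global secondary indexes and projections, the sketch below adds the Department index mentioned earlier to the TestDB table after the fact, using an INCLUDE projection. The Department and Salary attributes and the index name are made up for this example.

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.update_table(
        TableName="TestDB",
        AttributeDefinitions=[{"AttributeName": "Department", "AttributeType": "S"}],
        GlobalSecondaryIndexUpdates=[
            {
                "Create": {
                    "IndexName": "DepartmentIndex",
                    "KeySchema": [{"AttributeName": "Department", "KeyType": "HASH"}],
                    # Project the index and table keys plus the named Salary attribute.
                    "Projection": {
                        "ProjectionType": "INCLUDE",
                        "NonKeyAttributes": ["Salary"],
                    },
                    "ProvisionedThroughput": {
                        "ReadCapacityUnits": 5,
                        "WriteCapacityUnits": 5,
                    },
                }
            }
        ],
    )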



Resource: Whizlabs, Dynamodbguide

