Amazon DynamoDB Developer Guide
Table of Contents
What Is Amazon DynamoDB?
    High Availability and Durability
    Getting Started with DynamoDB
    How It Works
        Core Components
        The DynamoDB API
        Naming Rules and Data Types
        Read Consistency
        Read/Write Capacity Mode
        Partitions and Data Distribution
    From SQL to NoSQL
        SQL or NoSQL?
        Accessing the Database
        Creating a Table
        Getting Information About a Table
        Writing Data To a Table
        Reading Data From a Table
        Managing Indexes
        Modifying Data in a Table
        Deleting Data from a Table
        Removing a Table
Setting Up DynamoDB
    Setting Up DynamoDB Local (Downloadable Version)
        DynamoDB (Downloadable Version) on Your Computer
        DynamoDB (Downloadable Version) and Apache Maven
        DynamoDB (Downloadable Version) and Docker
        Usage Notes
    Setting Up DynamoDB (Web Service)
        Signing Up for AWS
        Getting an AWS Access Key
        Configuring Your Credentials
Accessing DynamoDB
    Using the Console
        Working with User Preferences
    Using the CLI
        Downloading and Configuring the AWS CLI
        Using the AWS CLI with DynamoDB
        Using the AWS CLI with Downloadable DynamoDB
    Using the API
    IP Address Ranges
Getting Started with DynamoDB
    Basic Concepts
    Prerequisites
    Step 1: Create a Table
    Step 2: Write Data
    Step 3: Read Data
    Step 4: Update Data
    Step 5: Query Data
    Step 6: Create a Global Secondary Index
    Step 7: Query the Global Secondary Index
    Step 8: (Optional) Clean Up
    Next Steps
Getting Started with DynamoDB SDK
    Java and DynamoDB
What Is Amazon DynamoDB?
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable
performance with seamless scalability. DynamoDB lets you offload the administrative burdens
of operating and scaling a distributed database, so that you don't have to worry about hardware
provisioning, setup and configuration, replication, software patching, or cluster scaling. Also, DynamoDB
offers encryption at rest, which eliminates the operational burden and complexity involved in protecting
sensitive data. For more information, see DynamoDB Encryption at Rest (p. 706).
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and
serve any level of request traffic. You can scale up or scale down your tables' throughput capacity
without downtime or performance degradation, and use the AWS Management Console to monitor
resource utilization and performance metrics.
Amazon DynamoDB provides on-demand backup capability. It allows you to create full backups of your
tables for long-term retention and archival for regulatory compliance needs. For more information, see
On-Demand Backup and Restore for DynamoDB (p. 596).
You can create on-demand backups as well as enable point-in-time recovery for your Amazon DynamoDB
tables. Point-in-time recovery helps protect your Amazon DynamoDB tables from accidental write or
delete operations. With point-in-time recovery, you can restore a table to any point in time during the
last 35 days. For more information, see Point-in-Time Recovery: How It Works (p. 608).
DynamoDB allows you to delete expired items from tables automatically to help you reduce storage
usage and the cost of storing data that is no longer relevant. For more information, see Time To
Live (p. 406).
We recommend that you begin by reading the following sections:
• Amazon DynamoDB: How It Works (p. 2)—To learn essential DynamoDB concepts.
• Setting Up DynamoDB (p. 43)—To learn how to set up DynamoDB (Downloadable Version or Web
Service).
• Accessing DynamoDB (p. 51)—To learn how to access DynamoDB using the console, CLI, or API.
To get started quickly with DynamoDB, see Getting Started with DynamoDB SDK (p. 74).
To quickly find recommendations for maximizing performance and minimizing throughput costs, see
Best Practices for DynamoDB (p. 806). To learn how to tag DynamoDB resources, see Tagging for
DynamoDB (p. 357).
For best practices, how-to guides, and tools, be sure to check the DynamoDB Developer Resources page:
http://aws.amazon.com/dynamodb/developer-resources/.
You can use AWS Database Migration Service to migrate data from a relational database or MongoDB
to an Amazon DynamoDB table. For more information, see AWS Database Migration Service User
Guide. To learn how to use MongoDB as a migration source, see Using MongoDB as a Source for AWS
Database Migration Service. To learn how to use DynamoDB as a migration target, see Using an Amazon
DynamoDB Database as a Target for AWS Database Migration Service.
After you read this introduction, try working through the Creating Tables and Loading Sample
Data (p. 323) section, which walks you through the process of creating sample tables, uploading data,
and performing some basic database operations.
For language-specific tutorials with sample code, see Getting Started with DynamoDB SDK (p. 74).
Amazon DynamoDB: How It Works
Topics
• DynamoDB Core Components (p. 2)
• The DynamoDB API (p. 9)
• Naming Rules and Data Types (p. 11)
• Read Consistency (p. 15)
• Read/Write Capacity Mode (p. 16)
• Partitions and Data Distribution (p. 20)
There are limits in DynamoDB. For more information, see Limits in DynamoDB (p. 873).
DynamoDB Core Components
Topics
• Tables, Items, and Attributes (p. 2)
• Primary Key (p. 5)
• Secondary Indexes (p. 5)
• DynamoDB Streams (p. 8)
Tables, Items, and Attributes
The following are the basic DynamoDB components:
• Tables – Similar to other database systems, DynamoDB stores data in tables. A table is a collection of
data. For example, you could have a table called People to store personal contact information about
friends, family, or anyone else of interest. You could also have a Cars table to store information about
vehicles that people drive.
• Items – Each table contains zero or more items. An item is a group of attributes that is uniquely
identifiable among all of the other items. In a People table, each item represents a person. For a Cars
table, each item represents one vehicle. Items in DynamoDB are similar in many ways to rows, records,
or tuples in other database systems. In DynamoDB, there is no limit to the number of items you can
store in a table.
• Attributes – Each item is composed of one or more attributes. An attribute is a fundamental data
element, something that does not need to be broken down any further. For example, an item in a
People table contains attributes called PersonID, LastName, FirstName, and so on. For a Department
table, an item might have attributes such as DepartmentID, Name, Manager, and so on. Attributes in
DynamoDB are similar in many ways to fields or columns in other database systems.
The following diagram shows a table named People with some example items and attributes.
Note the following about the People table:
• Each item in the table has a unique identifier, or primary key, that distinguishes the item from all of
the others in the table. In the People table, the primary key consists of one attribute (PersonID).
• Other than the primary key, the People table is schemaless, which means that neither the attributes
nor their data types need to be defined beforehand. Each item can have its own distinct attributes.
• Most of the attributes are scalar, which means that they can have only one value. Strings and numbers
are common examples of scalars.
• Some of the items have a nested attribute (Address). DynamoDB supports nested attributes up to 32
levels deep.
The following is another example table named Music that you could use to keep track of your music
collection.
Note the following about the Music table:
• The primary key for Music consists of two attributes (Artist and SongTitle). Each item in the table must
have these two attributes. The combination of Artist and SongTitle distinguishes each item in the table
from all of the others.
• Other than the primary key, the Music table is schemaless, which means that neither the attributes nor
their data types need to be defined beforehand. Each item can have its own distinct attributes.
• One of the items has a nested attribute (PromotionInfo), which contains other nested attributes.
DynamoDB supports nested attributes up to 32 levels deep.
For more information, see Working with Tables in DynamoDB (p. 333).
Primary Key
When you create a table, in addition to the table name, you must specify the primary key of the table.
The primary key uniquely identifies each item in the table, so that no two items can have the same key.
DynamoDB supports two different kinds of primary keys:
• Partition key – A simple primary key, composed of one attribute known as the partition key.
DynamoDB uses the partition key's value as input to an internal hash function. The output from the
hash function determines the partition (physical storage internal to DynamoDB) in which the item will
be stored.
In a table that has only a partition key, no two items can have the same partition key value.
The People table described in Tables, Items, and Attributes (p. 2) is an example of a table with a
simple primary key (PersonID). You can access any item in the People table directly by providing the
PersonID value for that item.
• Partition key and sort key – Referred to as a composite primary key, this type of key is composed of
two attributes. The first attribute is the partition key, and the second attribute is the sort key.
DynamoDB uses the partition key value as input to an internal hash function. The output from the
hash function determines the partition (physical storage internal to DynamoDB) in which the item will
be stored. All items with the same partition key value are stored together, in sorted order by sort key
value.
In a table that has a partition key and a sort key, it's possible for two items to have the same partition
key value. However, those two items must have different sort key values.
The Music table described in Tables, Items, and Attributes (p. 2) is an example of a table with a
composite primary key (Artist and SongTitle). You can access any item in the Music table directly, if you
provide the Artist and SongTitle values for that item.
A composite primary key gives you additional flexibility when querying data. For example, if you
provide only the value for Artist, DynamoDB retrieves all of the songs by that artist. To retrieve only a
subset of songs by a particular artist, you can provide a value for Artist along with a range of values for
SongTitle.
Note
The partition key of an item is also known as its hash attribute. The term hash attribute derives
from the use of an internal hash function in DynamoDB that evenly distributes data items across
partitions, based on their partition key values.
The sort key of an item is also known as its range attribute. The term range attribute derives
from the way DynamoDB stores items with the same partition key physically close together, in
sorted order by the sort key value.
Each primary key attribute must be a scalar (meaning that it can hold only a single value). The only data
types allowed for primary key attributes are string, number, or binary. There are no such restrictions for
other, non-key attributes.
Secondary Indexes
You can create one or more secondary indexes on a table. A secondary index lets you query the data
in the table using an alternate key, in addition to queries against the primary key. DynamoDB doesn't
require that you use indexes, but they give your applications more flexibility when querying your data.
After you create a secondary index on a table, you can read data from the index in much the same way as
you do from the table.
DynamoDB supports two kinds of secondary indexes:
• Global secondary index – An index with a partition key and sort key that can be different from those
on the table.
• Local secondary index – An index that has the same partition key as the table, but a different sort key.
Each table in DynamoDB can have up to 20 global secondary indexes (default limit) and 5 local
secondary indexes.
In the example Music table shown previously, you can query data items by Artist (partition key) or by
Artist and SongTitle (partition key and sort key). What if you also wanted to query the data by Genre and
AlbumTitle? To do this, you could create an index on Genre and AlbumTitle, and then query the index in
much the same way as you'd query the Music table.
The following diagram shows the example Music table, with a new index called GenreAlbumTitle. In the
index, Genre is the partition key and AlbumTitle is the sort key.
Note the following about the GenreAlbumTitle index:
• Every index belongs to a table, which is called the base table for the index. In the preceding example,
Music is the base table for the GenreAlbumTitle index.
• DynamoDB maintains indexes automatically. When you add, update, or delete an item in the base
table, DynamoDB adds, updates, or deletes the corresponding item in any indexes that belong to that
table.
• When you create an index, you specify which attributes will be copied, or projected, from the base
table to the index. At a minimum, DynamoDB projects the key attributes from the base table into the
index. This is the case with GenreAlbumTitle, where only the key attributes from the Music table
are projected into the index.
You can query the GenreAlbumTitle index to find all albums of a particular genre (for example, all Rock
albums). You can also query the index to find all albums within a particular genre that have certain
album titles (for example, all Country albums with titles that start with the letter H).
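For example, the following Query request retrieves all Country albums with titles that start with the
letter H. This is a sketch in the same parameter format as the other examples in this guide; IndexName,
KeyConditionExpression, and ExpressionAttributeValues are standard Query parameters, and the index is
assumed to already exist:
{
    TableName: "Music",
    IndexName: "GenreAlbumTitle",
    KeyConditionExpression: "Genre = :genre AND begins_with(AlbumTitle, :prefix)",
    ExpressionAttributeValues: {
        ":genre": "Country",
        ":prefix": "H"
    }
}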
For more information, see Improving Data Access with Secondary Indexes (p. 493).
DynamoDB Streams
DynamoDB Streams is an optional feature that captures data modification events in DynamoDB tables.
The data about these events appear in the stream in near real time, and in the order that the events
occurred.
Each event is represented by a stream record. If you enable a stream on a table, DynamoDB Streams
writes a stream record whenever one of the following events occurs:
• A new item is added to the table: The stream captures an image of the entire item, including all of its
attributes.
• An item is updated: The stream captures the "before" and "after" image of any attributes that were
modified in the item.
• An item is deleted from the table: The stream captures an image of the entire item before it was
deleted.
Each stream record also contains the name of the table, the event timestamp, and other metadata.
Stream records have a lifetime of 24 hours; after that, they are automatically removed from the stream.
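For illustration, a stream record for a newly added Music item might look similar to the following. This
is an abbreviated sketch; the sequence number is a placeholder, and a real record carries additional
metadata fields:
{
    "eventName": "INSERT",
    "eventSource": "aws:dynamodb",
    "dynamodb": {
        "Keys": {
            "Artist": { "S": "The Acme Band" },
            "SongTitle": { "S": "Look Out, World" }
        },
        "NewImage": {
            "Artist": { "S": "The Acme Band" },
            "SongTitle": { "S": "Look Out, World" },
            "AlbumTitle": { "S": "The Buck Starts Here" }
        },
        "SequenceNumber": "111",
        "StreamViewType": "NEW_AND_OLD_IMAGES"
    }
}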
You can use DynamoDB Streams together with AWS Lambda to create a trigger—code that executes
automatically whenever an event of interest appears in a stream. For example, consider a Customers
table that contains customer information for a company. Suppose that you want to send a "welcome"
email to each new customer. You could enable a stream on that table, and then associate the stream
with a Lambda function. The Lambda function would execute whenever a new stream record appears,
but only process new items added to the Customers table. For any item that has an EmailAddress
attribute, the Lambda function would invoke Amazon Simple Email Service (Amazon SES) to send an
email to that address.
Note
In this example, the last customer, Craig Roe, will not receive an email because he doesn't have
an EmailAddress.
In addition to triggers, DynamoDB Streams enables powerful solutions such as data replication within
and across AWS Regions, materialized views of data in DynamoDB tables, data analysis using Kinesis
materialized views, and much more.
For more information, see Capturing Table Activity with DynamoDB Streams (p. 566).
The DynamoDB API
Topics
• Control Plane (p. 10)
• Data Plane (p. 10)
• DynamoDB Streams (p. 11)
• Transactions (p. 11)
Control Plane
Control plane operations let you create and manage DynamoDB tables. They also let you work with
indexes, streams, and other objects that are dependent on tables.
• CreateTable – Creates a new table. Optionally, you can create one or more secondary indexes, and
enable DynamoDB Streams for the table.
• DescribeTable – Returns information about a table, such as its primary key schema, throughput
settings, index information, and so on.
• ListTables – Returns the names of all of your tables in a list.
• UpdateTable – Modifies the settings of a table or its indexes, creates or removes indexes on a
table, or modifies DynamoDB Streams settings for a table.
• DeleteTable – Removes a table and all of its dependent objects from DynamoDB.
Data Plane
Data plane operations let you perform create, read, update, and delete (also called CRUD) actions on
data in a table. Some of the data plane operations also let you read data from a secondary index.
Creating Data
• PutItem – Writes a single item to a table. You must specify the primary key attributes, but you don't
have to specify other attributes.
• BatchWriteItem – Writes up to 25 items to a table. This is more efficient than calling PutItem
multiple times because your application only needs a single network round trip to write the items. You
can also use BatchWriteItem for deleting multiple items from one or more tables.
Reading Data
• GetItem – Retrieves a single item from a table. You must specify the primary key for the item that you
want. You can retrieve the entire item, or just a subset of its attributes.
• BatchGetItem – Retrieves up to 100 items from one or more tables. This is more efficient than calling
GetItem multiple times because your application only needs a single network round trip to read the
items.
• Query – Retrieves all items that have a specific partition key. You must specify the partition key value.
You can retrieve entire items, or just a subset of their attributes. Optionally, you can apply a condition
to the sort key values, so that you only retrieve a subset of the data that has the same partition key.
You can use this operation on a table, provided that the table has both a partition key and a sort key.
You can also use this operation on an index, provided that the index has both a partition key and a sort
key. (A sample Query request appears following this list.)
• Scan – Retrieves all items in the specified table or index. You can retrieve entire items, or just a subset
of their attributes. Optionally, you can apply a filtering condition to return only the values that you are
interested in and discard the rest.
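The following is a sample Query request for the Music table, in the same parameter format as the
other examples in this guide. It retrieves the items for one artist whose song titles begin with a given
prefix; the values mirror the Music examples used elsewhere in this section:
{
    TableName: "Music",
    KeyConditionExpression: "Artist = :artist AND begins_with(SongTitle, :prefix)",
    ExpressionAttributeValues: {
        ":artist": "No One You Know",
        ":prefix": "Call"
    }
}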
Updating Data
• UpdateItem – Modifies one or more attributes in an item. You must specify the primary key for the
item that you want to modify. You can add new attributes and modify or remove existing attributes.
You can also perform conditional updates, so that the update is only successful when a user-defined
condition is met. Optionally, you can implement an atomic counter, which increments or decrements a
numeric attribute without interfering with other write requests.
Deleting Data
• DeleteItem – Deletes a single item from a table. You must specify the primary key for the item that
you want to delete.
• BatchWriteItem – Deletes up to 25 items from one or more tables. This is more efficient than calling
DeleteItem multiple times because your application only needs a single network round trip to delete
the items. You can also use BatchWriteItem for adding multiple items to one or more tables.
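For example, the following BatchWriteItem request deletes one item and writes another in a single
call. This is a minimal sketch; RequestItems, DeleteRequest, and PutRequest are the standard
BatchWriteItem parameters, and the item values mirror the Music examples used elsewhere in this guide:
{
    RequestItems: {
        "Music": [
            {
                DeleteRequest: {
                    Key: {
                        "Artist": "No One You Know",
                        "SongTitle": "My Dog Spot"
                    }
                }
            },
            {
                PutRequest: {
                    Item: {
                        "Artist": "The Acme Band",
                        "SongTitle": "Look Out, World"
                    }
                }
            }
        ]
    }
}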
DynamoDB Streams
DynamoDB Streams operations let you enable or disable a stream on a table, and allow access to the
data modification records contained in a stream.
• ListStreams – Returns a list of all your streams, or just the stream for a specific table.
• DescribeStream – Returns information about a stream, such as its Amazon Resource Name (ARN)
and where your application can begin reading the first few stream records.
• GetShardIterator – Returns a shard iterator, which is a data structure that your application uses to
retrieve the records from the stream.
• GetRecords – Retrieves one or more stream records, using a given shard iterator.
Transactions
Transactions provide atomicity, consistency, isolation, and durability (ACID), enabling you to maintain
data correctness in your applications more easily.
• TransactWriteItems – A batch operation that allows Put, Update, and Delete operations on
multiple items, both within and across tables, with a guaranteed all-or-nothing result.
• TransactGetItems – A batch operation that allows Get operations to retrieve multiple items from
one or more tables.
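For example, the following TransactWriteItems request performs a conditional Put and a Delete as a
single all-or-nothing operation. This is a minimal sketch using the standard TransactItems parameter;
the condition and item values are illustrative:
{
    TransactItems: [
        {
            Put: {
                TableName: "Music",
                Item: {
                    "Artist": "The Acme Band",
                    "SongTitle": "Still In Love"
                },
                ConditionExpression: "attribute_not_exists(Artist)" // fail if the item already exists
            }
        },
        {
            Delete: {
                TableName: "Music",
                Key: {
                    "Artist": "No One You Know",
                    "SongTitle": "My Dog Spot"
                }
            }
        }
    ]
}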
Naming Rules and Data Types
Topics
• Naming Rules (p. 11)
• Data Types (p. 12)
Naming Rules
Tables, attributes, and other objects in DynamoDB must have names. Names should be meaningful and
concise—for example, names such as Products, Books, and Authors are self-explanatory.
The following are the naming rules for DynamoDB:
• All names must be encoded using UTF-8 and are case sensitive.
• Table names and index names must be between 3 and 255 characters long, and can contain only the
following characters:
• a-z
• A-Z
• 0-9
• _ (underscore)
• - (dash)
• . (dot)
• Attribute names must be between 1 and 255 characters long.
DynamoDB has a list of reserved words and special characters. Although DynamoDB allows you to use
these reserved words and special characters for names, we recommend that you avoid doing so, because
you have to define placeholder variables whenever you use these names in an expression. For more
information, see Expression Attribute Names (p. 387).
Data Types
DynamoDB supports many different data types for attributes within a table. They can be categorized as
follows:
• Scalar Types – A scalar type can represent exactly one value. The scalar types are number, string,
binary, Boolean, and null.
• Document Types – A document type can represent a complex structure with nested attributes—such
as you would find in a JSON document. The document types are list and map.
• Set Types – A set type can represent multiple scalar values. The set types are string set, number set,
and binary set.
When you create a table or a secondary index, you must specify the names and data types of each
primary key attribute (partition key and sort key). Furthermore, each primary key attribute must be
defined as type string, number, or binary.
DynamoDB is a NoSQL database and is schemaless. This means that, other than the primary key
attributes, you don't have to define any attributes or data types when you create tables. By comparison,
relational databases require you to define the names and data types of each column when you create a
table.
The following are descriptions of each data type, along with examples in JSON format.
Scalar Types
The scalar types are number, string, binary, Boolean, and null.
Number
Numbers can be positive, negative, or zero, and can have up to 38 digits of precision. Exceeding this
results in an exception.
In DynamoDB, numbers are represented as variable length. Leading and trailing zeroes are trimmed.
All numbers are sent across the network to DynamoDB as strings, to maximize compatibility across
languages and libraries. However, DynamoDB treats them as number type attributes for mathematical
operations.
Note
If number precision is important, you should pass numbers to DynamoDB using strings that you
convert from the number type.
You can use the number data type to represent a date or a timestamp. One way to do this is by using
epoch time—the number of seconds since 00:00:00 UTC on 1 January 1970. For example, the epoch time
1437136300 represents 12:31:40 PM UTC on 17 July 2015.
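For example, an item attribute that stores this timestamp as a number might look like the following
(the attribute name CreatedAt is illustrative, not a DynamoDB requirement):
"CreatedAt": 1437136300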
String
Strings are Unicode with UTF-8 binary encoding. The length of a string must be greater than zero and is
constrained by the maximum DynamoDB item size limit of 400 KB.
The following additional constraints apply to primary key attributes that are defined as type string:
• For a simple primary key, the maximum length of the first attribute value (the partition key) is 2048
bytes.
• For a composite primary key, the maximum length of the second attribute value (the sort key) is 1024
bytes.
DynamoDB collates and compares strings using the bytes of the underlying UTF-8 string encoding. For
example, "a" (0x61) is greater than "A" (0x41), and "¿" (0xC2BF) is greater than "z" (0x7A).
You can use the string data type to represent a date or a timestamp. One way to do this is by using ISO
8601 strings, as shown in these examples:
• 2016-02-15
• 2015-12-21T17:42:34Z
• 20150311T122706Z
Binary
Binary type attributes can store any binary data, such as compressed text, encrypted data, or images.
Whenever DynamoDB compares binary values, it treats each byte of the binary data as unsigned.
The length of a binary attribute must be greater than zero, and is constrained by the maximum
DynamoDB item size limit of 400 KB.
If you define a primary key attribute as a binary type attribute, the following additional constraints
apply:
• For a simple primary key, the maximum length of the first attribute value (the partition key) is 2048
bytes.
• For a composite primary key, the maximum length of the second attribute value (the sort key) is 1024
bytes.
Your applications must encode binary values in base64-encoded format before sending them to
DynamoDB. Upon receipt of these values, DynamoDB decodes the data into an unsigned byte array and
uses that as the length of the binary attribute.
The following is an example of a binary attribute, using base64-encoded text:
dGhpcyB0ZXh0IGlzIGJhc2U2NC1lbmNvZGVk
Boolean
A Boolean type attribute can store either true or false.
Null
Null represents an attribute with an unknown or undefined state.
Document Types
The document types are list and map. These data types can be nested within each other, to represent
complex data structures up to 32 levels deep.
There is no limit on the number of values in a list or a map, as long as the item containing the values fits
within the DynamoDB item size limit (400 KB).
An attribute value cannot be an empty String or empty Set (String Set, Number Set, or Binary Set).
However, empty Lists and Maps are allowed. For more information, see Attributes (p. 878).
List
A list type attribute can store an ordered collection of values. Lists are enclosed in square brackets:
[ ... ]
A list is similar to a JSON array. There are no restrictions on the data types that can be stored in a list
element, and the elements in a list do not have to be of the same type.
The following example shows a list that contains two strings and a number:
FavoriteThings: ["Cookies", "Coffee", 3.14159]
Note
DynamoDB lets you work with individual elements within lists, even if those elements are deeply
nested. For more information, see Using Expressions in DynamoDB (p. 383).
Map
A map type attribute can store an unordered collection of name-value pairs. Maps are enclosed in curly
braces: { ... }
A map is similar to a JSON object. There are no restrictions on the data types that can be stored in a map
element, and the elements in a map do not have to be of the same type.
Maps are ideal for storing JSON documents in DynamoDB. The following example shows a map that
contains a string, a number, and a nested list that contains another map.
{
Day: "Monday",
UnreadEmails: 42,
ItemsOnMyDesk: [
"Coffee Cup",
"Telephone",
{
Pens: { Quantity : 3},
Pencils: { Quantity : 2},
Erasers: { Quantity : 1}
}
]
}
Note
DynamoDB lets you work with individual elements within maps, even if those elements are
deeply nested. For more information, see Using Expressions in DynamoDB (p. 383).
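For example, a GetItem request could use a document path in its ProjectionExpression to retrieve only
the quantity of pens from the map shown above. This is a sketch; the table name and key are
hypothetical, and it assumes the attributes above belong to a single item:
{
    TableName: "MyTable", // hypothetical table that stores the item shown above
    Key: {
        "Day": "Monday" // assumes Day is the partition key; illustrative only
    },
    ProjectionExpression: "ItemsOnMyDesk[2].Pens.Quantity"
}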
Sets
DynamoDB supports types that represent sets of Number, String, or Binary values. All of the elements
within a set must be of the same type. For example, an attribute of type Number Set can only contain
numbers; String Set can only contain strings; and so on.
There is no limit on the number of values in a set, as long as the item containing the values fits within
the DynamoDB item size limit (400 KB).
Each value within a set must be unique. The order of the values within a set is not preserved; therefore,
your applications must not rely on any particular order of elements within the set. Finally, DynamoDB
does not support empty sets.
The following example shows a string set, a number set, and a binary set:
["Black", "Green", "Red"]
[42.2, -19, 7.5, 3.14]
["U3Vubnk=", "UmFpbnk=", "U25vd3k="]
Read Consistency
Amazon DynamoDB is available in multiple AWS regions around the world. Each region is independent
and isolated from other AWS regions. For example, if you have a table called People in the us-east-2
region and another table named People in the us-west-2 region, these are considered two entirely
separate tables. For a list of all the AWS regions in which DynamoDB is available, see AWS Regions and
Endpoints in the Amazon Web Services General Reference.
Every AWS region consists of multiple distinct locations called Availability Zones. Each Availability Zone
is isolated from failures in other Availability Zones, and provides inexpensive, low-latency network
connectivity to other Availability Zones in the same region. This allows rapid replication of your data
among multiple Availability Zones in a region.
When your application writes data to a DynamoDB table and receives an HTTP 200 response (OK), the
write has occurred and is durable. The data is eventually consistent across all storage locations, usually
within one second or less.
Eventually Consistent Reads
When you read data from a DynamoDB table, the response might not reflect the results of a recently
completed write operation. The response might include some stale data. If you repeat your read request
after a short time, the response should return the latest data.
Strongly Consistent Reads
When you request a strongly consistent read, DynamoDB returns a response with the most up-to-date
data, reflecting the updates from all prior write operations that were successful. A strongly consistent
read might not be available if there is a network delay or outage. Strongly consistent reads are not
supported on global secondary indexes (GSIs).
Note
DynamoDB uses eventually consistent reads, unless you specify otherwise. Read operations
(such as GetItem, Query, and Scan) provide a ConsistentRead parameter. If you set this
parameter to true, DynamoDB uses strongly consistent reads during the operation.
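For example, a GetItem request for the Music table could ask for a strongly consistent read as follows
(same parameter format as the other examples in this guide):
{
    TableName: "Music",
    Key: {
        "Artist": "No One You Know",
        "SongTitle": "Call Me Today"
    },
    ConsistentRead: true
}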
Read/Write Capacity Mode
Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables:
• On-demand
• Provisioned (default, free-tier eligible)
The read/write capacity mode controls how you are charged for read and write throughput and how you
manage capacity. You can set the read/write capacity mode when creating a table or you can change it
later.
Global secondary indexes inherit the read/write capacity mode from the base table. For more
information, see Considerations When Changing Read/Write Capacity Mode (p. 338).
Topics
• On-Demand Mode (p. 16)
• Provisioned Mode (p. 18)
On-Demand Mode
Amazon DynamoDB on-demand is a flexible billing option capable of serving thousands of requests per
second without capacity planning. DynamoDB on-demand offers pay-per-request pricing for read and
write requests so that you pay only for what you use.
When you choose on-demand mode, DynamoDB instantly accommodates your workloads as they
ramp up or down to any previously reached traffic level. If a workload’s traffic level hits a new peak,
DynamoDB adapts rapidly to accommodate the workload. Tables that use on-demand mode deliver
the same single-digit millisecond latency, service-level agreement (SLA) commitment, and security that
DynamoDB already offers. You can choose on-demand for both new and existing tables and you can
continue using the existing DynamoDB APIs without changing code.
The request rate is limited only by the DynamoDB throughput default table limits, which can be raised
upon request. For more information, see Throughput Default Limits (p. 874).
To get started with on-demand, you can create or update a table to use on-demand mode. For more
information, see Basic Operations for Tables (p. 333).
You can switch between read/write capacity modes once every 24 hours. For issues you should consider
when switching read/write capacity modes, see Considerations When Changing Read/Write Capacity
Mode (p. 338).
Note
On-demand is currently not supported by AWS Data Pipeline, the DynamoDB import/export
tool, and AWS Glue.
Topics
• Read Request Units and Write Request Units (p. 17)
• Peak Traffic and Scaling Properties (p. 17)
• Initial Throughput for On-Demand Capacity Mode (p. 18)
• Table Behavior while Switching Read/Write Capacity Mode (p. 18)
Read Request Units and Write Request Units
For on-demand mode tables, you don't need to specify how much read and write throughput you
expect your application to perform. DynamoDB charges you for the reads and writes that your
application performs on your tables in terms of read request units and write request units.
• One read request unit represents one strongly consistent read request, or two eventually consistent
read requests, for an item up to 4 KB in size. Transactional read requests require 2 read request units to
perform one read for items up to 4 KB. If you need to read an item that is larger than 4 KB, DynamoDB
needs additional read request units. The total number of read request units required depends on the
item size, and whether you want an eventually consistent or strongly consistent read. For example, if
your item size is 8 KB, you require 2 read request units to sustain one strongly consistent read, 1 read
request unit if you choose eventually consistent reads, or 4 read request units for a transactional read
request.
Note
To learn more about DynamoDB read consistency models, see Read Consistency (p. 15).
• One write request unit represents one write for an item up to 1 KB in size. If you need to write an item
that is larger than 1 KB, DynamoDB needs to consume additional write request units. Transactional
write requests require 2 write request units to perform one write for items up to 1 KB. The total
number of write request units required depends on the item size. For example, if your item size is
2 KB, you require 2 write request units to sustain one write request or 4 write request units for a
transactional write request.
For a list of AWS Regions where DynamoDB on-demand is available, see Amazon DynamoDB Pricing.
Peak Traffic and Scaling Properties
If you need more than double your previous peak on a table, DynamoDB automatically allocates more
capacity as your traffic volume increases to help ensure that your workload does not experience
throttling. However, throttling can occur if you exceed double your previous peak within 30 minutes.
For example, if your application’s traffic pattern varies between 25,000 and 50,000 strongly consistent
reads per second where 50,000 reads per second is the previously reached traffic peak, DynamoDB
recommends spacing your traffic growth over at least 30 minutes before driving more than 100,000
reads per second.
Initial Throughput for On-Demand Capacity Mode
If you recently switched an existing table to on-demand capacity mode for the first time, or if you
created a new table with on-demand capacity mode enabled, the table has the following previous peak
settings, even though it has not yet served traffic using on-demand capacity mode:
• Newly created table with on-demand capacity mode: The previous peak is 2,000 write request units
or 6,000 read request units. You can drive up to double the previous peak immediately, which enables
newly created on-demand tables to serve up to 4,000 write request units or 12,000 read request units,
or any linear combination of the two.
• Existing table switched to on-demand capacity mode: The previous peak is the maximum previous
write capacity units and read capacity units provisioned for the table or the settings for a newly
created table with on-demand capacity mode, whichever is higher.
Provisioned Mode
If you choose provisioned mode, you specify the number of reads and writes per second that you require
for your application. You can use auto scaling to adjust your table’s provisioned capacity automatically
in response to traffic changes. This helps you govern your DynamoDB use to stay at or below a defined
request rate in order to obtain cost predictability.
For provisioned mode tables, you specify throughput capacity in terms of read capacity units (RCUs)
and write capacity units (WCUs):
• One read capacity unit represents one strongly consistent read per second, or two eventually
consistent reads per second, for an item up to 4 KB in size. Transactional read requests require two
read capacity units to perform one read per second for items up to 4 KB. If you need to read an item
that is larger than 4 KB, DynamoDB must consume additional read capacity units. The total number of
read capacity units required depends on the item size, and whether you want an eventually consistent
or strongly consistent read. For example, if your item size is 8 KB, you require 2 read capacity units
to sustain one strongly consistent read per second, 1 read capacity unit if you choose eventually
consistent reads, or 4 read capacity units for a transactional read request. For more information, see
Capacity Unit Consumption for Reads (p. 340).
Note
To learn more about DynamoDB read consistency models, see Read Consistency (p. 15).
• One write capacity unit represents one write per second for an item up to 1 KB in size. If you need
to write an item that is larger than 1 KB, DynamoDB must consume additional write capacity units.
Transactional write requests require 2 write capacity units to perform one write per second for items
up to 1 KB. The total number of write capacity units required depends on the item size. For example,
if your item size is 2 KB, you require 2 write capacity units to sustain one write request per second
or 4 write capacity units for a transactional write request. For more information, see Capacity Unit
Consumption for Writes (p. 340).
Important
When calling DescribeTable on an on-demand table, read capacity units and write capacity
units are set to 0.
If your application reads or writes larger items (up to the DynamoDB maximum item size of 400 KB), it
will consume more capacity units.
For example, suppose that you create a provisioned table with 6 read capacity units and 6 write capacity
units. With these settings, your application could do the following:
• Perform strongly consistent reads of up to 24 KB per second (4 KB × 6 read capacity units).
• Perform eventually consistent reads of up to 48 KB per second (twice as much read throughput).
• Perform transactional read requests of up to 12 KB per second.
• Write up to 6 KB per second (1 KB × 6 write capacity units).
• Perform transactional write requests of up to 3 KB per second.
For more information, see Managing Throughput Settings on Provisioned Tables (p. 339).
Provisioned throughput is the maximum amount of capacity that an application can consume from a
table or index. If your application exceeds your provisioned throughput capacity on a table or index, it is
subject to request throttling.
Throttling prevents your application from consuming too many capacity units.
When a request is throttled, it fails with an HTTP 400 code (Bad Request) and a
ProvisionedThroughputExceededException. The AWS SDKs have built-in support for retrying
throttled requests (see Error Retries and Exponential Backoff (p. 224)), so you do not need to write this
logic yourself.
You can use the AWS Management Console to monitor your provisioned and actual throughput, and to
modify your throughput settings if necessary.
DynamoDB Auto Scaling
With DynamoDB auto scaling, a table or a global secondary index can increase its provisioned read and
write capacity to handle sudden increases in traffic, without request throttling. When the workload
decreases, DynamoDB auto scaling can decrease the throughput so that you don't pay for unused
provisioned capacity.
Note
If you use the AWS Management Console to create a table or a global secondary index,
DynamoDB auto scaling is enabled by default.
You can manage auto scaling settings at any time by using the console, the AWS CLI, or one of
the AWS SDKs.
For more information, see Managing Throughput Capacity Automatically with DynamoDB Auto
Scaling (p. 343).
Reserved Capacity
As a DynamoDB customer, you can purchase reserved capacity in advance, as described at Amazon
DynamoDB Pricing. With reserved capacity, you pay a one-time upfront fee and commit to a minimum
usage level over a period of time. By reserving your read and write capacity units ahead of time, you
realize significant cost savings compared to on-demand provisioned throughput settings.
Note
Reserved capacity is not available in on-demand mode.
To manage reserved capacity, go to the DynamoDB console and choose Reserved Capacity.
Note
You can prevent users from viewing or purchasing reserved capacity, while still allowing them
to access the rest of the console. For more information, see "Grant Permissions to Prevent
Purchasing of Reserved Capacity Offerings" in Identity and Access Management in Amazon
DynamoDB (p. 714).
Partitions and Data Distribution
Amazon DynamoDB stores data in partitions. A partition is an allocation of storage for a table, backed
by solid state drives (SSDs) and automatically replicated across multiple Availability Zones within an
AWS Region. Partition management is handled entirely by DynamoDB; you never have to manage
partitions yourself.
When you create a table, the initial status of the table is CREATING. During this phase, DynamoDB
allocates sufficient partitions to the table so that it can handle your provisioned throughput
requirements. You can begin writing and reading table data after the table status changes to ACTIVE.
DynamoDB allocates additional partitions to a table in the following situations:
• If you increase the table's provisioned throughput settings beyond what the existing partitions can
support.
• If an existing partition fills to capacity and more storage space is required.
Partition management occurs automatically in the background and is transparent to your applications.
Your table remains available throughout and fully supports your provisioned throughput requirements.
Global secondary indexes in DynamoDB are also composed of partitions. The data in a GSI is stored
separately from the data in its base table, but index partitions behave in much the same way as table
partitions.
Data Distribution: Partition Key
If your table has a simple primary key (partition key only), DynamoDB stores and retrieves each item
based on its partition key value.
To write an item to the table, DynamoDB uses the value of the partition key as input to an internal hash
function. The output value from the hash function determines the partition in which the item will be
stored.
To read an item from the table, you must specify the partition key value for the item. DynamoDB uses
this value as input to its hash function, yielding the partition in which the item can be found.
The following diagram shows a table named Pets, which spans multiple partitions. The table's primary
key is AnimalType (only this key attribute is shown). DynamoDB uses its hash function to determine
where to store a new item, in this case based on the hash value of the string Dog. Note that the items are
not stored in sorted order. Each item's location is determined by the hash value of its partition key.
Note
DynamoDB is optimized for uniform distribution of items across a table's partitions, no matter
how many partitions there may be. We recommend that you choose a partition key that can
have a large number of distinct values relative to the number of items in the table.
Data Distribution: Partition Key and Sort Key
To write an item to the table, DynamoDB calculates the hash value of the partition key to determine
which partition should contain the item. In that partition, there could be several items with the same
partition key value, so DynamoDB stores the item among the others with the same partition key, in
ascending order by sort key.
To read an item from the table, you must specify its partition key value and sort key value. DynamoDB
calculates the partition key's hash value, yielding the partition in which the item can be found.
You can read multiple items from the table in a single operation (Query), provided that the items you
want have the same partition key value. DynamoDB returns all of the items with that partition key value.
Optionally, you can apply a condition to the sort key so that it returns only the items within a certain
range of values.
Suppose that the Pets table has a composite primary key consisting of AnimalType (partition key) and
Name (sort key). The following diagram shows DynamoDB writing an item with a partition key value of
Dog and a sort key value of Fido.
To read that same item from the Pets table, DynamoDB calculates the hash value of Dog, yielding the
partition in which these items are stored. DynamoDB then scans the sort key attribute values until it
finds Fido.
To read all of the items with an AnimalType of Dog, you can issue a Query operation without specifying
a sort key condition. By default, the items are returned in the order that they are stored (that is, in
ascending order by sort key). Optionally, you can request descending order instead.
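For example, the following Query request returns all of the Dog items in descending order by Name.
This is a minimal sketch; ScanIndexForward is the Query parameter that controls the sort direction:
{
    TableName: "Pets",
    KeyConditionExpression: "AnimalType = :atype",
    ExpressionAttributeValues: {
        ":atype": "Dog"
    },
    ScanIndexForward: false // false = descending order by sort key
}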
To query only some of the Dog items, you can apply a condition to the sort key (for example, only the
Dog items where Name begins with a letter that is within the range A through K).
Note
In a DynamoDB table, there is no upper limit on the number of distinct sort key values per
partition key value. If you need to store many billions of Dog items in the Pets table,
DynamoDB allocates enough storage to handle this requirement automatically.
From SQL to NoSQL
If you are an application developer with experience using a relational database management system
(RDBMS) and SQL, then as you begin working with Amazon DynamoDB, you will encounter many
similarities, but also many things that are different. This section describes common database tasks,
comparing and contrasting SQL statements with their equivalent DynamoDB operations.
NoSQL is a term used to describe non-relational database systems that are highly available, scalable,
and optimized for high performance. Instead of the relational model, NoSQL databases (like DynamoDB)
use alternate models for data management, such as key-value pairs or document storage. For more
information, see http://aws.amazon.com/nosql.
Note
The SQL examples in this section are compatible with the MySQL relational database
management system.
The DynamoDB examples in this section show the name of the DynamoDB operation, along with
the parameters for that operation in JSON format. For code examples that use these operations,
see Getting Started with DynamoDB SDK (p. 74).
Topics
• SQL or NoSQL? (p. 23)
• Accessing the Database (p. 24)
• Creating a Table (p. 26)
• Getting Information About a Table (p. 28)
• Writing Data To a Table (p. 29)
• Reading Data From a Table (p. 31)
• Managing Indexes (p. 35)
• Modifying Data in a Table (p. 38)
• Deleting Data from a Table (p. 40)
• Removing a Table (p. 41)
SQL or NoSQL?
Today's applications have more demanding requirements than ever before. For example, an online game
might start out with just a few users and a very small amount of data. However, if the game becomes
successful, it can easily outstrip the resources of the underlying database management system. It is not
uncommon for web-based applications to have hundreds, thousands, or millions of concurrent users,
with terabytes or more of new data generated per day. Databases for such applications must handle tens
(or hundreds) of thousands of reads and writes per second.
Amazon DynamoDB is well-suited for these kinds of workloads. As a developer, you can start with a small
amount of provisioned throughput and gradually increase it as your application becomes more popular.
DynamoDB scales seamlessly to handle very large amounts of data and very large numbers of users.
The following table shows some high-level differences between a relational database management
system (RDBMS) and DynamoDB:
Characteristic: Data Access
RDBMS: SQL (Structured Query Language) is the standard for storing and retrieving data. Relational
databases offer a rich set of tools for simplifying the development of database-driven applications, but
all of these tools use SQL.
DynamoDB: You can use the AWS Management Console or the AWS CLI to work with DynamoDB and
perform ad hoc tasks. Applications can leverage the AWS software development kits (SDKs) to work with
DynamoDB using object-based, document-centric, or low-level interfaces.
The following diagram shows client interaction with a relational database, and with DynamoDB.
The following table has more details about client interaction tasks:
Task: Tools for Accessing the Database
RDBMS: Most relational databases provide a command line interface (CLI), so that you can enter ad hoc
SQL statements and see the results immediately.
DynamoDB: In most cases, you write application code. You can also use the AWS Management Console
or the AWS Command Line Interface (AWS CLI) to send ad hoc requests to DynamoDB and view the
results.

Task: Sending a Request
RDBMS: The application issues a SQL statement for every database operation that it wants to perform.
Upon receipt of the SQL statement, the RDBMS checks its syntax, creates a plan for performing the
operation, and then executes the plan.
DynamoDB: The application sends HTTP(S) requests to DynamoDB. The requests contain the name of
the DynamoDB operation to perform, along with parameters. DynamoDB executes the request
immediately.

Task: Receiving a Response
RDBMS: The RDBMS returns the results from the SQL statement. If there is an error, the RDBMS returns
an error status and message.
DynamoDB: DynamoDB returns an HTTP(S) response containing the results of the operation. If there is
an error, DynamoDB returns an HTTP error status and message(s).
Creating a Table
Tables are the fundamental data structures in relational databases and in DynamoDB. A relational
database management system (RDBMS) requires you to define the table's schema when you create it. In
contrast, DynamoDB tables are schemaless—other than the primary key, you do not need to define any
attributes or data types at table creation time.
SQL
Use the CREATE TABLE statement to create a table, as shown in the following example. (The column
definitions match the DESCRIBE Music output shown later in this section.)
CREATE TABLE Music (
    Artist VARCHAR(20) NOT NULL,
    SongTitle VARCHAR(30) NOT NULL,
    AlbumTitle VARCHAR(25),
    Year INT,
    Price FLOAT,
    Genre VARCHAR(10),
    Tags TEXT,
    PRIMARY KEY(Artist, SongTitle)
);
The primary key for this table consists of Artist and SongTitle.
You must define all of the table's columns and data types, and the table's primary key. (You can use the
ALTER TABLE statement to change these definitions later, if necessary.)
Many SQL implementations let you define storage specifications for your table, as part of the CREATE
TABLE statement. Unless you indicate otherwise, the table is created with default storage settings. In a
production environment, a database administrator can help determine the optimal storage parameters.
DynamoDB
Use the CreateTable action to create a provisioned mode table, specifying parameters as shown
following:
{
TableName : "Music",
KeySchema: [
{
AttributeName: "Artist",
KeyType: "HASH", //Partition key
},
{
AttributeName: "SongTitle",
KeyType: "RANGE" //Sort key
}
],
AttributeDefinitions: [
{
AttributeName: "Artist",
AttributeType: "S"
},
{
AttributeName: "SongTitle",
AttributeType: "S"
}
],
ProvisionedThroughput: { // Only specified if using provisioned mode
ReadCapacityUnits: 1,
WriteCapacityUnits: 1
}
}
The primary key for this table consists of Artist (partition key) and SongTitle (sort key).
Note
For code examples using CreateTable, see Getting Started with DynamoDB SDK (p. 74).
Getting Information About a Table
SQL
Most relational database management systems (RDBMSs) allow you to describe a table's structure—
columns, data types, primary key definition, and so on. There is no standard way to do this in SQL.
However, many database systems provide a DESCRIBE command. Here is an example from MySQL:
DESCRIBE Music;
This returns the structure of your table, with all of the column names, data types, and sizes:
+------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| Artist | varchar(20) | NO | PRI | NULL | |
| SongTitle | varchar(30) | NO | PRI | NULL | |
| AlbumTitle | varchar(25) | YES | | NULL | |
| Year | int(11) | YES | | NULL | |
| Price | float | YES | | NULL | |
| Genre | varchar(10) | YES | | NULL | |
| Tags | text | YES | | NULL | |
+------------+-------------+------+-----+---------+-------+
The primary key for this table consists of Artist and SongTitle.
DynamoDB
DynamoDB has a DescribeTable action, which is similar. The only parameter is the table name, as
shown following:
{
TableName : "Music"
}
The response from DescribeTable looks like the following:
{
"Table": {
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"TableName": "Music",
"KeySchema": [
{
"AttributeName": "Artist",
"KeyType": "HASH" //Partition key
},
{
"AttributeName": "SongTitle",
"KeyType": "RANGE" //Sort key
}
],
...
DescribeTable also returns information about indexes on the table, provisioned throughput settings,
an approximate item count, and other metadata.
Writing Data To a Table
This section describes how to write one row (or item) to a table.
SQL
A table in a relational database is a two-dimensional data structure composed of rows and columns.
Some database management systems also provide support for semi-structured data, usually with native
JSON or XML data types. However, the implementation details vary among vendors.
Use the INSERT statement to add a row to a table, as in the following example. (The values mirror the
PutItem example shown later in this section.)
INSERT INTO Music
    (Artist, SongTitle, AlbumTitle, Year, Price, Genre, Tags)
VALUES(
    'No One You Know', 'Call Me Today', 'Somewhat Famous', 2015, 2.14, 'Country',
    '{"Composers": ["Smith", "Jones", "Davis"], "LengthInSeconds": 214}'
);
The primary key for this table consists of Artist and SongTitle. You must specify values for these columns.
Note
In this example, we are using the Tags column to store semi-structured data about the songs in
the Music table. We have defined the Tags column as type TEXT, which can store up to 65535
characters in MySQL.
DynamoDB
In Amazon DynamoDB, you use the PutItem action to add an item to a table:
{
TableName: "Music",
Item: {
"Artist":"No One You Know",
"SongTitle":"Call Me Today",
"AlbumTitle":"Somewhat Famous",
"Year": 2015,
"Price": 2.14,
"Genre": "Country",
"Tags": {
"Composers": [
"Smith",
"Jones",
"Davis"
],
"LengthInSeconds": 214
}
}
}
The primary key for this table consists of Artist and SongTitle. You must specify values for these
attributes.
Here are some key things to know about this PutItem example:
• DynamoDB provides native support for documents, using JSON. This makes DynamoDB ideal for
storing semi-structured data, such as Tags. You can also retrieve and manipulate data from within
JSON documents.
• The Music table does not have any predefined attributes, other than the primary key (Artist and
SongTitle).
• Most SQL databases are transaction-oriented. When you issue an INSERT statement, the data
modifications are not permanent until you issue a COMMIT statement. With Amazon DynamoDB, the
effects of a PutItem action are permanent when DynamoDB replies with an HTTP 200 status code
(OK).
Note
For code examples using PutItem, see Getting Started with DynamoDB SDK (p. 74).
The following PutItem actions add more songs to the Music table. Note that each item can have a different set of attributes, apart from the required primary key:
{
TableName: "Music",
Item: {
"Artist": "No One You Know",
"SongTitle": "My Dog Spot",
"AlbumTitle":"Hey Now",
"Price": 1.98,
"Genre": "Country",
"CriticRating": 8.4
}
}
{
TableName: "Music",
Item: {
"Artist": "No One You Know",
"SongTitle": "Somewhere Down The Road",
"AlbumTitle":"Somewhat Famous",
"Genre": "Country",
"CriticRating": 8.4,
"Year": 1984
}
}
{
TableName: "Music",
Item: {
"Artist": "The Acme Band",
"SongTitle": "Still In Love",
{
TableName: "Music",
Item: {
"Artist": "The Acme Band",
"SongTitle": "Look Out, World",
"AlbumTitle":"The Buck Starts Here",
"Price": 0.99,
"Genre": "Rock"
}
}
Note
In addition to PutItem, Amazon DynamoDB supports a BatchWriteItem action for writing
multiple items at the same time.
Reading Data From a Table
With SQL, you use the SELECT statement to retrieve data. DynamoDB provides the following operations for reading data:
• GetItem – Retrieves a single item from a table. This is the most efficient way to read a single item,
because it provides direct access to the physical location of the item. (DynamoDB also provides the
BatchGetItem operation, allowing you to perform up to 100 GetItem calls in a single operation.)
• Query – Retrieves all of the items that have a specific partition key. Within those items, you can apply
a condition to the sort key and retrieve only a subset of the data. Query provides quick, efficient
access to the partitions where the data is stored. (For more information, see Partitions and Data
Distribution (p. 20).)
• Scan – Retrieves all of the items in the specified table. (This operation should not be used with large
tables, because it can consume large amounts of system resources.)
Note
With a relational database, you can use the SELECT statement to join data from multiple
tables and return the results. Joins are fundamental to the relational model. To ensure that
joins execute efficiently, the database and its applications should be performance-tuned on an
ongoing basis.
DynamoDB is a non-relational NoSQL database, and does not support table joins. Instead,
applications read data from one table at a time.
The following sections describe different use cases for reading data, and how to perform these tasks with
a relational database and with DynamoDB.
Topics
• Reading an Item Using Its Primary Key (p. 32)
• Querying a Table (p. 33)
• Scanning a Table (p. 35)
Reading an Item Using Its Primary Key
SQL
In SQL, you use the SELECT statement to retrieve data from a table. You can request one or more
columns in the result (or all of them, if you use the * operator). The WHERE clause determines which
row(s) to return.
The following is a SELECT statement to retrieve a single row from the Music table. The WHERE clause
specifies the primary key values.
SELECT *
FROM Music
WHERE Artist='No One You Know' AND SongTitle = 'Call Me Today'
You can modify this query to retrieve only a subset of the columns:
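SELECT AlbumTitle, Year, Price
FROM Music
WHERE Artist='No One You Know' AND SongTitle = 'Call Me Today'
(The column list shown is illustrative; any subset of the table's columns works.)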
Note that the primary key for this table consists of Artist and SongTitle.
DynamoDB
DynamoDB provides the GetItem action for retrieving an item by its primary key. GetItem is highly
efficient because it provides direct access to the physical location of the item. (For more information, see
Partitions and Data Distribution (p. 20).)
By default, GetItem returns the entire item with all of its attributes.
{
TableName: "Music",
Key: {
"Artist": "No One You Know",
"SongTitle": "Call Me Today"
}
}
You can add a ProjectionExpression parameter to return only some of the attributes:
{
TableName: "Music",
Key: {
"Artist": "No One You Know",
"SongTitle": "Call Me Today"
},
ProjectionExpression: "AlbumTitle, Year" // attribute list is illustrative
}
Note that the primary key for this table consists of Artist and SongTitle.
The DynamoDB GetItem action is very efficient: It uses the primary key value(s) to determine the exact
storage location of the item in question, and retrieves it directly from there. The SQL SELECT statement
is similarly efficient, in the case of retrieving items by primary key values.
The SQL SELECT statement supports many kinds of queries and table scans. DynamoDB provides similar
functionality with its Query and Scan actions, which are described in Querying a Table (p. 33) and
Scanning a Table (p. 35).
The SQL SELECT statement can perform table joins, allowing you to retrieve data from multiple
tables at the same time. Joins are most effective where the database tables are normalized and the
relationships among the tables are clear. However, if you join too many tables in one SELECT statement,
application performance can be affected. You can work around such issues by using database replication,
materialized views, or query rewrites.
DynamoDB is a non-relational database. As such, it does not support table joins. If you are migrating an
existing application from a relational database to DynamoDB, you need to denormalize your data model
to eliminate the need for joins.
Note
For code examples using GetItem, see Getting Started with DynamoDB SDK (p. 74).
Querying a Table
Another common access pattern is reading multiple items from a table, based on your query criteria.
SQL
The SQL SELECT statement lets you query on key columns, non-key columns, or any combination. The
WHERE clause determines which rows are returned, as shown in the following examples:
/* Return all of the songs by an artist, with a particular word in the title...
...but only if the price is less than 1.00 */
SELECT *
FROM Music
WHERE Artist = 'No One You Know'
AND SongTitle LIKE '%Today%'
AND Price < 1.00
Note that the primary key for this table consists of Artist and SongTitle.
DynamoDB
The DynamoDB Query action lets you retrieve data in a similar fashion. The Query action provides quick,
efficient access to the physical locations where the data is stored. (For more information, see Partitions
and Data Distribution (p. 20).)
You can use Query with any table that has a composite primary key (partition key and sort key).
You must specify an equality condition for the partition key, and you can optionally provide another
condition for the sort key.
The KeyConditionExpression parameter specifies the key values that you want to query. You can use
an optional FilterExpression to remove certain items from the results before they are returned to
you.
Note that the primary key for this table consists of Artist and SongTitle.
// Return a single song, by primary key
{
TableName: "Music",
KeyConditionExpression: "Artist = :a and SongTitle = :t",
ExpressionAttributeValues: {
":a": "No One You Know",
":t": "Call Me Today"
}
}
// Return all of the songs by an artist
{
TableName: "Music",
KeyConditionExpression: "Artist = :a",
ExpressionAttributeValues: {
":a": "No One You Know"
}
}
// Return all of the songs by an artist, matching the first part of the title
{
TableName: "Music",
KeyConditionExpression: "Artist = :a and begins_with(SongTitle, :t)",
ExpressionAttributeValues: {
":a": "No One You Know",
":t": "Call"
}
}
Note
For code examples using Query, see Getting Started with DynamoDB SDK (p. 74).
Scanning a Table
In SQL, a SELECT statement without a WHERE clause will return every row in a table. In DynamoDB,
the Scan operation does the same thing. In both cases, you can retrieve all of the items, or just some of
them.
Whether you are using a SQL or a NoSQL database, scans should be used sparingly because they can
consume large amounts of system resources. Sometimes a scan is appropriate (such as scanning a small
table) or unavoidable (such as performing a bulk export of data). However, as a general rule, you should
design your applications to avoid performing scans.
SQL
In SQL, you can scan a table and retrieve all of its data by using a SELECT statement without specifying
a WHERE clause. You can request one or more columns in the result. Or, you can request all of them if you
use the wildcard character (*).
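For example:
SELECT *
FROM Music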
DynamoDB
DynamoDB provides a Scan action that works in a similar way. Here are some examples:
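The following requests are representative sketches; the attribute list in the second example is illustrative:
// Return all of the data in the table
{
    TableName: "Music"
}
// Return only the Artist and SongTitle values
{
    TableName: "Music",
    ProjectionExpression: "Artist, SongTitle"
}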
The Scan action also provides a FilterExpression parameter, to discard items that you do not want
to appear in the results. A FilterExpression is applied after the entire table is scanned, but before
the results are returned to you. (This is not recommended with large tables: You are still charged for the
entire Scan, even if only a few matching items are returned.)
Note
For code examples that use Scan, see Getting Started with DynamoDB SDK (p. 74).
Managing Indexes
Indexes give you access to alternate query patterns, and can speed up queries. This section compares and
contrasts index creation and usage in SQL and DynamoDB.
Whether you are using a relational database or DynamoDB, you should be judicious with index creation.
Whenever a write occurs on a table, all of the table's indexes must be updated. In a write-heavy
environment with large tables, this can consume large amounts of system resources. In a read-only or
read-mostly environment, this is not as much of a concern—however, you should ensure that the indexes
are actually being used by your application, and not simply taking up space.
Topics
• Creating an Index (p. 36)
• Querying and Scanning an Index (p. 37)
Creating an Index
SQL
In a relational database, an index is a data structure that lets you perform fast queries on different
columns in a table. You can use the CREATE INDEX SQL statement to add an index to an existing table,
specifying the columns to be indexed. After the index has been created, you can query the data in the
table as usual, but now the database can use the index to quickly find the specified rows in the table
instead of scanning the entire table.
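For example, the following sketch creates an index named GenreAndPriceIndex on the Music table. (This is the index that the DynamoDB examples later in this section assume.)
CREATE INDEX GenreAndPriceIndex
ON Music (Genre, Price);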
After you create an index, the database maintains it for you. Whenever you modify data in the table, the
index is automatically modified to reflect changes in the table.
DynamoDB
In DynamoDB, you can create and use a secondary index for similar purposes.
Indexes in DynamoDB are different from their relational counterparts. When you create a secondary
index, you must specify its key attributes – a partition key and a sort key. After you create the secondary
index, you can Query it or Scan it just as you would with a table. DynamoDB does not have a query
optimizer, so a secondary index is only used when you Query it or Scan it.
DynamoDB supports two kinds of secondary indexes:
• Global secondary indexes – The primary key of the index can be any two attributes from its table.
• Local secondary indexes – The partition key of the index must be the same as the partition key of its
table. However, the sort key can be any other attribute.
DynamoDB ensures that the data in a secondary index is eventually consistent with its table. You can
request strongly consistent Query or Scan actions on a table or a local secondary index. However, global
secondary indexes only support eventual consistency.
You can add a global secondary index to an existing table, using the UpdateTable action and specifying
GlobalSecondaryIndexUpdates:
{
TableName: "Music",
AttributeDefinitions:[
{AttributeName: "Genre", AttributeType: "S"},
{AttributeName: "Price", AttributeType: "N"}
],
GlobalSecondaryIndexUpdates: [
{
Create: {
IndexName: "GenreAndPriceIndex",
KeySchema: [
{AttributeName: "Genre", KeyType: "HASH"}, //Partition key
{AttributeName: "Price", KeyType: "RANGE"}, //Sort key
],
Projection: {
"ProjectionType": "ALL"
},
ProvisionedThroughput: { // Only specified if using provisioned mode
"ReadCapacityUnits": 1, "WriteCapacityUnits": 1
}
}
}
]
}
Part of this operation involves backfilling data from the table into the new index. During backfilling, the
table remains available. However, the index is not ready until its Backfilling attribute changes from
true to false. You can use the DescribeTable action to view this attribute.
Note
For code examples that use UpdateTable, see Getting Started with DynamoDB SDK (p. 74).
Querying and Scanning an Index
SQL
A query optimizer is a relational database management system (RDBMS) component that evaluates the
available indexes, and determines whether they can be used to speed up a query. If the indexes can be
used to speed up a query, the RDBMS accesses the index first and then uses it to locate the data in the
table.
Here are some SQL statements that can use GenreAndPriceIndex to improve performance. We assume
that the Music table has enough data in it that the query optimizer decides to use this index, rather than
simply scanning the entire table.
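The following sketches parallel the DynamoDB queries shown next:
/* All of the rock songs */
SELECT * FROM Music
WHERE Genre = 'Rock';

/* All of the cheap country songs */
SELECT Artist, SongTitle, Price FROM Music
WHERE Genre = 'Country' AND Price < 0.50;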
DynamoDB
In DynamoDB, you perform Query operations directly on the index, in the same way that you would do
so on a table. You must specify both TableName and IndexName.
The following are some queries on GenreAndPriceIndex in DynamoDB. (The key schema for this index
consists of Genre and Price.)
// All of the rock songs
{
TableName: "Music",
IndexName: "GenreAndPriceIndex",
KeyConditionExpression: "Genre = :genre",
ExpressionAttributeValues: {
":genre": "Rock"
},
};
// All of the cheap country songs
{
TableName: "Music",
IndexName: "GenreAndPriceIndex",
KeyConditionExpression: "Genre = :genre and Price < :price",
ExpressionAttributeValues: {
":genre": "Country",
":price": 0.50
},
ProjectionExpression: "Artist, SongTitle, Price"
};
This example uses a ProjectionExpression to indicate that we only want some of the attributes,
rather than all of them, to appear in the results.
You can also perform Scan operations on a secondary index, in the same way that you would do so on a
table. The following is a scan on GenreAndPriceIndex:
{
TableName: "Music",
IndexName: "GenreAndPriceIndex"
}
Modifying Data in a Table
SQL
In SQL, you use the UPDATE statement to modify one or more rows. The SET clause specifies new values
for one or more columns, and the WHERE clause determines which rows are modified. Here is an example:
UPDATE Music
SET RecordLabel = 'Global Records'
WHERE Artist = 'No One You Know' AND SongTitle = 'Call Me Today';
If no rows match the WHERE clause, the UPDATE statement has no effect.
DynamoDB
In DynamoDB, you use the UpdateItem action to modify a single item. (If you want to modify multiple
items, you must use multiple UpdateItem operations.)
Here is an example:
{
TableName: "Music",
Key: {
"Artist":"No One You Know",
"SongTitle":"Call Me Today"
},
UpdateExpression: "SET RecordLabel = :label",
ExpressionAttributeValues: {
":label": "Global Records"
}
}
You must specify the Key attributes of the item to be modified, and an UpdateExpression to specify
attribute values.
UpdateItem behaves like an "upsert" operation: The item is updated if it exists in the table, but if not, a
new item is added (inserted).
UpdateItem supports conditional writes, where the operation succeeds only if a specific
ConditionExpression evaluates to true. For example, the following UpdateItem action does not
perform the update unless the price of the song is greater than or equal to 2.00:
{
TableName: "Music",
Key: {
"Artist":"No One You Know",
"SongTitle":"Call Me Today"
},
UpdateExpression: "SET RecordLabel = :label",
ConditionExpression: "Price >= :p",
ExpressionAttributeValues: {
":label": "Global Records",
":p": 2.00
}
}
UpdateItem also supports atomic counters, or attributes of type Number that can be incremented or
decremented. Atomic counters are similar in many ways to sequence generators, identity columns, or
auto-increment fields in SQL databases.
The following is an example of an UpdateItem action to initialize a new attribute (Plays) to keep track
of the number of times a song has been played:
{
TableName: "Music",
Key: {
"Artist":"No One You Know",
"SongTitle":"Call Me Today"
},
UpdateExpression: "SET Plays = :val",
ExpressionAttributeValues: {
":val": 0
},
ReturnValues: "UPDATED_NEW"
}
The ReturnValues parameter is set to UPDATED_NEW, which returns the new values of any attributes
that were updated. In this case, it returns 0 (zero).
Whenever someone plays this song, we can use the following UpdateItem action to increment Plays by
one:
{
TableName: "Music",
Key: {
"Artist":"No One You Know",
"SongTitle":"Call Me Today"
},
UpdateExpression: "SET Plays = Plays + :incr",
ExpressionAttributeValues: {
":incr": 1
},
ReturnValues: "UPDATED_NEW"
}
Note
For code examples using UpdateItem, see Getting Started with DynamoDB SDK (p. 74).
Deleting Data from a Table
SQL
In SQL, you use the DELETE statement to delete one or more rows. The WHERE clause determines the
rows that you want to delete. Here is an example:
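DELETE FROM Music
WHERE Artist = 'The Acme Band' AND SongTitle = 'Look Out, World';
(This sketch uses the same song as the DeleteItem example that follows.)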
You can modify the WHERE clause to delete multiple rows. For example, you could delete all of the songs
by a particular artist, as shown following:
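DELETE FROM Music
WHERE Artist = 'The Acme Band';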
Note
If you omit the WHERE clause, the database attempts to delete all of the rows from the table.
DynamoDB
In DynamoDB, you use the DeleteItem action to delete data from a table, one item at a time. You must
specify the item's primary key values. Here is an example:
{
TableName: "Music",
Key: {
Artist: "The Acme Band",
SongTitle: "Look Out, World"
}
}
Note
In addition to DeleteItem, Amazon DynamoDB supports a BatchWriteItem action for
deleting multiple items at the same time.
DeleteItem supports conditional writes, where the operation succeeds only if a specific
ConditionExpression evaluates to true. For example, the following DeleteItem action deletes the
item only if it has a RecordLabel attribute:
{
TableName: "Music",
Key: {
Artist: "The Acme Band",
SongTitle: "Look Out, World"
},
ConditionExpression: "attribute_exists(RecordLabel)"
}
Note
For code examples using DeleteItem, see Getting Started with DynamoDB SDK (p. 74).
Removing a Table
In SQL, you use the DROP TABLE statement to remove a table. In DynamoDB, you use the DeleteTable
operation.
SQL
When you no longer need a table and want to discard it permanently, you use the DROP TABLE
statement in SQL:
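DROP TABLE Music;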
After a table is dropped, it cannot be recovered. (Some relational databases do allow you to undo a DROP
TABLE operation, but this is vendor-specific functionality and it is not widely implemented.)
DynamoDB
DynamoDB has a similar action: DeleteTable. In the following example, the table is permanently
deleted:
{
TableName: "Music"
}
Note
For code examples using DeleteTable, see Getting Started with DynamoDB SDK (p. 74).
Setting Up DynamoDB
In addition to the Amazon DynamoDB web service, AWS provides a downloadable version of DynamoDB
that you can run on your computer. The downloadable version lets you write and test applications locally
without accessing the DynamoDB web service.
The topics in this section describe how to set up DynamoDB (Downloadable Version) and the DynamoDB
web service.
Topics
• Setting Up DynamoDB Local (Downloadable Version) (p. 43)
• Setting Up DynamoDB (Web Service) (p. 48)
Having this local version helps you save on provisioned throughput, data storage, and data transfer fees.
In addition, you don't need an internet connection while you're developing your application.
If you prefer to use the Amazon DynamoDB web service instead, see Setting Up DynamoDB (Web
Service) (p. 48).
Topics
• DynamoDB (Downloadable Version) on Your Computer (p. 43)
• DynamoDB (Downloadable Version) and Apache Maven (p. 45)
• DynamoDB (Downloadable Version) and Docker (p. 45)
• DynamoDB Usage Notes (p. 46)
Downloadable DynamoDB is available on Apache Maven. For more information, see DynamoDB
(Downloadable Version) and Apache Maven (p. 45), later in this topic. DynamoDB is also available
as part of the AWS Toolkit for Eclipse. For more information, see AWS Toolkit For Eclipse.
Important
To run DynamoDB on your computer, you must have the Java Runtime Environment (JRE)
version 6.x or newer. The application doesn't run on earlier JRE versions.
1. Download the DynamoDB (downloadable version) archive to your computer.
2. After you download the archive, extract the contents and copy the extracted directory to a location
of your choice.
3. To start DynamoDB on your computer, open a command prompt window, navigate to the directory
where you extracted DynamoDBLocal.jar, and type the following command:
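java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
(The -sharedDb option shown here is optional; see the command line options later in this section.)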
Note
If you're using Windows PowerShell, be sure to enclose the parameter name, or the entire
name and value, in quotation marks, like this:
java -D"java.library.path=./DynamoDBLocal_lib" -jar DynamoDBLocal.jar
DynamoDB processes incoming requests until you stop it. To stop DynamoDB, press Ctrl+C
at the command prompt.
DynamoDB uses port 8000 by default. If port 8000 is unavailable, this command throws an
exception. For a complete list of DynamoDB runtime options, including -port, type this
command:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -help
4. Before you can access DynamoDB programmatically or through the AWS Command Line Interface
(AWS CLI), you must configure your credentials to enable authorization for your applications.
The downloadable version of DynamoDB requires credentials to be present, but they don't have to be valid AWS credentials. For example:
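AWS Access Key ID: "fakeMyKeyId"
AWS Secret Access Key: "fakeSecretAccessKey"
(These placeholder values are illustrative; any values work with the downloadable version.)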
You can use the aws configure command of the AWS CLI to set up credentials. For more
information, see Using the CLI (p. 53).
5. You can start writing applications. To access DynamoDB running locally, use the --endpoint-url
parameter. For example, use the following command to list DynamoDB tables:
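aws dynamodb list-tables --endpoint-url http://localhost:8000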
DynamoDB (Downloadable Version) and Apache Maven
1. Download and install Apache Maven. For more information, see Downloading Apache Maven and
Installing Apache Maven.
2. Add the DynamoDB Maven repository to your application's Project Object Model (POM) file:
<!--Dependency:-->
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>DynamoDBLocal</artifactId>
<version>[1.11,2.0)</version>
</dependency>
</dependencies>
<!--Custom repository:-->
<repositories>
<repository>
<id>dynamodb-local-oregon</id>
<name>DynamoDB Local Release Repository</name>
<url>https://s3-us-west-2.amazonaws.com/dynamodb-local/release</url>
</repository>
</repositories>
Note
Alternatively, you can use one of the following repository URLs, depending on your AWS
Region:
id                         Repository URL
dynamodb-local-mumbai      https://s3.ap-south-1.amazonaws.com/dynamodb-local-mumbai/release
dynamodb-local-singapore   https://s3-ap-southeast-1.amazonaws.com/dynamodb-local-singapore/release
dynamodb-local-tokyo       https://s3-ap-northeast-1.amazonaws.com/dynamodb-local-tokyo/release
dynamodb-local-frankfurt   https://s3.eu-central-1.amazonaws.com/dynamodb-local-frankfurt/release
dynamodb-local-sao-paulo   https://s3-sa-east-1.amazonaws.com/dynamodb-local-sao-paulo/release
The aws-dynamodb-examples repository on GitHub contains examples for starting and stopping
DynamoDB Local inside a Java program and using DynamoDB Local in JUnit tests.
For an example of using DynamoDB local as part of a REST application built on the AWS Serverless
Application Model (AWS SAM), see SAM DynamoDB application for managing orders. This sample
application demonstrates how to use DynamoDB local for testing.
DynamoDB Usage Notes
• If you use the -sharedDb option, DynamoDB creates a single database file named shared-local-
instance.db. Every program that connects to DynamoDB accesses this file. If you delete the file, you
lose any data that you have stored in it.
• If you omit -sharedDb, the database file is named myaccesskeyid_region.db, with the AWS access key
ID and Region as they appear in your application configuration. If you delete the file, you lose any data
that you have stored in it.
• If you use the -inMemory option, DynamoDB doesn't write any database files at all. Instead, all data is
written to memory, and the data is not saved when you terminate DynamoDB.
• If you use the -optimizeDbBeforeStartup option, you must also specify the -dbPath parameter
so that DynamoDB can find its database file.
• The AWS SDKs for DynamoDB require you to specify an access key value and an AWS Region value.
Unless you're using the -sharedDb or the -inMemory option, DynamoDB uses these values to name
the local database file. These values don't have to be valid AWS values to run locally. However, you
might find it convenient to use valid values so that you can run your code in the cloud later simply by
changing the endpoint you're using.
Topics
• Command Line Options (p. 46)
• Setting the Local Endpoint (p. 47)
• Differences Between Downloadable DynamoDB and the DynamoDB Web Service (p. 47)
Command Line Options
• -cors value — Enables support for cross-origin resource sharing (CORS) for JavaScript. You must
provide a comma-separated "allow" list of specific domains. The default setting for -cors is an
asterisk (*), which allows public access.
• -dbPath value — The directory where DynamoDB writes its database file. If you don't specify this
option, the file is written to the current directory. You can't specify both -dbPath and -inMemory at
once.
• -delayTransientStatuses — Causes DynamoDB to introduce delays for certain operations.
DynamoDB (Downloadable Version) can perform some tasks almost instantaneously, such as create/
update/delete operations on tables and indexes. However, the DynamoDB service requires more time
for these tasks. Setting this parameter helps DynamoDB running on your computer simulate the
behavior of the DynamoDB web service more closely. (Currently, this parameter introduces delays only
for global secondary indexes that are in either CREATING or DELETING status.)
• -help — Prints a usage summary and options.
• -inMemory — DynamoDB runs in memory instead of using a database file. When you stop
DynamoDB, none of the data is saved. You can't specify both -dbPath and -inMemory at once.
• -optimizeDbBeforeStartup — Optimizes the underlying database tables before starting
DynamoDB on your computer. You also must specify -dbPath when you use this parameter.
• -port value — The port number that DynamoDB uses to communicate with your application. If you
don't specify this option, the default port is 8000.
Note
DynamoDB uses port 8000 by default. If port 8000 is unavailable, this command throws an
exception. You can use the -port option to specify a different port number. For a complete
list of DynamoDB runtime options, including -port, type this command:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -help
• -sharedDb — If you specify -sharedDb, DynamoDB uses a single database file instead of separate
files for each credential and Region.
Setting the Local Endpoint
By default, the AWS SDKs and tools use endpoints for the Amazon DynamoDB web service. To use them
with the downloadable version of DynamoDB, you must specify the local endpoint:
http://localhost:8000
AWS CLI
To access DynamoDB running locally, use the --endpoint-url parameter. The following is an example
of using the AWS CLI to list the tables in DynamoDB on your computer:
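aws dynamodb list-tables --endpoint-url http://localhost:8000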
Note
The AWS CLI can't use the downloadable version of DynamoDB as a default endpoint. Therefore,
you must specify --endpoint-url with each AWS CLI command.
AWS SDKs
The way you specify an endpoint depends on the programming language and AWS SDK you're using. The
following sections describe how to do this:
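For example, with the AWS SDK for Java you can point the client at the local endpoint, as in the following sketch. (The Region value is arbitrary when running locally.)
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withEndpointConfiguration(
        new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
    .build();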
Note
For examples in other programming languages, see Getting Started with DynamoDB
SDK (p. 74).
Differences Between Downloadable DynamoDB and the DynamoDB Web Service
The downloadable version of DynamoDB differs from the web service in the following ways:
• Regions and distinct AWS accounts are not supported at the client level.
• Provisioned throughput settings are ignored in downloadable DynamoDB, even though the
CreateTable operation requires them. For CreateTable, you can specify any numbers you want
for provisioned read and write throughput, even though these numbers are not used. You can call
UpdateTable as many times as you want per day. However, any changes to provisioned throughput
values are ignored.
• Scan operations are performed sequentially. Parallel scans are not supported. The Segment and
TotalSegments parameters of the Scan operation are ignored.
• The speed of read and write operations on table data is limited only by the speed of your computer.
CreateTable, UpdateTable, and DeleteTable operations occur immediately, and table state is
always ACTIVE. UpdateTable operations that change only the provisioned throughput settings on
tables or global secondary indexes occur immediately. If an UpdateTable operation creates or deletes
any global secondary indexes, then those indexes transition through normal states (such as CREATING
and DELETING, respectively) before they become an ACTIVE state. The table remains ACTIVE during
this time.
• Read operations are eventually consistent. However, due to the speed of DynamoDB running on your
computer, most reads appear to be strongly consistent.
• Item collection metrics and item collection sizes are not tracked. In operation responses, nulls are
returned instead of item collection metrics.
• In DynamoDB, there is a 1 MB limit on data returned per result set. Both the DynamoDB web service
and the downloadable version enforce this limit. However, when querying an index, the DynamoDB
service calculates only the size of the projected key and attributes. By contrast, the downloadable
version of DynamoDB calculates the size of the entire item.
• If you're using DynamoDB Streams, the rate at which shards are created might differ. In the DynamoDB
web service, shard-creation behavior is partially influenced by table partition activity. When you
run DynamoDB locally, there is no table partitioning. In either case, shards are ephemeral, so your
application should not be dependent on shard behavior.
• TransactionConflictExceptions are not thrown by downloadable DynamoDB for
transactional APIs. We recommend that you use a Java mocking framework to simulate
TransactionConflictExceptions in the DynamoDB handler to test how your application
responds to conflicting transactions.
Signing Up for AWS
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.
Getting an AWS Access Key
Access keys consist of an access key ID and secret access key, which are used to sign programmatic
requests that you make to AWS. If you don't have access keys, you can create them from the AWS
Management Console. As a best practice, do not use the AWS account root user access keys for any task
where it's not required. Instead, create a new administrator IAM user with access keys for yourself.
The only time that you can view or download the secret access key is when you create the keys. You
cannot recover them later. However, you can create new access keys at any time. You must also have
permissions to perform the required IAM actions. For more information, see Permissions Required to
Access IAM Resources in the IAM User Guide.
1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
2. In the navigation pane, choose Users.
3. Choose the name of the user whose access keys you want to create, and then choose the Security
credentials tab.
4. In the Access keys section, choose Create access key.
5. To view the new access key pair, choose Show. You will not have access to the secret access key again
after this dialog box closes. Your credentials will look something like this:
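Access key ID: AKIAIOSFODNN7EXAMPLE
Secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
(These are placeholder values, not real credentials.)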
Keep the keys confidential in order to protect your AWS account and never email them. Do not share
them outside your organization, even if an inquiry appears to come from AWS or Amazon.com. No
one who legitimately represents Amazon will ever ask you for your secret key.
6. Choose Download .csv file to save the access key ID and secret access key to a .csv file on your
computer. Store the file in a secure location.
7. After you download the .csv file, choose Close. When you create an access key, the key pair is active
by default, and you can use the pair right away.
There are several ways to configure your AWS credentials. For example, you can manually create the credentials file to store your
AWS access key ID and secret access key. You can also use the aws configure command of the AWS CLI
to automatically create the file. Alternatively, you can use environment variables. For more information
on configuring your credentials, see the programming-specific AWS SDK developer guide.
To install and configure the AWS CLI, see Using the CLI (p. 53).
Accessing DynamoDB
You can access Amazon DynamoDB using the AWS Management Console, the AWS Command Line
Interface (AWS CLI), or the DynamoDB API.
Topics
• Using the Console (p. 51)
• Using the CLI (p. 53)
• Using the API (p. 55)
• IP Address Ranges (p. 55)
Using the Console
You can use the DynamoDB console to do the following:
• Monitor recent alerts, total capacity, service health, and the latest DynamoDB news on the DynamoDB
dashboard.
• Create, update, and delete tables. The capacity calculator provides estimates of how many capacity
units to request based on the usage information you provide.
• Manage streams.
• View, add, update, and delete items that are stored in tables. Manage Time To Live (TTL) to define
when items in a table expire so that they can be automatically deleted from the database.
• Query and scan a table.
• Set up and view alarms to monitor your table's capacity usage. View your table's top monitoring
metrics on real-time graphs from CloudWatch.
• Modify a table's provisioned capacity.
• Create and delete global secondary indexes.
• Create triggers to connect DynamoDB streams to AWS Lambda functions.
• Apply tags to your resources to help organize and identify them.
• Purchase reserved capacity.
The console displays an introductory screen that prompts you to create your first table. To view your
tables, from the navigation pane on the left side of the console, choose Tables.
Here's a high-level overview of the actions available per table within each navigation tab:
• Overview – View stream and table details, and manage streams and Time To Live (TTL).
• Items – Manage items and perform queries and scans.
• Metrics – Monitor CloudWatch metrics.
• Alarms – Manage CloudWatch alarms.
• Capacity – Modify a table's provisioned capacity.
• Indexes – Manage global secondary indexes.
• Triggers – Manage triggers to connect DynamoDB streams to Lambda functions.
• Access control – Set up fine-grained access control with web identity federation.
• Tags – Apply tags to your resources to help organize and identify them.
You can still change individual settings on console pages without having saved any user preferences.
Those choices persist until you close the console window. When you return to the console, any saved user
preferences are applied.
Note
User preferences are available only for IAM users. You can't set preferences if you use federated
access, temporary access, or an AWS account root user to access the console.
• Table detail view mode: View all the table-specific information vertically, horizontally, or covering the
full screen (if enabled, the navigation bar still appears).
• Show navigation bar: Enable this option to show the navigation bar on the left side (expanded).
Disable it to automatically collapse the navigation bar (you can expand it using the right chevron).
• Default entry page (Dashboard or Tables): Choose the page that loads when you access DynamoDB.
This option automatically loads the Dashboard or the Tables page, respectively.
• Items editor mode (Tree or Text): Choose the default editor mode to use when you create or edit an
item.
• Items default query type (Scan or Query): Choose the default query type to use when you access
the Items tab. Choose Scan if you want to either enable or disable the automatic scan operation that
occurs when accessing the Items tab.
• Automatic scan operation when accessing the items tab: If Scan is the default query type for items
and you enable this setting, an automatic scan operation occurs when you access the Items tab. If you
disable this setting, you can perform a scan by choosing Start search on the Items tab.
To view and save preferences in the DynamoDB console for your IAM user
1. Sign in as an IAM user. You can't configure user preferences for other user types.
2. In the title bar navigation, choose Preferences.
3. In Preferences, make changes to configure your preferences.
• To view the DynamoDB console default settings, choose Restore. These defaults are applied if you
choose Save.
Using the CLI
Before you can use the AWS CLI with DynamoDB, you must get an access key ID and secret access key. For
more information, see Getting an AWS Access Key (p. 49).
For a complete listing of all the commands available for DynamoDB in the AWS CLI, go to
https://docs.aws.amazon.com/cli/latest/reference/dynamodb/index.html.
Topics
• Downloading and Configuring the AWS CLI (p. 53)
• Using the AWS CLI with DynamoDB (p. 53)
• Using the AWS CLI with Downloadable DynamoDB (p. 54)
Using the AWS CLI with DynamoDB
For example, the following command creates a table named Music. The partition key is Artist, and the
sort key is SongTitle. (For easier readability, long commands in this section are broken into separate
lines.) The first three lines of the command are reconstructed here from the surrounding description:
aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
The following commands add new items to the table. These examples use a combination of shorthand
syntax and JSON.
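The following sketches assume a bash-like shell; the item values reuse earlier examples in this section:
aws dynamodb put-item \
    --table-name Music \
    --item '{
        "Artist": {"S": "No One You Know"},
        "SongTitle": {"S": "Call Me Today"},
        "AlbumTitle": {"S": "Somewhat Famous"}
    }' \
    --return-consumed-capacity TOTAL

aws dynamodb put-item \
    --table-name Music \
    --item '{
        "Artist": {"S": "Acme Band"},
        "SongTitle": {"S": "Happy Day"},
        "AlbumTitle": {"S": "Songs About Life"}
    }' \
    --return-consumed-capacity TOTAL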
On the command line, it can be difficult to compose valid JSON. However, the AWS CLI can read
JSON files. For example, consider the following JSON snippet, which is stored in a file named
key-conditions.json:
{
"Artist": {
"AttributeValueList": [
{
"S": "No One You Know"
}
],
"ComparisonOperator": "EQ"
},
"SongTitle": {
"AttributeValueList": [
{
"S": "Call Me Today"
}
],
"ComparisonOperator": "EQ"
}
}
You can now issue a Query request using the AWS CLI. In this example, the contents of the
key-conditions.json file are used for the --key-conditions parameter:
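aws dynamodb query --table-name Music --key-conditions file://key-conditions.json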
Using the AWS CLI with Downloadable DynamoDB
The AWS CLI can also interact with the downloadable version of DynamoDB running on your computer.
To enable this, add the following parameter to each command:
--endpoint-url http://localhost:8000
Here is an example that uses the AWS CLI to list the tables in a local database:
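aws dynamodb list-tables --endpoint-url http://localhost:8000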
If DynamoDB is using a port number other than the default (8000), modify the --endpoint-url value
accordingly.
Note
The AWS CLI can't use the downloadable version of DynamoDB as a default endpoint. Therefore,
you must specify --endpoint-url with each command.
Using the API
The AWS SDKs provide broad support for DynamoDB in Java, JavaScript in the browser, .NET, Node.js,
PHP, Python, Ruby, C++, Go, Android, and iOS. To get started quickly with these languages, see Getting
Started with DynamoDB SDK (p. 74).
Before you can use the AWS SDKs with DynamoDB, you must get an AWS access key ID and secret access
key. For more information, see Setting Up DynamoDB (Web Service) (p. 48).
For a high-level overview of DynamoDB application programming with the AWS SDKs, see Programming
with DynamoDB and the AWS SDKs (p. 211).
IP Address Ranges
Amazon Web Services (AWS) publishes its current IP address ranges in JSON format. To view the current
ranges, download ip-ranges.json. For more information, see AWS IP Address Ranges in the AWS General
Reference.
To find the IP address ranges that you can use to access DynamoDB tables and indexes, search the
ip-ranges.json file for the following string: "service": "DYNAMODB".
Note
The IP address ranges do not apply to DynamoDB Streams or DynamoDB Accelerator (DAX).
Getting Started with DynamoDB
Topics
• Basic Concepts in DynamoDB (p. 56)
• Prerequisites (p. 56)
• Step 1: Create a Table (p. 56)
• Step 2: Write Data to a Table (p. 59)
• Step 3: Read Data from a Table (p. 62)
• Step 4: Update Data in a Table (p. 63)
• Step 5: Query Data in a Table (p. 65)
• Step 6: Create a Global Secondary Index (p. 67)
• Step 7: Query the Global Secondary Index (p. 70)
• Step 8: (Optional) Clean Up Resources (p. 72)
• Getting Started with DynamoDB: Next Steps (p. 72)
Prerequisites
Before starting the Amazon DynamoDB tutorial, follow the steps in Setting Up DynamoDB. Then
continue on to Step 1: Create a Table (p. 56).
Note
• If you plan to interact with DynamoDB only through the AWS Management Console, you don't
need an AWS access key. Complete the steps in Signing Up for AWS, and then continue on to
Step 1: Create a Table (p. 56).
• If you don't want to sign up for a free tier account, you can set up DynamoDB Local
(Downloadable Version). Then continue on to Step 1: Create a Table (p. 56).
Step 1: Create a Table
In this step, you create a Music table with Artist as the partition key and SongTitle as the sort key.
For more information about table operations, see Working with Tables in DynamoDB (p. 333).
Note
Before you begin, make sure that you followed the steps in Prerequisites (p. 56).
1. Sign in to the AWS Management Console and open the DynamoDB console at
https://console.aws.amazon.com/dynamodb/.
2. In the navigation pane on the left side of the console, choose Dashboard.
3. On the right side of the console, choose Create Table.
AWS CLI
The following AWS CLI example creates a new Music table using create-table.
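A representative command follows; the key schema and provisioned throughput values match the output below:
aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5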
{
"TableDescription": {
"TableArn": "arn:aws:dynamodb:us-west-2:522194210714:table/Music",
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 5,
"ReadCapacityUnits": 10
},
"TableSizeBytes": 0,
"TableName": "Music",
"TableStatus": "CREATING",
"TableId": "d04c7240-0e46-435d-b231-d54091fe1017",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "Artist"
},
{
"KeyType": "RANGE",
"AttributeName": "SongTitle"
}
],
"ItemCount": 0,
"CreationDateTime": 1558028402.69
}
}
To verify that DynamoDB has finished creating the Music table, use the describe-table command.
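aws dynamodb describe-table --table-name Music | grep TableStatus
(The grep pipe simply filters the output down to the status line; on Windows, omit it and look for TableStatus in the full output.)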
This command returns the result shown below. When DynamoDB has finished creating the table, the
value of the TableStatus field is set to ACTIVE.
"TableStatus": "ACTIVE",
After creating the new table, proceed to Step 2: Write Data to a Table (p. 59).
Step 2: Write Data to a Table
In this step, you add items to the Music table that you created in Step 1.
For more information about write operations, see Writing an Item (p. 373).
2. In the navigation pane on the left side of the console, choose Tables.
3. In the table list, choose the Music table.
4. Choose the Items tab for the Music table.
5. On the Items tab, choose Create item.
11. Repeat this process and create another item with the following values:
AWS CLI
The following AWS CLI example creates two new items in the Music table using put-item.
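Representative commands follow; the attribute values match the items shown in later steps:
aws dynamodb put-item \
    --table-name Music \
    --item '{"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}, "AlbumTitle": {"S": "Somewhat Famous"}, "Awards": {"N": "1"}}'

aws dynamodb put-item \
    --table-name Music \
    --item '{"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}, "AlbumTitle": {"S": "Songs About Life"}, "Awards": {"N": "10"}}'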
For more information about supported data types in DynamoDB, see Data Types.
For more information about how to represent DynamoDB data types in JSON, see Attribute Values.
After writing data to your table, proceed to Step 3: Read Data from a Table (p. 62).
Step 3: Read Data from a Table
In this step, you read back one of the items that you created in Step 2.
For more information about read operations in DynamoDB, see Reading an Item (p. 373).
The first item in the list is the one with the Artist Acme Band and the SongTitle Happy Day.
AWS CLI
The following AWS CLI example reads an item from the Music table using get-item.
Note
The default behavior for DynamoDB is eventually consistent reads. The consistent-read
parameter is used below to demonstrate strongly consistent reads.
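A representative command, using the key values of the item shown in the output below:
aws dynamodb get-item --consistent-read \
    --table-name Music \
    --key '{"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}}'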
{
"Item": {
"AlbumTitle": {
"S": "Songs About Life"
},
"Awards": {
"N": "10"
},
"SongTitle": {
"S": "Happy Day"
},
"Artist": {
"S": "Acme Band"
}
}
}
To update the data in your table, proceed to Step 4: Update Data in a Table (p. 63).
Step 4: Update Data in a Table
In this step, you update one of the items that you created in Step 2.
For more information about write operations, see Writing an Item (p. 373).
AWS CLI
The following AWS CLI example updates an item in the Music table using update-item.
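A representative command; the update expression matches the changed AlbumTitle shown in the output below:
aws dynamodb update-item \
    --table-name Music \
    --key '{"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}}' \
    --update-expression "SET AlbumTitle = :newval" \
    --expression-attribute-values '{":newval": {"S": "Updated Album Title"}}' \
    --return-values ALL_NEW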
{
"Attributes": {
"AlbumTitle": {
"S": "Updated Album Title"
},
"Awards": {
"N": "10"
},
"SongTitle": {
"S": "Happy Day"
},
"Artist": {
"S": "Acme Band"
}
}
}
To query the data in the Music table, proceed to Step 5: Query Data in a Table (p. 65).
Step 5: Query Data in a Table
In this step, you query the Music table for all of the items that have a particular partition key value.
For more information about query operations, see Working with Queries (p. 455).
6. For Partition key, enter Acme Band, and then choose Start search.
AWS CLI
The following AWS CLI example queries an item in the Music table using query.
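A representative command; the key condition matches the partition key value used above:
aws dynamodb query \
    --table-name Music \
    --key-condition-expression "Artist = :name" \
    --expression-attribute-values '{":name": {"S": "Acme Band"}}'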
{
"Count": 1,
"Items": [
{
"AlbumTitle": {
"S": "Updated Album Title"
},
"Awards": {
"N": "10"
},
"SongTitle": {
"S": "Happy Day"
},
"Artist": {
"S": "Acme Band"
}
}
],
"ScannedCount": 1,
"ConsumedCapacity": null
}
To create a global secondary index for your table, proceed to Step 6: Create a Global Secondary
Index (p. 67).
Step 6: Create a Global Secondary Index
In this step, you create a global secondary index on the AlbumTitle attribute of the Music table.
For more information about global secondary indexes, see Global Secondary Indexes (p. 496).
6. For the Partition key, enter AlbumTitle, and then choose Create index.
AWS CLI
The following AWS CLI example creates a global secondary index AlbumTitle-index for the Music
table using update-table.
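A representative command, assuming a bash-like shell; the index definition matches the output below:
aws dynamodb update-table \
    --table-name Music \
    --attribute-definitions AttributeName=AlbumTitle,AttributeType=S \
    --global-secondary-index-updates \
        '[{"Create": {"IndexName": "AlbumTitle-index", "KeySchema": [{"AttributeName": "AlbumTitle", "KeyType": "HASH"}], "Projection": {"ProjectionType": "ALL"}, "ProvisionedThroughput": {"ReadCapacityUnits": 10, "WriteCapacityUnits": 5}}}]'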
{
"TableDescription": {
"TableArn": "arn:aws:dynamodb:us-west-2:522194210714:table/Music",
"AttributeDefinitions": [
{
"AttributeName": "AlbumTitle",
"AttributeType": "S"
},
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"GlobalSecondaryIndexes": [
{
"IndexSizeBytes": 0,
"IndexName": "AlbumTitle-index",
"Projection": {
"ProjectionType": "ALL"
},
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 5,
"ReadCapacityUnits": 10
},
"IndexStatus": "CREATING",
"Backfilling": false,
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "AlbumTitle"
}
],
"IndexArn": "arn:aws:dynamodb:us-west-2:522194210714:table/Music/index/
AlbumTitle-index",
"ItemCount": 0
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 5,
"ReadCapacityUnits": 10
},
"TableSizeBytes": 0,
"TableName": "Music",
"TableStatus": "UPDATING",
"TableId": "d04c7240-0e46-435d-b231-d54091fe1017",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "Artist"
},
{
"KeyType": "RANGE",
"AttributeName": "SongTitle"
}
],
"ItemCount": 0,
"CreationDateTime": 1558028402.69
}
}
To verify that DynamoDB has finished creating the AlbumTitle-index global secondary index, use the
describe-table command.
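aws dynamodb describe-table --table-name Music | grep IndexStatus
(The grep pipe simply filters the output down to the index status line.)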
This command returns the result shown below. The index is ready for use when the value of the
IndexStatus field returned is set to ACTIVE.
"IndexStatus": "ACTIVE",
Next, you can query the global secondary index. For details, see Step 7: Query the Global Secondary
Index (p. 70).
Step 7: Query the Global Secondary Index
In this step, you query the AlbumTitle-index global secondary index.
For more information about global secondary indexes, see Global Secondary Indexes (p. 496).
For AlbumTitle, enter Somewhat Famous, and then choose Start search.
AWS CLI
The following AWS CLI example queries a global secondary index AlbumTitle-index on the Music
table.
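A representative command; the key condition matches the album title used above:
aws dynamodb query \
    --table-name Music \
    --index-name AlbumTitle-index \
    --key-condition-expression "AlbumTitle = :name" \
    --expression-attribute-values '{":name": {"S": "Somewhat Famous"}}'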
{
"Count": 1,
"Items": [
{
"AlbumTitle": {
"S": "Somewhat Famous"
},
"Awards": {
"N": "1"
},
"SongTitle": {
"S": "Call Me Today"
},
"Artist": {
For more information about table operations in DynamoDB, see Working with Tables in
DynamoDB (p. 333).
AWS CLI
The following AWS CLI example deletes the Music table using delete-table.
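aws dynamodb delete-table --table-name Music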
Getting Started with DynamoDB SDK
Topics
• Java and DynamoDB (p. 74)
• JavaScript and DynamoDB (p. 93)
• Node.js and DynamoDB (p. 116)
• Tutorial: Microsoft .NET and DynamoDB (p. 133)
• PHP and DynamoDB (p. 154)
• Python and DynamoDB (p. 175)
• Ruby and DynamoDB (p. 193)
Java and DynamoDB
In this tutorial, you use the AWS SDK for Java to write simple programs to perform the following
DynamoDB operations:
• Create a table called Movies and load sample data in JSON format.
• Perform create, read, update, and delete operations on the table.
• Run simple queries.
The SDK for Java offers several programming models for different use cases. In this exercise, the Java
code uses the document model that provides a level of abstraction that makes it easier for you to work
with JSON documents.
As you work through this tutorial, you can refer to the AWS SDK for Java API Reference.
Tutorial Prerequisites
• Download and run DynamoDB on your computer. For more information, see Setting Up DynamoDB
Local (Downloadable Version) (p. 43).
DynamoDB (Downloadable Version) is also available as part of the AWS Toolkit for Eclipse. For more
information, see AWS Toolkit For Eclipse.
Note
You use the downloadable version of DynamoDB in this tutorial. For information about how to
run the same code against the DynamoDB web service, see the Summary (p. 93).
• Set up an AWS access key to use the AWS SDKs. For more information, see Setting Up DynamoDB (Web
Service) (p. 48).
• Set up the AWS SDK for Java:
• Install a Java development environment. If you are using the Eclipse IDE, install the AWS Toolkit for
Eclipse.
• Install the AWS SDK for Java.
• Set up your AWS security credentials for use with the SDK for Java.
For instructions, see Getting Started in the AWS SDK for Java Developer Guide.
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import java.util.Arrays;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;
// The class declaration, client setup, and createTable arguments below are
// reconstructed scaffolding (the class name and throughput values are assumptions);
// the original listing is abridged in this copy.
public class MoviesCreateTable {

    public static void main(String[] args) throws Exception {

        // Point the client at DynamoDB running on your computer
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
            .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
            .build();

        DynamoDB dynamoDB = new DynamoDB(client);

        String tableName = "Movies";

        try {
            System.out.println("Attempting to create table; please wait...");
            Table table = dynamoDB.createTable(tableName,
                Arrays.asList(new KeySchemaElement("year", KeyType.HASH), // Partition key
                    new KeySchemaElement("title", KeyType.RANGE)), // Sort key
                Arrays.asList(new AttributeDefinition("year", ScalarAttributeType.N),
                    new AttributeDefinition("title", ScalarAttributeType.S)),
                new ProvisionedThroughput(10L, 10L)); // Ignored by downloadable DynamoDB
            table.waitForActive();
            System.out.println("Success. Table status: " + table.getDescription().getTableStatus());
        }
        catch (Exception e) {
            System.err.println("Unable to create table: ");
            System.err.println(e.getMessage());
        }
    }
}
Note
• You set the endpoint to indicate that you are creating the table in DynamoDB on your
computer.
• In the createTable call, you specify table name, primary key attributes, and its data
types.
• The ProvisionedThroughput parameter is required, but the downloadable version of
DynamoDB ignores it. (Provisioned throughput is beyond the scope of this exercise.)
2. Compile and run the program.
To learn more about managing tables, see Working with Tables in DynamoDB (p. 333).
Topics
• Step 2.1: Download the Sample Data File (p. 77)
• Step 2.2: Load the Sample Data into the Movies Table (p. 77)
This scenario uses a sample data file that contains information about a few thousand movies from the
Internet Movie Database (IMDb). The movie data is in JSON format, as shown in the following example.
For each movie, there is a year, a title, and a JSON map named info.
[
{
"year" : ... ,
"title" : ... ,
"info" : { ... }
},
{
"year" : ...,
"title" : ...,
"info" : { ... }
},
...
• You use the year and title as the primary key attribute values for the Movies table.
• You store the rest of the info values in a single attribute called info. This program illustrates how
you can store JSON in a DynamoDB attribute.
{
"year" : 2013,
"title" : "Turn It Down, Or Else!",
"info" : {
"directors" : [
"Alice Smith",
"Bob Jones"
],
"release_date" : "2013-01-18T00:00:00Z",
"rating" : 6.2,
"genres" : [
"Comedy",
"Drama"
],
"image_url" : "http://ia.media-imdb.com/images/N/
O9ERWAU7FS797AJ7LU8HN09AMUP908RLlo5JF90EWR7LJKQ7@@._V1_SX400_.jpg",
"plot" : "A rock band plays their music at high volumes, annoying the neighbors.",
"rank" : 11,
"running_time_secs" : 5215,
"actors" : [
"David Matthewman",
"Ann Thomas",
"Jonathan G. Neff"
]
}
}
Step 2.2: Load the Sample Data into the Movies Table
After you download the sample data, you can run the following program to populate the Movies table.
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import java.io.File;
import java.util.Iterator;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
public class MoviesLoadData {

    public static void main(String[] args) throws Exception {

        // The class declaration, client setup, and JSON-parsing setup below are
        // reconstructed scaffolding; the original listing is abridged in this copy.
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
            .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
            .build();

        DynamoDB dynamoDB = new DynamoDB(client);

        Table table = dynamoDB.getTable("Movies");

        JsonParser parser = new JsonFactory().createParser(new File("moviedata.json"));

        JsonNode rootNode = new ObjectMapper().readTree(parser);
        Iterator<JsonNode> iter = rootNode.iterator();

        ObjectNode currentNode;

        while (iter.hasNext()) {
            currentNode = (ObjectNode) iter.next();

            // Each movie's year and title form the item's primary key
            int year = currentNode.path("year").asInt();
            String title = currentNode.path("title").asText();
try {
table.putItem(new Item().withPrimaryKey("year", year, "title",
title).withJSON("info",
currentNode.path("info").toString()));
System.out.println("PutItem succeeded: " + year + " " + title);
}
catch (Exception e) {
System.err.println("Unable to add movie: " + year + " " + title);
System.err.println(e.getMessage());
break;
}
}
parser.close();
    }
}
This program uses the open source Jackson library to process JSON. Jackson is included in the AWS
SDK for Java. You don't have to install it separately.
2. Compile and run the program.
To learn more about reading and writing data, see Working with Items in DynamoDB (p. 372).
Topics
• Step 3.1: Create a New Item (p. 79)
• Step 3.2: Read an Item (p. 80)
• Step 3.3: Update an Item (p. 82)
• Step 3.4: Increment an Atomic Counter (p. 83)
• Step 3.5: Update an Item (Conditionally) (p. 85)
• Step 3.6: Delete an Item (p. 86)
1. Copy and paste the following program into your Java development environment.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import java.util.HashMap;
import java.util.Map;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.PutItemOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
public class MoviesItemOps01 {

    public static void main(String[] args) throws Exception {

        // The class declaration, client setup, and sample values below are
        // reconstructed scaffolding; the original listing is abridged in this copy.
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
            .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
            .build();

        DynamoDB dynamoDB = new DynamoDB(client);

        Table table = dynamoDB.getTable("Movies");

        int year = 2015;
        String title = "The Big New Movie";

        // The info attribute stores a map of additional details about the movie
        final Map<String, Object> infoMap = new HashMap<String, Object>();
        infoMap.put("plot", "Nothing happens at all.");
        infoMap.put("rating", 0);

        try {
System.out.println("Adding a new item...");
PutItemOutcome outcome = table
.putItem(new Item().withPrimaryKey("year", year, "title",
title).withMap("info", infoMap));
}
catch (Exception e) {
System.err.println("Unable to add item: " + year + " " + title);
System.err.println(e.getMessage());
}
}
}
Note
The primary key is required. This code adds an item that has primary key (year, title) and
info attributes. The info attribute stores sample JSON that provides more information
about the movie.
2. Compile and run the program.
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
You can use the getItem method to read the item from the Movies table. You must specify the primary
key values, so you can read any item from Movies if you know its year and title.
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.GetItemSpec;
public class MoviesItemOps02 {
    public static void main(String[] args) throws Exception {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2")).build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("Movies");

        int year = 2015;
        String title = "The Big New Movie";

        GetItemSpec spec = new GetItemSpec().withPrimaryKey("year", year, "title", title);

        try {
System.out.println("Attempting to read the item...");
Item outcome = table.getItem(spec);
System.out.println("GetItem succeeded: " + outcome);
}
catch (Exception e) {
System.err.println("Unable to read item: " + year + " " + title);
System.err.println(e.getMessage());
}
}
}
2. Compile and run the program.

In this step, you update the item that you created earlier, changing it from the following:

{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
To the following:
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Everything happens all at once.",
rating: 5.5,
actors: ["Larry", "Moe", "Curly"]
}
}
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import java.util.Arrays;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.UpdateItemOutcome;
import com.amazonaws.services.dynamodbv2.document.spec.UpdateItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.ReturnValue;
public class MoviesItemOps03 {
    public static void main(String[] args) throws Exception {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2")).build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("Movies");

        int year = 2015;
        String title = "The Big New Movie";

        UpdateItemSpec updateItemSpec = new UpdateItemSpec()
            .withPrimaryKey("year", year, "title", title)
            .withUpdateExpression("set info.rating = :r, info.plot = :p, info.actors = :a")
            .withValueMap(new ValueMap().withNumber(":r", 5.5)
                .withString(":p", "Everything happens all at once.")
                .withList(":a", Arrays.asList("Larry", "Moe", "Curly")))
            .withReturnValues(ReturnValue.UPDATED_NEW);

        try {
System.out.println("Updating the item...");
UpdateItemOutcome outcome = table.updateItem(updateItemSpec);
System.out.println("UpdateItem succeeded:\n" +
outcome.getItem().toJSONPretty());
}
catch (Exception e) {
System.err.println("Unable to update item: " + year + " " + title);
System.err.println(e.getMessage());
}
}
}
Note
This program uses an UpdateExpression to describe all updates you want to perform on
the specified item.
The ReturnValues parameter instructs DynamoDB to return only the updated attributes
(UPDATED_NEW).
2. Compile and run the program.
The following program shows how to increment the rating for a movie. Each time you run it, the
program increments this attribute by one.
1. Copy and paste the following program into your Java development environment:
package com.amazonaws.codesamples.gsg;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.UpdateItemOutcome;
import com.amazonaws.services.dynamodbv2.document.spec.UpdateItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.ReturnValue;
public class MoviesItemOps04 {
    public static void main(String[] args) throws Exception {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2")).build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("Movies");

        int year = 2015;
        String title = "The Big New Movie";

        UpdateItemSpec updateItemSpec = new UpdateItemSpec()
            .withPrimaryKey("year", year, "title", title)
            .withUpdateExpression("set info.rating = info.rating + :val")
            .withValueMap(new ValueMap().withNumber(":val", 1))
            .withReturnValues(ReturnValue.UPDATED_NEW);

        try {
System.out.println("Incrementing an atomic counter...");
UpdateItemOutcome outcome = table.updateItem(updateItemSpec);
System.out.println("UpdateItem succeeded:\n" +
outcome.getItem().toJSONPretty());
}
catch (Exception e) {
System.err.println("Unable to update item: " + year + " " + title);
System.err.println(e.getMessage());
}
}
}
2. Compile and run the program.

In this case, the movie item is updated only if there are more than three actors.
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.PrimaryKey;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.UpdateItemOutcome;
import com.amazonaws.services.dynamodbv2.document.spec.UpdateItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.ReturnValue;
public class MoviesItemOps05 {
    public static void main(String[] args) throws Exception {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2")).build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("Movies");

        int year = 2015;
        String title = "The Big New Movie";

        UpdateItemSpec updateItemSpec = new UpdateItemSpec()
            .withPrimaryKey(new PrimaryKey("year", year, "title", title))
            .withUpdateExpression("remove info.actors[0]")
            .withConditionExpression("size(info.actors) > :num")
            .withValueMap(new ValueMap().withNumber(":num", 3))
            .withReturnValues(ReturnValue.UPDATED_NEW);

        try {
            System.out.println("Attempting a conditional update...");
            UpdateItemOutcome outcome = table.updateItem(updateItemSpec);
            System.out.println("UpdateItem succeeded:\n" + outcome.getItem().toJSONPretty());
        }
catch (Exception e) {
System.err.println("Unable to update item: " + year + " " + title);
System.err.println(e.getMessage());
}
}
}
2. Compile and run the program. The update fails because the movie has three actors in it, but the
condition is checking for greater than three actors.
3. Modify the program so that the ConditionExpression looks like this:

.withConditionExpression("size(info.actors) >= :num")

4. Compile and run the program again. Now, the update succeeds because you allow items with three or
more actors.
In the following example, you try to delete a specific movie item if its rating is 5 or less.
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.PrimaryKey;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.DeleteItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
public class MoviesItemOps06 {
    public static void main(String[] args) throws Exception {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2")).build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("Movies");

        int year = 2015;
        String title = "The Big New Movie";

        DeleteItemSpec deleteItemSpec = new DeleteItemSpec()
            .withPrimaryKey(new PrimaryKey("year", year, "title", title))
            .withConditionExpression("info.rating <= :val")
            .withValueMap(new ValueMap().withNumber(":val", 5.0));

        try {
System.out.println("Attempting a conditional delete...");
table.deleteItem(deleteItemSpec);
System.out.println("DeleteItem succeeded");
}
catch (Exception e) {
System.err.println("Unable to delete item: " + year + " " + title);
System.err.println(e.getMessage());
}
}
}
2. Compile and run the program. The delete fails because the rating for this particular movie is greater
than 5.
3. Modify the program to remove the condition in DeleteItemSpec.
4. Compile and run the program. Now, the delete succeeds because you removed the condition.
The primary key for the Movies table is composed of the following:
• year – The partition key. The attribute type is Number.
• title – The sort key. The attribute type is String.
To find all movies released during a year, you need to specify only the year. You can also provide the
title to retrieve a subset of movies based on some condition (on the sort key); for example, to find
movies released in 2014 that have a title starting with the letter "A".
In addition to query, there is also a scan method that can retrieve all of the table data.
To learn more about querying and scanning data, see Working with Queries (p. 455) and Working with
Scans (p. 473), respectively.
Topics
• Step 4.1: Query (p. 88)
• Step 4.2: Scan (p. 90)
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import java.util.HashMap;
import java.util.Iterator;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
public class MoviesQuery {
    public static void main(String[] args) throws Exception {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2")).build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("Movies");

        HashMap<String, String> nameMap = new HashMap<String, String>();
        nameMap.put("#yr", "year");
        HashMap<String, Object> valueMap = new HashMap<String, Object>();
        valueMap.put(":yyyy", 1985);

        QuerySpec querySpec = new QuerySpec().withKeyConditionExpression("#yr = :yyyy")
            .withNameMap(nameMap).withValueMap(valueMap);

        ItemCollection<QueryOutcome> items = null;
        Iterator<Item> iterator = null;
        Item item = null;

        try {
System.out.println("Movies from 1985");
items = table.query(querySpec);
iterator = items.iterator();
while (iterator.hasNext()) {
item = iterator.next();
System.out.println(item.getNumber("year") + ": " +
item.getString("title"));
}
}
catch (Exception e) {
System.err.println("Unable to query movies from 1985");
System.err.println(e.getMessage());
}
valueMap.put(":yyyy", 1992);
valueMap.put(":letter1", "A");
valueMap.put(":letter2", "L");
querySpec.withProjectionExpression("#yr, title, info.genres, info.actors[0]")
    .withKeyConditionExpression("#yr = :yyyy and title between :letter1 and :letter2")
    .withValueMap(valueMap);

try {
System.out.println("Movies from 1992 - titles A-L, with genres and lead
actor");
items = table.query(querySpec);
iterator = items.iterator();
while (iterator.hasNext()) {
item = iterator.next();
System.out.println(item.getNumber("year") + ": " +
item.getString("title") + " " + item.getMap("info"));
}
}
catch (Exception e) {
System.err.println("Unable to query movies from 1992:");
System.err.println(e.getMessage());
}
}
}
Note
First, you create the querySpec object, which describes the query parameters, and then you pass
the object to the query method.
2. Compile and run the program.
Note
The preceding program shows how to query a table by its primary key attributes. In DynamoDB,
you can optionally create one or more secondary indexes on a table, and query those indexes
in the same way that you query a table. Secondary indexes give your applications additional
flexibility by allowing queries on non-key attributes. For more information, see Improving Data
Access with Secondary Indexes (p. 493).
The following program scans the entire Movies table, which contains approximately 5,000 items. The
scan specifies the optional filter to retrieve only the movies from the 1950s (approximately 100 items),
discarding all the others.
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import java.util.Iterator;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.ScanOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.ScanSpec;
import com.amazonaws.services.dynamodbv2.document.utils.NameMap;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
public class MoviesScan {
    public static void main(String[] args) throws Exception {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2")).build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("Movies");

        ScanSpec scanSpec = new ScanSpec().withProjectionExpression("#yr, title, info.rating")
            .withFilterExpression("#yr between :start_yr and :end_yr")
            .withNameMap(new NameMap().with("#yr", "year"))
            .withValueMap(new ValueMap().withNumber(":start_yr", 1950).withNumber(":end_yr", 1959));

        try {
            ItemCollection<ScanOutcome> items = table.scan(scanSpec);

            Iterator<Item> iter = items.iterator();
            while (iter.hasNext()) {
                Item item = iter.next();
                System.out.println(item.toString());
            }
        }
catch (Exception e) {
System.err.println("Unable to scan the table:");
System.err.println(e.getMessage());
}
}
}
Note
You can also use the Scan operation with any secondary indexes that you created on the table.
For more information, see Improving Data Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into your Java development environment:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.gsg;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
public class MoviesDeleteTable {
    public static void main(String[] args) throws Exception {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2")).build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("Movies");

        try {
System.out.println("Attempting to delete table; please wait...");
table.delete();
table.waitForDelete();
System.out.print("Success.");
}
catch (Exception e) {
System.err.println("Unable to delete table: ");
System.err.println(e.getMessage());
}
}
}
Summary
In this tutorial, you created the Movies table in DynamoDB on your computer and performed basic
operations. The downloadable version of DynamoDB is useful during application development and
testing. However, when you're ready to run your application in a production environment, you must
modify your code so that it uses the Amazon DynamoDB web service.
In the tutorial programs, the client is configured to point at DynamoDB running on your computer:

import com.amazonaws.client.builder.AwsClientBuilder;
AmazonDynamoDB client =
AmazonDynamoDBClientBuilder.standard().withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
.build();
Now modify the client so that it accesses an AWS region instead of a specific endpoint. For example, to
access the us-west-2 region, you would do this:

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withRegion("us-west-2")
    .build();
Instead of using DynamoDB on your computer, the program now uses the Amazon DynamoDB web
service endpoint in US West (Oregon).
Amazon DynamoDB is available in several regions worldwide. For the complete list, see Regions and
Endpoints in the AWS General Reference. For more information about setting regions and endpoints in
your code, see AWS Region Selection in the AWS SDK for Java Developer Guide.
In this tutorial, you write simple programs to perform the following operations:
• Create a table called Movies and load sample data in JSON format.
• Perform create, read, update, and delete operations on the table.
• Run simple queries.
As you work through this tutorial, you can refer to the AWS SDK for JavaScript API Reference.
Tutorial Prerequisites
• Download and run DynamoDB on your computer. For more information, see Setting Up DynamoDB
Local (Downloadable Version) (p. 43).
Note
You use the downloadable version of DynamoDB in this tutorial. For information about how to
run the same code against the DynamoDB web service, see the Summary (p. 115).
• Set up an AWS access key to use AWS SDKs. For more information, see Setting Up DynamoDB (Web
Service) (p. 48).
• Set up the AWS SDK for JavaScript. To do this, add or modify the following script tag to your HTML
pages:
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.7.16.min.js"></script>
Note
The version of AWS SDK for JavaScript might have been updated. For the latest version, see
the AWS SDK for JavaScript API Reference.
• Enable cross-origin resource sharing (CORS) so your computer's browser can communicate with the
downloadable version of DynamoDB.
To enable CORS
1. Download the free ModHeader Chrome browser extension (or any other browser extension that
allows you to modify HTTP response headers).
2. Run the ModHeader Chrome browser extension, and add an HTTP response header with the name
set to "Access-Control-Allow-Origin" and a value of "null" or "*".
Important
This configuration is required only while running this tutorial program for JavaScript
on your computer. After you finish the tutorial, you should disable or remove this
configuration.
3. You can now run the JavaScript tutorial program files.
To create the Movies table, do the following:
1. Copy and paste the following program into a file named MoviesCreateTable.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var dynamodb = new AWS.DynamoDB();
function createMovies() {
var params = {
TableName : "Movies",
KeySchema: [
{ AttributeName: "year", KeyType: "HASH"},
{ AttributeName: "title", KeyType: "RANGE" }
],
AttributeDefinitions: [
{ AttributeName: "year", AttributeType: "N" },
{ AttributeName: "title", AttributeType: "S" }
],
ProvisionedThroughput: {
ReadCapacityUnits: 5,
WriteCapacityUnits: 5
}
    };

    dynamodb.createTable(params, function(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML = "Unable to create table: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            document.getElementById('textarea').innerHTML = "Created table: " + "\n" + JSON.stringify(data, undefined, 2);
        }
    });
}
</script>
</head>
<body>
<input id="createTableButton" type="button" value="Create Table"
onclick="createMovies();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
Note
• You set the endpoint to indicate that you are creating the table in DynamoDB on your
computer.
• In the createMovies function, you specify the table name, primary key attributes, and their data types.
• The ProvisionedThroughput parameter is required, but the downloadable version of
DynamoDB ignores it. (Provisioned throughput is beyond the scope of this tutorial.)
2. Open the MoviesCreateTable.html file in your browser.
3. Choose Create Table.
To learn more about managing tables, see Working with Tables in DynamoDB (p. 333).
Topics
• Step 2.1: Download the Sample Data File (p. 97)
• Step 2.2: Load the Sample Data into the Movies Table (p. 97)
This scenario uses a sample data file that contains information about a few thousand movies from the
Internet Movie Database (IMDb). The movie data is in JSON format, as shown in the following example.
For each movie, there is a year, a title, and a JSON map named info.
[
{
"year" : ... ,
"title" : ... ,
"info" : { ... }
},
{
"year" : ...,
"title" : ...,
"info" : { ... }
},
...
]
• The year and title are used as the primary key attribute values for the Movies table.
• The rest of the info values are stored in a single attribute called info. This program illustrates how
you can store JSON in a DynamoDB attribute.
{
"year" : 2013,
"title" : "Turn It Down, Or Else!",
"info" : {
"directors" : [
"Alice Smith",
"Bob Jones"
],
"release_date" : "2013-01-18T00:00:00Z",
"rating" : 6.2,
"genres" : [
"Comedy",
"Drama"
],
"image_url" : "http://ia.media-imdb.com/images/N/
O9ERWAU7FS797AJ7LU8HN09AMUP908RLlo5JF90EWR7LJKQ7@@._V1_SX400_.jpg",
"plot" : "A rock band plays their music at high volumes, annoying the neighbors.",
"rank" : 11,
"running_time_secs" : 5215,
"actors" : [
"David Matthewman",
"Ann Thomas",
"Jonathan G. Neff"
]
}
}
Step 2.2: Load the Sample Data into the Movies Table
After you download the sample data, you can run the following program to populate the Movies table.
1. Copy and paste the following program into a file named MoviesLoadData.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<html>
<head>
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.7.16.min.js"></script>
<script type="text/javascript">
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function processFile(evt) {
document.getElementById('textarea').innerHTML = "";
document.getElementById('textarea').innerHTML += "Importing movies into DynamoDB. Please wait..." + "\n";
var file = evt.target.files[0];
if (file) {
var r = new FileReader();
r.onload = function(e) {
var contents = e.target.result;
var allMovies = JSON.parse(contents);
allMovies.forEach(function (movie) {
document.getElementById('textarea').innerHTML += "Processing: " +
movie.title + "\n";
var params = {
TableName: "Movies",
Item: {
"year": movie.year,
"title": movie.title,
"info": movie.info
}
};
docClient.put(params, function (err, data) {
if (err) {
document.getElementById('textarea').innerHTML += "Unable to add movie: " + movie.title + "\n";
document.getElementById('textarea').innerHTML += "Error JSON: " + JSON.stringify(err) + "\n";
} else {
document.getElementById('textarea').innerHTML += "PutItem succeeded: " + movie.title + "\n";
textarea.scrollTop = textarea.scrollHeight;
}
});
});
};
r.readAsText(file);
} else {
alert("Could not read movie data file");
}
}
</script>
</head>
<body>
<input type="file" id="fileinput" accept='application/json'/>
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
<script>
document.getElementById('fileinput').addEventListener('change', processFile,
false);
</script>
</body>
</html>
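The loader above issues one put request per movie, which is simple but slow for a few thousand items. As a rough sketch of an alternative (not part of the tutorial files; it assumes the same docClient and the parsed allMovies array from the code above), the DocumentClient batchWrite method can write up to 25 items per request:

// Sketch: load movies in batches of 25 using batchWrite (BatchWriteItem).
function writeBatch(start) {
    if (start >= allMovies.length) return;

    var requests = allMovies.slice(start, start + 25).map(function(movie) {
        return { PutRequest: { Item: { "year": movie.year, "title": movie.title, "info": movie.info } } };
    });

    docClient.batchWrite({ RequestItems: { "Movies": requests } }, function(err, data) {
        if (err) {
            console.log("Batch failed: " + JSON.stringify(err, undefined, 2));
        } else {
            // A production loader would also retry anything in data.UnprocessedItems.
            writeBatch(start + 25);
        }
    });
}

writeBatch(0);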
To learn more about reading and writing data, see Working with Items in DynamoDB (p. 372).
Topics
• Step 3.1: Create a New Item (p. 99)
• Step 3.2: Read an Item (p. 100)
• Step 3.3: Update an Item (p. 102)
• Step 3.4: Increment an Atomic Counter (p. 104)
• Step 3.5: Update an Item (Conditionally) (p. 105)
• Step 3.6: Delete an Item (p. 107)
1. Copy and paste the following program into a file named MoviesItemOps01.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function createItem() {
var params = {
TableName :"Movies",
Item:{
"year": 2015,
"title": "The Big New Movie",
"info":{
"plot": "Nothing happens at all.",
"rating": 0
}
}
};
docClient.put(params, function(err, data) {
if (err) {
document.getElementById('textarea').innerHTML = "Unable to add item: " +
"\n" + JSON.stringify(err, undefined, 2);
} else {
document.getElementById('textarea').innerHTML = "PutItem succeeded: " +
"\n" + JSON.stringify(data, undefined, 2);
}
});
}
</script>
</head>
<body>
<input id="createItem" type="button" value="Create Item" onclick="createItem();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
Note
The primary key is required. This code adds an item that has a primary key (year,
title) and info attributes. The info attribute stores sample JSON that provides more
information about the movie.
2. Open the MoviesItemOps01.html file in your browser.
3. Choose Create Item.
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
You can use the get method to read the item from the Movies table. You must specify the primary key
values, so you can read any item from Movies if you know its year and title.
1. Copy and paste the following program into a file named MoviesItemOps02.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function readItem() {
var table = "Movies";
var year = 2015;
var title = "The Big New Movie";
var params = {
TableName: table,
Key:{
"year": year,
"title": title
}
};
docClient.get(params, function(err, data) {
if (err) {
document.getElementById('textarea').innerHTML = "Unable to read item: " +
"\n" + JSON.stringify(err, undefined, 2);
        } else {
            document.getElementById('textarea').innerHTML = "GetItem succeeded: " +
                "\n" + JSON.stringify(data, undefined, 2);
        }
    });
}
</script>
</head>
<body>
<input id="readItem" type="button" value="Read Item" onclick="readItem();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
2. Open the MoviesItemOps02.html file in your browser.
3. Choose Read Item.
In this step, you update the item that you created earlier, changing it from the following:

{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
To the following:
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Everything happens all at once.",
rating: 5.5,
actors: ["Larry", "Moe", "Curly"]
}
}
1. Copy and paste the following program into a file named MoviesItemOps03.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function updateItem() {
var table = "Movies";
var year = 2015;
var title = "The Big New Movie";
var params = {
TableName:table,
Key:{
"year": year,
"title": title
},
UpdateExpression: "set info.rating = :r, info.plot=:p, info.actors=:a",
ExpressionAttributeValues:{
":r":5.5,
":p":"Everything happens all at once.",
":a":["Larry", "Moe", "Curly"]
},
ReturnValues:"UPDATED_NEW"
};
    docClient.update(params, function(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML = "Unable to update item: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            document.getElementById('textarea').innerHTML = "UpdateItem succeeded: " + "\n" + JSON.stringify(data, undefined, 2);
        }
    });
}
</script>
</head>
<body>
<input id="updateItem" type="button" value="Update Item" onclick="updateItem();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
Note
This program uses UpdateExpression to describe all updates you want to perform on the
specified item.
The ReturnValues parameter instructs DynamoDB to return only the updated attributes
("UPDATED_NEW").
2. Open the MoviesItemOps03.html file in your browser.
3. Choose Update Item.
The following program shows how to increment the rating for a movie. Each time you run it, the
program increments this attribute by one.
1. Copy and paste the following program into a file named MoviesItemOps04.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function increaseRating() {
var table = "Movies";
var year = 2015;
var title = "The Big New Movie";
var params = {
TableName:table,
Key:{
"year": year,
"title": title
},
UpdateExpression: "set info.rating = info.rating + :val",
ExpressionAttributeValues:{
":val":1
},
ReturnValues:"UPDATED_NEW"
};
    docClient.update(params, function(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML = "Unable to update item: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            document.getElementById('textarea').innerHTML = "UpdateItem succeeded: " + "\n" + JSON.stringify(data, undefined, 2);
        }
    });
}
</script>
</head>
<body>
<input id="increaseRating" type="button" value="Increase Rating"
onclick="increaseRating();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
2. Open the MoviesItemOps04.html file in your browser.
3. Choose Increase Rating.
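The increment above assumes that info.rating already exists on the item; if the attribute is missing, the update fails with a validation error. A minimal sketch of a more defensive variant (not part of the tutorial files) uses DynamoDB's if_not_exists function to seed the counter on first use:

// Sketch: seed the counter at zero if info.rating does not exist yet, then increment.
var params = {
    TableName: "Movies",
    Key: { "year": 2015, "title": "The Big New Movie" },
    UpdateExpression: "set info.rating = if_not_exists(info.rating, :zero) + :val",
    ExpressionAttributeValues: { ":zero": 0, ":val": 1 },
    ReturnValues: "UPDATED_NEW"
};

docClient.update(params, function(err, data) {
    if (err) {
        console.log(JSON.stringify(err, undefined, 2));
    } else {
        console.log(JSON.stringify(data, undefined, 2));
    }
});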
In this case, the item is updated only if there are more than three actors in the movie.
1. Copy and paste the following program into a file named MoviesItemOps05.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function conditionalUpdate() {
var table = "Movies";
var year = 2015;
var title = "The Big New Movie";
    var params = {
        TableName: table,
        Key:{
            "year": year,
            "title": title
        },
        UpdateExpression: "remove info.actors[0]",
        ConditionExpression: "size(info.actors) > :num",
        ExpressionAttributeValues:{
            ":num": 3
        },
        ReturnValues: "UPDATED_NEW"
    };

    docClient.update(params, function(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML = "The conditional update failed: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            document.getElementById('textarea').innerHTML = "The conditional update succeeded: " + "\n" + JSON.stringify(data, undefined, 2);
        }
    });
}
</script>
</head>
<body>
<input id="conditionalUpdate" type="button" value="Conditional Update"
onclick="conditionalUpdate();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
2. Open the MoviesItemOps05.html file in your browser.
3. Choose Conditional Update. The update fails because the movie has three actors in it, but the
condition is checking for greater than three actors.
4. Modify the program so that the ConditionExpression looks like this:

ConditionExpression: "size(info.actors) >= :num",

5. Run the program again. Now, the update succeeds because you allow items with three or more actors.
In the following example, you try to delete a specific movie item if its rating is 5 or less.
1. Copy and paste the following program into a file named MoviesItemOps06.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function conditionalDelete() {
var table = "Movies";
var year = 2015;
var title = "The Big New Movie";
var params = {
TableName:table,
Key:{
"year":year,
"title":title
},
ConditionExpression:"info.rating <= :val",
ExpressionAttributeValues: {
":val": 5.0
}
};
    docClient.delete(params, function(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML = "The conditional delete failed: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            document.getElementById('textarea').innerHTML = "The conditional delete succeeded: " + "\n" + JSON.stringify(data, undefined, 2);
        }
    });
}
</script>
</head>
<body>
<input id="conditionalDelete" type="button" value="Conditional Delete"
onclick="conditionalDelete();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
2. Open the MoviesItemOps06.html file in your browser.
3. Choose Conditional Delete. The delete fails because the rating for this particular movie is greater than 5.
4. Modify the program to remove the condition from params.
var params = {
TableName:table,
Key:{
"title":title,
"year":year
}
};
5. Run the program again. The delete succeeds because you removed the condition.
The primary key for the Movies table is composed of the following:
• year – The partition key. The attribute type is Number.
• title – The sort key. The attribute type is String.
To find all movies released during a year, you need to specify only the year. You can also provide the
title to retrieve a subset of movies based on some condition (on the sort key); for example, to find
movies released in 2014 that have a title starting with the letter "A" (a sketch of such a query appears
at the end of Step 4.2).
In addition to query, there is also a scan method that can retrieve all the table data.
To learn more about querying and scanning data, see Working with Queries (p. 455) and Working with
Scans (p. 473), respectively.
Topics
• Step 4.1: Query - All Movies Released in a Year (p. 109)
• Step 4.2: Query - All Movies Released in a Year with Certain Titles (p. 111)
• Step 4.3: Scan (p. 112)
1. Copy and paste the following program into a file named MoviesQuery01.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function queryData() {
document.getElementById('textarea').innerHTML += "Querying for movies from 1985.";
var params = {
TableName : "Movies",
KeyConditionExpression: "#yr = :yyyy",
ExpressionAttributeNames:{
"#yr": "year"
},
ExpressionAttributeValues: {
":yyyy":1985
}
};
    docClient.query(params, function(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML += "Unable to query: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            data.Items.forEach(function(item) {
                document.getElementById('textarea').innerHTML += item.year + ": " + item.title + "\n";
            });
        }
    });
}
</script>
</head>
<body>
<input id="queryData" type="button" value="Query" onclick="queryData();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
Note
ExpressionAttributeNames provides name substitution. This is used because year
is a reserved word in DynamoDB—you can't use it directly in any expression, including
KeyConditionExpression. For this reason, you use the expression attribute name #yr.
ExpressionAttributeValues provides value substitution. This is used because you can't
use literals in any expression, including KeyConditionExpression. For this reason, you
use the expression attribute value :yyyy.
2. Open the MoviesQuery01.html file in your browser.
3. Choose Query.
Note
The preceding program shows how to query a table by its primary key attributes. In DynamoDB,
you can optionally create one or more secondary indexes on a table, and query those indexes
in the same way that you query a table. Secondary indexes give your applications additional
flexibility by allowing queries on non-key attributes. For more information, see Improving Data
Access with Secondary Indexes (p. 493).
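For example, if the Movies table had a global secondary index with title as its partition key (a hypothetical index named TitleIndex; this tutorial does not create one), the same query method could target it by adding the IndexName parameter. A minimal sketch:

// Sketch: querying a hypothetical "TitleIndex" global secondary index.
var params = {
    TableName: "Movies",
    IndexName: "TitleIndex",
    KeyConditionExpression: "title = :t",
    ExpressionAttributeValues: { ":t": "The Big New Movie" }
};

docClient.query(params, function(err, data) {
    if (err) {
        console.log(JSON.stringify(err, undefined, 2));
    } else {
        data.Items.forEach(function(item) {
            console.log(item.year + ": " + item.title);
        });
    }
});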
1. Copy and paste the following program into a file named MoviesQuery02.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function queryData() {
document.getElementById('textarea').innerHTML += "Querying for movies from 1992 - titles A-L, with genres and lead actor.";
var params = {
TableName : "Movies",
ProjectionExpression:"#yr, title, info.genres, info.actors[0]",
KeyConditionExpression: "#yr = :yyyy and title between :letter1 and :letter2",
ExpressionAttributeNames:{
"#yr": "year"
},
ExpressionAttributeValues: {
":yyyy":1992,
":letter1": "A",
":letter2": "L"
}
};
    docClient.query(params, function(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML += "Unable to query: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            data.Items.forEach(function(item) {
                document.getElementById('textarea').innerHTML += item.year + ": " + item.title + " ... " + JSON.stringify(item.info) + "\n";
            });
        }
    });
}
</script>
</head>
<body>
<input id="queryData" type="button" value="Query" onclick="queryData();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
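The introduction to this section mentioned finding movies released in 2014 whose titles start with the letter "A". A minimal sketch of that key condition, using the same docClient setup as the programs above, replaces the between operator with begins_with:

// Sketch: movies from 2014 whose titles begin with "A".
var params = {
    TableName: "Movies",
    KeyConditionExpression: "#yr = :yyyy and begins_with(title, :letter)",
    ExpressionAttributeNames: { "#yr": "year" },
    ExpressionAttributeValues: { ":yyyy": 2014, ":letter": "A" }
};

docClient.query(params, function(err, data) {
    if (err) {
        console.log(JSON.stringify(err, undefined, 2));
    } else {
        data.Items.forEach(function(item) {
            console.log(item.year + ": " + item.title);
        });
    }
});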
The following program scans the entire Movies table, which contains approximately 5,000 items. The
scan specifies the optional filter to retrieve only the movies from the 1950s (approximately 100 items),
discarding all the others.
1. Copy and paste the following program into a file named MoviesScan.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var docClient = new AWS.DynamoDB.DocumentClient();
function scanData() {
document.getElementById('textarea').innerHTML += "Scanning Movies table." + "\n";
var params = {
TableName: "Movies",
ProjectionExpression: "#yr, title, info.rating",
FilterExpression: "#yr between :start_yr and :end_yr",
ExpressionAttributeNames: {
"#yr": "year",
},
ExpressionAttributeValues: {
":start_yr": 1950,
":end_yr": 1959
}
};
    docClient.scan(params, onScan);

    function onScan(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML += "Unable to scan the table: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            data.Items.forEach(function(movie) {
                document.getElementById('textarea').innerHTML += movie.year + ": " + movie.title + " - rating: " + movie.info.rating + "\n";
            });

            // A single Scan call returns at most 1 MB of data, so keep scanning
            // until DynamoDB stops returning a LastEvaluatedKey.
            if (typeof data.LastEvaluatedKey != "undefined") {
                params.ExclusiveStartKey = data.LastEvaluatedKey;
                docClient.scan(params, onScan);
            }
        }
    }
}
</script>
</head>
<body>
<input id="scanData" type="button" value="Scan" onclick="scanData();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
Note
You can also use the Scan operation with any secondary indexes that you create on the table.
For more information, see Improving Data Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesDeleteTable.html:
<!--
Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This file is licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License. A copy of
the License is located at
http://aws.amazon.com/apache2.0/
<script>
AWS.config.update({
region: "us-west-2",
endpoint: 'http://localhost:8000',
// The accessKeyId and secretAccessKey defaults can be used with the downloadable
// version of DynamoDB. For security reasons, do not store AWS credentials in your
// files; use Amazon Cognito instead.
accessKeyId: "fakeMyKeyId",
secretAccessKey: "fakeSecretAccessKey"
});

var dynamodb = new AWS.DynamoDB();
function deleteMovies() {
var params = {
TableName : "Movies"
};
    dynamodb.deleteTable(params, function(err, data) {
        if (err) {
            document.getElementById('textarea').innerHTML = "Unable to delete table: " + "\n" + JSON.stringify(err, undefined, 2);
        } else {
            document.getElementById('textarea').innerHTML = "Table deleted.";
        }
    });
}
</script>
</head>
<body>
<input id="deleteTableButton" type="button" value="Delete Table"
onclick="deleteMovies();" />
<br><br>
<textarea readonly id= "textarea" style="width:400px; height:800px"></textarea>
</body>
</html>
Summary
In this tutorial, you created the Movies table in DynamoDB on your computer and performed basic
operations. The downloadable version of DynamoDB is useful during application development and
testing. However, when you're ready to run your application in a production environment, you must
modify your code so that it uses the Amazon DynamoDB web service.
In the tutorial programs, remove the endpoint parameter from AWS.config.update, so that the configuration looks like the following:

AWS.config.update({region: "aws-region"});
For example, if you want to use the us-west-2 region, set the following region:
AWS.config.update({region: "us-west-2"});
The program now uses the Amazon DynamoDB web service in the US West (Oregon) region.
DynamoDB is available in several regions worldwide. For the complete list, see Regions and Endpoints in
the AWS General Reference. For more information about setting regions and endpoints in your code, see
Setting the Region in the AWS SDK for JavaScript Getting Started Guide.
For more information, see Configure AWS Credentials in Your Files Using Amazon Cognito (p. 843).
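As a rough sketch of that approach (the identity pool ID below is a placeholder, not a real pool), the browser programs would replace the fake access keys with Amazon Cognito credentials:

// Sketch: unauthenticated credentials from a hypothetical Cognito identity pool.
AWS.config.update({
    region: "us-west-2",
    credentials: new AWS.CognitoIdentityCredentials({
        IdentityPoolId: "us-west-2:01234567-89ab-cdef-0123-456789abcdef"
    })
});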
In this tutorial, you write simple programs to perform the following operations:
• Create a table called Movies and load sample data in JSON format.
• Perform create, read, update, and delete operations on the table.
• Run simple queries.
As you work through this tutorial, you can refer to the AWS SDK for JavaScript API Reference.
Tutorial Prerequisites
• Download and run DynamoDB on your computer. For more information, see Setting Up DynamoDB
Local (Downloadable Version) (p. 43).
Note
You use the downloadable version of DynamoDB in this tutorial. For information about how to
run the same code against the DynamoDB web service, see the Summary (p. 132).
• Set up an AWS access key to use the AWS SDKs. For more information, see Setting Up DynamoDB (Web
Service) (p. 48).
• Set up the AWS SDK for JavaScript:
• Go to http://nodejs.org and install Node.js.
• Go to https://aws.amazon.com/sdk-for-node-js and install the AWS SDK for JavaScript.
For more information, see the AWS SDK for JavaScript Getting Started Guide.
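If you have Node.js installed, the SDK can also typically be installed from the command line with npm (shown here as a convenience; the download page above is the authoritative source):

npm install aws-sdk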
1. Copy and paste the following program into a file named MoviesCreateTable.js.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var params = {
TableName : "Movies",
KeySchema: [
{ AttributeName: "year", KeyType: "HASH"}, //Partition key
{ AttributeName: "title", KeyType: "RANGE" } //Sort key
],
AttributeDefinitions: [
{ AttributeName: "year", AttributeType: "N" },
{ AttributeName: "title", AttributeType: "S" }
],
ProvisionedThroughput: {
ReadCapacityUnits: 10,
WriteCapacityUnits: 10
}
};

var dynamodb = new AWS.DynamoDB();

dynamodb.createTable(params, function(err, data) {
    if (err) {
        console.error("Unable to create table. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("Created table. Table description JSON:", JSON.stringify(data, null, 2));
    }
});
Note
• You set the endpoint to indicate that you are creating the table in DynamoDB on your
computer.
• In the createTable call, you specify the table name, primary key attributes, and their data types.
• The ProvisionedThroughput parameter is required, but the downloadable version of
DynamoDB ignores it. (Provisioned throughput is beyond the scope of this exercise.)
2. To run the program, type the following command:
node MoviesCreateTable.js
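To confirm that the table now exists, a short check against the same local endpoint can list the table names (a sketch, not part of the tutorial files):

// Sketch: verify that the Movies table now exists on the local endpoint.
var AWS = require("aws-sdk");
AWS.config.update({ region: "us-west-2", endpoint: "http://localhost:8000" });

var dynamodb = new AWS.DynamoDB();
dynamodb.listTables({}, function(err, data) {
    if (err) {
        console.error(JSON.stringify(err, null, 2));
    } else {
        console.log("Tables:", data.TableNames);   // should include "Movies"
    }
});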
To learn more about managing tables, see Working with Tables in DynamoDB (p. 333).
Topics
• Step 2.1: Download the Sample Data File (p. 118)
• Step 2.2: Load the Sample Data into the Movies Table (p. 119)
We use a sample data file that contains information about a few thousand movies from the Internet
Movie Database (IMDb). The movie data is in JSON format, as shown in the following example. For each
movie, there is a year, a title, and a JSON map named info.
[
{
"year" : ... ,
"title" : ... ,
"info" : { ... }
},
{
"year" : ...,
"title" : ...,
"info" : { ... }
},
...
]
• The year and title are used as the primary key attribute values for the Movies table.
• The rest of the info values are stored in a single attribute called info. This program illustrates how
you can store JSON in a DynamoDB attribute.
{
"year" : 2013,
"title" : "Turn It Down, Or Else!",
"info" : {
"directors" : [
"Alice Smith",
"Bob Jones"
],
"release_date" : "2013-01-18T00:00:00Z",
"rating" : 6.2,
"genres" : [
"Comedy",
"Drama"
],
"image_url" : "http://ia.media-imdb.com/images/N/
O9ERWAU7FS797AJ7LU8HN09AMUP908RLlo5JF90EWR7LJKQ7@@._V1_SX400_.jpg",
"plot" : "A rock band plays their music at high volumes, annoying the neighbors.",
"rank" : 11,
"running_time_secs" : 5215,
"actors" : [
"David Matthewman",
"Ann Thomas",
"Jonathan G. Neff"
]
}
}
Step 2.2: Load the Sample Data into the Movies Table
After you download the sample data, you can run the following program to populate the Movies table.
1. Copy and paste the following program into a file named MoviesLoadData.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
var fs = require('fs');
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});

var docClient = new AWS.DynamoDB.DocumentClient();

console.log("Importing movies into DynamoDB. Please wait.");

// moviedata.json is the sample data file that you downloaded in Step 2.1.
var allMovies = JSON.parse(fs.readFileSync('moviedata.json', 'utf8'));
allMovies.forEach(function(movie) {
    var params = {
        TableName: "Movies",
        Item: { "year": movie.year, "title": movie.title, "info": movie.info }
    };

    docClient.put(params, function(err, data) {
        if (err) {
            console.error("Unable to add movie", movie.title, ". Error JSON:", JSON.stringify(err, null, 2));
        } else {
            console.log("PutItem succeeded:", movie.title);
        }
    });
});

2. To run the program, type the following command:

node MoviesLoadData.js
To learn more about reading and writing data, see Working with Items in DynamoDB (p. 372).
Topics
• Step 3.1: Create a New Item (p. 120)
• Step 3.2: Read an Item (p. 121)
• Step 3.3: Update an Item (p. 122)
• Step 3.4: Increment an Atomic Counter (p. 124)
• Step 3.5: Update an Item (Conditionally) (p. 125)
• Step 3.6: Delete an Item (p. 126)
1. Copy and paste the following program into a file named MoviesItemOps01.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var docClient = new AWS.DynamoDB.DocumentClient();

var table = "Movies";
var year = 2015;
var title = "The Big New Movie";

var params = {
TableName:table,
Item:{
"year": year,
"title": title,
"info":{
"plot": "Nothing happens at all.",
"rating": 0
}
}
};
console.log("Adding a new item...");
docClient.put(params, function(err, data) {
    if (err) {
        console.error("Unable to add item. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("Added item:", JSON.stringify(data, null, 2));
    }
});
Note
The primary key is required. This code adds an item that has a primary key (year,
title) and info attributes. The info attribute stores sample JSON that provides more
information about the movie.
2. To run the program, type the following command:
node MoviesItemOps01.js
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
You can use the get method to read the item from the Movies table. You must specify the primary key
values, so you can read any item from Movies if you know its year and title.
1. Copy and paste the following program into a file named MoviesItemOps02.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var docClient = new AWS.DynamoDB.DocumentClient();

var table = "Movies";
var year = 2015;
var title = "The Big New Movie";

var params = {
TableName: table,
Key:{
"year": year,
"title": title
}
};

docClient.get(params, function(err, data) {
    if (err) {
        console.error("Unable to read item. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("GetItem succeeded:", JSON.stringify(data, null, 2));
    }
});

2. To run the program, type the following command:

node MoviesItemOps02.js
In this step, you update the item that you created earlier, changing it from the following:

{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
To the following:
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Everything happens all at once.",
rating: 5.5,
actors: ["Larry", "Moe", "Curly"]
}
}
1. Copy and paste the following program into a file named MoviesItemOps03.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var docClient = new AWS.DynamoDB.DocumentClient();

var table = "Movies";
var year = 2015;
var title = "The Big New Movie";

var params = {
TableName:table,
Key:{
"year": year,
"title": title
},
UpdateExpression: "set info.rating = :r, info.plot=:p, info.actors=:a",
ExpressionAttributeValues:{
":r":5.5,
":p":"Everything happens all at once.",
":a":["Larry", "Moe", "Curly"]
},
ReturnValues:"UPDATED_NEW"
};

console.log("Updating the item...");
docClient.update(params, function(err, data) {
    if (err) {
        console.error("Unable to update item. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("UpdateItem succeeded:", JSON.stringify(data, null, 2));
    }
});
Note
This program uses UpdateExpression to describe all updates you want to perform on the
specified item.
The ReturnValues parameter instructs DynamoDB to return only the updated attributes
("UPDATED_NEW").
2. To run the program, type the following command:
node MoviesItemOps03.js
The following program shows how to increment the rating for a movie. Each time you run it, the
program increments this attribute by one.
1. Copy and paste the following program into a file named MoviesItemOps04.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var docClient = new AWS.DynamoDB.DocumentClient();

var table = "Movies";
var year = 2015;
var title = "The Big New Movie";

var params = {
TableName:table,
Key:{
"year": year,
"title": title
},
UpdateExpression: "set info.rating = info.rating + :val",
ExpressionAttributeValues:{
":val": 1
},
ReturnValues:"UPDATED_NEW"
};

console.log("Incrementing an atomic counter...");
docClient.update(params, function(err, data) {
    if (err) {
        console.error("Unable to update item. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("UpdateItem succeeded:", JSON.stringify(data, null, 2));
    }
});

2. To run the program, type the following command:

node MoviesItemOps04.js
In this case, the item is updated only if there are more than three actors in the movie.
1. Copy and paste the following program into a file named MoviesItemOps05.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var docClient = new AWS.DynamoDB.DocumentClient();

var table = "Movies";
var year = 2015;
var title = "The Big New Movie";

var params = {
TableName:table,
Key:{
"year": year,
"title": title
},
UpdateExpression: "remove info.actors[0]",
ConditionExpression: "size(info.actors) > :num",
ExpressionAttributeValues:{
":num": 3
},
ReturnValues:"UPDATED_NEW"
};

console.log("Attempting a conditional update...");
docClient.update(params, function(err, data) {
    if (err) {
        console.error("Unable to update item. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("UpdateItem succeeded:", JSON.stringify(data, null, 2));
    }
});

2. To run the program, type the following command:

node MoviesItemOps05.js
The update fails because the movie has three actors in it, but the condition is checking for greater than
three actors.
3. Modify the program so that the ConditionExpression looks like this:

ConditionExpression: "size(info.actors) >= :num",

4. Run the program again. Now, the update succeeds because you allow items with three or more actors.
In the following example, you try to delete a specific movie item if its rating is 5 or less.
1. Copy and paste the following program into a file named MoviesItemOps06.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var docClient = new AWS.DynamoDB.DocumentClient();

var table = "Movies";
var year = 2015;
var title = "The Big New Movie";

var params = {
TableName:table,
Key:{
"year": year,
"title": title
},
ConditionExpression:"info.rating <= :val",
ExpressionAttributeValues: {
":val": 5.0
}
};
console.log("Attempting a conditional delete...");
docClient.delete(params, function(err, data) {
    if (err) {
        console.error("Unable to delete item. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("DeleteItem succeeded:", JSON.stringify(data, null, 2));
    }
});
2. To run the program, type the following command:
node MoviesItemOps06.js
The program should fail with the following message: The conditional request failed.
This is because the rating for this particular movie is greater than 5.
3. Modify the program to remove the condition from params.
var params = {
TableName:table,
Key:{
"title":title,
"year":year
}
};
4. Run the program again. Now, the delete succeeds because you removed the condition.
The primary key for the Movies table is composed of the following:
• year – The partition key. The attribute type is number.
• title – The sort key. The attribute type is string.
To find all movies released during a year, you need to specify only the year. You can also provide the
title to retrieve a subset of movies based on some condition (on the sort key); for example, to find
movies released in 2014 that have a title starting with the letter "A".
In addition to query, there is also a scan method that can retrieve all the table data.
To learn more about querying and scanning data, see Working with Queries (p. 455) and Working with
Scans (p. 473), respectively.
Topics
• Step 4.1: Query - All Movies Released in a Year (p. 128)
• Step 4.2: Query - All Movies Released in a Year with Certain Titles (p. 129)
• Step 4.3: Scan (p. 130)
1. Copy and paste the following program into a file named MoviesQuery01.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var docClient = new AWS.DynamoDB.DocumentClient();
console.log("Querying for movies from 1985.");
var params = {
TableName : "Movies",
KeyConditionExpression: "#yr = :yyyy",
ExpressionAttributeNames:{
"#yr": "year"
},
ExpressionAttributeValues: {
":yyyy": 1985
}
};
docClient.query(params, function(err, data) {
    if (err) {
        console.error("Unable to query. Error:", JSON.stringify(err, null, 2));
    } else {
        console.log("Query succeeded.");
        data.Items.forEach(function(item) {
            console.log(" -", item.year + ": " + item.title);
        });
    }
});
Note
ExpressionAttributeNames provides name substitution. We use this because year
is a reserved word in DynamoDB—you cannot use it directly in any expression, including
KeyConditionExpression. We use the expression attribute name #yr to address this.
ExpressionAttributeValues provides value substitution. We use this because you
cannot use literals in any expression, including KeyConditionExpression. We use the
expression attribute value :yyyy to address this.
2. To run the program, type the following command:
node MoviesQuery01.js
Note
The preceding program shows how to query a table by its primary key attributes. In DynamoDB,
you can optionally create one or more secondary indexes on a table, and query those indexes
in the same way that you query a table. Secondary indexes give your applications additional
flexibility by allowing queries on non-key attributes. For more information, see Improving Data
Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesQuery02.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
console.log("Querying for movies from 1992 - titles A-L, with genres and lead actor");
var params = {
TableName : "Movies",
ProjectionExpression:"#yr, title, info.genres, info.actors[0]",
KeyConditionExpression: "#yr = :yyyy and title between :letter1 and :letter2",
ExpressionAttributeNames:{
"#yr": "year"
},
ExpressionAttributeValues: {
":yyyy": 1992,
":letter1": "A",
":letter2": "L"
}
};
docClient.query(params, function(err, data) {
    if (err) {
        console.error("Unable to query. Error:", JSON.stringify(err, null, 2));
    } else {
        console.log("Query succeeded.");
        data.Items.forEach(function(item) {
            console.log(" -", item.year + ": " + item.title);
        });
    }
});
2. To run the program, type the following command:
node MoviesQuery02.js
The following program scans the entire Movies table, which contains approximately 5,000 items. The
scan specifies the optional filter to retrieve only the movies from the 1950s (approximately 100 items),
and discard all of the others.
1. Copy and paste the following program into a file named MoviesScan.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var docClient = new AWS.DynamoDB.DocumentClient();
var params = {
    TableName: "Movies",
    ProjectionExpression: "#yr, title, info.rating",
    FilterExpression: "#yr between :start_yr and :end_yr",
    ExpressionAttributeNames: {
        "#yr": "year"
    },
    ExpressionAttributeValues: {
        ":start_yr": 1950,
        ":end_yr": 1959
    }
};
console.log("Scanning Movies table.");
docClient.scan(params, function onScan(err, data) {
    if (err) {
        console.error("Unable to scan the table. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        data.Items.forEach(function(movie) {
            console.log(movie.year + ": " + movie.title + " - rating: " + movie.info.rating);
        });
        // Scan retrieves at most 1 MB of data per call; continue if more data remains.
        if (typeof data.LastEvaluatedKey != "undefined") {
            params.ExclusiveStartKey = data.LastEvaluatedKey;
            docClient.scan(params, onScan);
        }
    }
});
2. To run the program, type the following command:
node MoviesScan.js
Note
You can also use the Scan operation with any secondary indexes that you have created on the
table. For more information, see Improving Data Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesDeleteTable.js:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
var AWS = require("aws-sdk");
AWS.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
});
var dynamodb = new AWS.DynamoDB();
var params = {
TableName : "Movies"
};
dynamodb.deleteTable(params, function(err, data) {
    if (err) {
        console.error("Unable to delete table. Error JSON:", JSON.stringify(err, null, 2));
    } else {
        console.log("Deleted table.");
    }
});
2. To run the program, type the following command:
node MoviesDeleteTable.js
Summary
In this tutorial, you created the Movies table in DynamoDB on your computer and performed basic
operations. The downloadable version of DynamoDB is useful during application development and
testing. However, when you're ready to run your application in a production environment, you must
modify your code so that it uses the Amazon DynamoDB web service. To do this, you set the endpoint
to a DynamoDB Regional endpoint, replacing aws-region with the code for an actual AWS Region:
AWS.config.update({endpoint: "https://dynamodb.aws-region.amazonaws.com"});
For example, if you want to use the us-west-2 Region, you set the following endpoint:
AWS.config.update({endpoint: "https://dynamodb.us-west-2.amazonaws.com"});
Instead of using DynamoDB on your computer, the program now uses the DynamoDB web service
endpoint in US West (Oregon).
Amazon DynamoDB is available in several Regions worldwide. For the complete list, see Regions and
Endpoints in the AWS General Reference. For more information about setting Regions and endpoints in
your code, see Setting the Region in the AWS SDK for JavaScript Developer Guide.
In this tutorial, you use the AWS SDK for .NET to write simple programs to perform the following
Amazon DynamoDB operations:
• Create a table named Movies using a utility program written in C#, and load sample data in JSON
format.
• Perform create, read, update, and delete operations on the table.
• Run simple queries.
The DynamoDB module of the AWS SDK for .NET offers several programming models for different use
cases. In this exercise, the C# code uses the document model, which provides a level of abstraction that is
often convenient. It also uses the low-level API, which handles nested attributes more effectively.
For information about the document model API, see .NET: Document Model (p. 273). For information
about the low-level API, see Working with Tables: .NET (p. 365).
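For orientation, the following sketch (not taken from the sample solution) shows the two models side
by side; it assumes an AmazonDynamoDBClient named client and the Amazon.DynamoDBv2.DocumentModel and
Amazon.DynamoDBv2.Model namespaces:
// Document model: work through a Table handle and Document objects.
Table movies = Table.LoadTable( client, "Movies" );
// Low-level model: build explicit request objects and await the typed response.
DescribeTableRequest request = new DescribeTableRequest { TableName = "Movies" };
DescribeTableResponse response = await client.DescribeTableAsync( request );
Console.WriteLine( "Movies table status: {0}", response.Table.TableStatus );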
Topics
• .NET and DynamoDB Tutorial Prerequisites (p. 133)
• Step 1: Create a DynamoDB Client (p. 135)
• Step 2: Create a Table Using the Low-Level API (p. 136)
• Step 3: Load Sample Data into the Movies Table (p. 139)
• Step 4: Add a New Movie to the Movies Table (p. 142)
• Step 5: Read and Display a Record from the Movies Table (p. 143)
• Step 6: Update the New Movie Record (p. 145)
• Step 7: Try to Conditionally Delete the Movie (p. 148)
• Step 8: Query the Movies Table (p. 149)
• Step 9: Scan the Movies Table (p. 152)
• Step 10: Delete the Movies Table (p. 153)
Before you begin, follow these steps to ensure that you have all the prerequisites needed to complete
the tutorial:
• Use a computer that is running a recent version of Windows and a current version of Microsoft Visual
Studio. If you don't have Visual Studio installed, you can download a free copy of the Community
edition from the Microsoft Visual Studio website.
• Download and run DynamoDB (Downloadable Version). For more information, see Setting Up
DynamoDB Local (Downloadable Version) (p. 43).
Note
You use the downloadable version of DynamoDB in this tutorial. For more information about
how to run the same code against the DynamoDB service in the cloud, see Step 1: Create a
DynamoDB Client (p. 135).
• Set up an AWS access key to use the AWS SDKs. For more information, see Setting Up DynamoDB (Web
Service) (p. 48).
• Set up a security profile for DynamoDB in Visual Studio. For step-by-step instructions, see .NET Code
Examples (p. 330).
• Open the getting started demo solution that is used in this tutorial in Visual Studio:
1. On the Build menu, choose Build Solution (or press Ctrl+Shift+B). The solution should build
successfully.
2. Make sure that the Solution Explorer pane is being displayed and pinned in Visual Studio. If it
isn't, you can find it in the View menu, or by pressing Ctrl+Alt+L.
3. In Solution Explorer, open the 00_Main.cs file. This is the file that controls the execution of the
demo program that is used in this tutorial.
Note
This tutorial shows how to use asynchronous methods rather than synchronous methods
because .NET Core supports only asynchronous methods and also because the asynchronous
model is generally preferable when performance is crucial. For more information, see AWS
Asynchronous APIs for .NET.
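As a minimal illustration of that pattern (a sketch, not code from the sample solution), an SDK call
returns a Task that the caller awaits instead of blocking on a synchronous result:
// Assumes an AmazonDynamoDBClient named client; ListTablesAsync returns a Task<ListTablesResponse>.
ListTablesResponse response = await client.ListTablesAsync( );
Console.WriteLine( "Found {0} tables.", response.TableNames.Count );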
To install the NuGet package for the DynamoDB module of the AWS SDK for .NET version 3 in your own
programs, open the NuGet Package Manager Console from the Tools menu in Visual Studio. Then enter
the following command at the PM> prompt:
Install-Package AWSSDK.DynamoDBv2
In a similar way, you can use the NuGet Package Manager Console to load the Json.NET library into
your own projects in Visual Studio. At the PM> prompt, enter the following command:
Install-Package Newtonsoft.Json
Next Step
Step 1: Create a DynamoDB Client (p. 135)
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Net.Sockets;
using Amazon.DynamoDBv2;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*-----------------------------------------------------------------------------------
* If you are creating a client for the DynamoDB service, make sure your credentials
* are set up first, as explained in:
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUp.DynamoWebService.html,
*
* If you are creating a client for DynamoDBLocal (for testing purposes),
* DynamoDB-Local should be started first. For most simple testing, you can keep
* data in memory only, without writing anything to disk. To do this, use the
* following command line:
*
* java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -inMemory
*
* For information about DynamoDBLocal, see:
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html.
*-----------------------------------------------------------------------------------
*/
/*--------------------------------------------------------------------------
* createClient
*--------------------------------------------------------------------------*/
public static bool createClient( bool useDynamoDBLocal )
{
if( useDynamoDBLocal )
{
operationSucceeded = false;
operationFailed = false;
// First, check to see whether anyone is listening on the DynamoDB Local port
// (by default this is port 8000; if you are using a different port, modify this accordingly).
bool localFound = false;
try
{
using (var tcp_client = new TcpClient())
{
else
{
try { client = new AmazonDynamoDBClient( ); }
catch( Exception ex )
{
Console.WriteLine( " FAILED to create a DynamoDB client; " + ex.Message );
operationFailed = true;
}
}
operationSucceeded = true;
return true;
}
}
}
Main calls this function with the useDynamoDBLocal parameter set to true. Therefore, the local test
version of DynamoDB must already be running on your machine using the default port (8000), or the call
fails.
Setting the useDynamoDBLocal parameter to false creates a client for the DynamoDB service itself
rather than the local test program.
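A sketch of a typical call site (the exact code in 00_Main.cs may differ):
// Use DynamoDB Local for this tutorial; bail out if nothing is listening on port 8000.
if( !Ddb_Intro.createClient( useDynamoDBLocal: true ) )
    return;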
Next Step
Step 2: Create a Table Using the Low-Level API (p. 136)
In this step of the Microsoft .NET and DynamoDB Tutorial (p. 133), you create a table named Movies in
Amazon DynamoDB. The primary key for the table is composed of the following attributes:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.Model;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*--------------------------------------------------------------------------
* CreatingTable_async
*--------------------------------------------------------------------------*/
public static async Task CreatingTable_async( string new_table_name,
List<AttributeDefinition> table_attributes,
List<KeySchemaElement> table_key_schema,
ProvisionedThroughput provisionedThroughput )
{
Console.WriteLine( " -- Creating a new table named {0}...", new_table_name );
if( await checkingTableExistence_async( new_table_name ) )
{
Console.WriteLine( " -- No need to create a new table..." );
return;
}
if( operationFailed )
return;
operationSucceeded = false;
Task<bool> newTbl = CreateNewTable_async( new_table_name,
table_attributes,
table_key_schema,
provisionedThroughput );
await newTbl;
}
/*--------------------------------------------------------------------------
* checkingTableExistence_async
*--------------------------------------------------------------------------*/
static async Task<bool> checkingTableExistence_async( string tblNm )
{
DescribeTableResponse descResponse;
operationSucceeded = false;
operationFailed = false;
ListTablesResponse tblResponse = await Ddb_Intro.client.ListTablesAsync();
if (tblResponse.TableNames.Contains(tblNm))
{
Console.WriteLine(" A table named {0} already exists in DynamoDB!", tblNm);
/*--------------------------------------------------------------------------
* CreateNewTable_async
*--------------------------------------------------------------------------*/
public static async Task<bool> CreateNewTable_async( string table_name,
List<AttributeDefinition>
table_attributes,
List<KeySchemaElement>
table_key_schema,
ProvisionedThroughput
provisioned_throughput )
{
CreateTableRequest request;
CreateTableResponse response;
operationSucceeded = false;
operationFailed = false;
try
{
Task<CreateTableResponse> makeTbl = Ddb_Intro.client.CreateTableAsync( request );
The DynamoDB_intro sample uses asynchronous methods rather than synchronous methods wherever
possible. This is because .NET Core supports only asynchronous methods, and the asynchronous model
is generally preferable when performance is crucial. For more information, see AWS Asynchronous APIs
for .NET.
To learn more about managing tables, see Working with Tables in DynamoDB (p. 333).
Next Step
Step 3: Load Sample Data into the Movies Table (p. 139)
For each movie, moviedata.json defines a year name-value pair, a title name-value pair, and a
complex info object, as illustrated by the following example:
{
"year" : 2013,
"title" : "Turn It Down, Or Else!",
"info" : {
"directors" : [
"Alice Smith",
"Bob Jones"
],
"release_date" : "2013-01-18T00:00:00Z",
"rating" : 6.2,
"genres" : [
"Comedy",
"Drama"
],
"image_url" : "http://ia.media-imdb.com/images/N/
O9ERWAU7FS797AJ7LU8HN09AMUP908RLlo5JF90EWR7LJKQ7@@._V1_SX400_.jpg",
"plot" : "A rock band plays their music at high volumes, annoying the neighbors.",
"rank" : 11,
"running_time_secs" : 5215,
"actors" : [
"David Matthewman",
"Ann Thomas",
"Jonathan G. Neff"
]
}
}
Before loading the moviedata.json file, the Main function in DynamoDB_intro checks to
determine whether the Movies table exists and is still empty. If so, it waits on an asynchronous
LoadingData_async function that is implemented in the 03_LoadingData.cs file:
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.DocumentModel;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*--------------------------------------------------------------------------
* LoadingData_async
*--------------------------------------------------------------------------*/
public static async Task LoadingData_async( Table table, string filePath )
{
JArray movieArray;
/*--------------------------------------------------------------------------
* ReadJsonMovieFile_async
*--------------------------------------------------------------------------*/
public static async Task<JArray> ReadJsonMovieFile_async( string JsonMovieFilePath )
{
StreamReader sr = null;
/*--------------------------------------------------------------------------
* LoadJsonMovieData_async
*--------------------------------------------------------------------------*/
public static async Task LoadJsonMovieData_async( Table moviesTable, JArray
moviesArray )
{
operationSucceeded = false;
operationFailed = false;
int n = moviesArray.Count;
Console.Write( " -- Starting to load {0:#,##0} movie records into the Movies
table asynchronously...\n" + "" +
" Wrote: ", n );
for( int i = 0, j = 99; i < n; i++ )
{
try
{
string itemJson = moviesArray[i].ToString();
Document doc = Document.FromJson(itemJson);
Task putItem = moviesTable.PutItemAsync(doc);
if( i >= j )
{
j++;
Console.Write( "{0,5:#,##0}, ", j );
if( j % 1000 == 0 )
Console.Write( "\n " );
j += 99;
}
await putItem;
}
catch( Exception ex )
{
Console.WriteLine( "\n ERROR: Could not write the movie record #{0:#,##0},
because:\n {1}",
i, ex.Message );
operationFailed = true;
break;
}
}
if( !operationFailed )
{
operationSucceeded = true;
Console.WriteLine( "\n -- Finished writing all movie records to DynamoDB!" );
}
}
}
When the data has been read successfully, LoadingData_async waits on LoadJsonMovieData_async
to load the movie records into the Movies table using the DynamoDB document-model
Table.PutItemAsync API. For information about the document model API, see .NET: Document
Model (p. 273).
Next Step
Step 4: Add a New Movie to the Movies Table (p. 142)
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.DocumentModel;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*--------------------------------------------------------------------------
* WritingNewMovie
*--------------------------------------------------------------------------*/
WritingNewMovie_async begins by checking to determine whether the new movie has already been
added to the Movies table. If it has not, it waits for the DynamoDB Table.PutItemAsync method to
add the new movie record.
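The file's actual body is not reproduced here; a minimal sketch of the behavior just described might
look like the following (everything other than ReadingMovie_async, moviesTable, and
Table.PutItemAsync is assumed):
public static async Task WritingNewMovie_async( int year, string title )
{
    // Skip the write if the movie is already in the table.
    if( await ReadingMovie_async( year, title, false ) )
    {
        Console.WriteLine( "  The movie \"{0}\" is already in the Movies table.", title );
        return;
    }
    // Otherwise, build a Document and wait on the document-model put.
    var newItem = new Document( );
    newItem["year"]  = year;
    newItem["title"] = title;
    await moviesTable.PutItemAsync( newItem );
}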
Next Step
Step 5: Read and Display a Record from the Movies Table (p. 143)
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.DocumentModel;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*--------------------------------------------------------------------------
* ReadingMovie_async
*--------------------------------------------------------------------------*/
public static async Task<bool> ReadingMovie_async( int year, string title, bool
report )
{
// Create Primitives for the HASH and RANGE portions of the primary key
Primitive hash = new Primitive(year.ToString(), true);
Primitive range = new Primitive(title, false);
operationSucceeded = false;
operationFailed = false;
try
{
Task<Document> readMovie = moviesTable.GetItemAsync(hash, range, token);
if( report )
Console.WriteLine( " -- Reading the {0} movie \"{1}\" from the Movies table...",
year, title );
movie_record = await readMovie;
if( movie_record == null )
{
if( report )
Console.WriteLine( " -- Sorry, that movie isn't in the Movies table." );
return ( false );
}
else
{
if( report )
Console.WriteLine( " -- Found it! The movie record looks like this:\n" +
movie_record.ToJsonPretty( ) );
operationSucceeded = true;
return ( true );
}
}
catch( Exception ex )
{
Console.WriteLine( " FAILED to get the movie, because: {0}.", ex.Message );
operationFailed = true;
}
return ( false );
}
}
Next Step
Step 6: Update the New Movie Record (p. 145)
Topics
• Change Plot and Rating, and Add Actors (p. 145)
• Increment the Movie Rating Atomically (p. 147)
• Try to Update Using a Condition That Fails (p. 147)
• For More Information (p. 147)
• Next Step (p. 148)
Setting ReturnValues to NONE specifies that no update information should be returned. However,
when Main then waits on UpdatingMovie_async, it sets the report parameter to true. This causes
UpdatingMovie_async to change ReturnValues to ALL_NEW, meaning that the updated item should
be returned in its entirety.
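The request itself is built in Main and is not reproduced here; a sketch of an UpdateItemRequest with
ReturnValues set to NONE might look like the following (the key and attribute values are assumed, not
taken from the sample):
UpdateItemRequest updateRequest = new UpdateItemRequest
{
    TableName = "Movies",
    Key = new Dictionary<string, AttributeValue>
    {
        { "year",  new AttributeValue { N = "2018" } },
        { "title", new AttributeValue { S = "The Big New Movie" } }
    },
    UpdateExpression = "set info.plot = :plot, info.rating = :rating",
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>
    {
        { ":plot",   new AttributeValue { S = "Nothing happens at all." } },
        { ":rating", new AttributeValue { N = "0" } }
    },
    // UpdatingMovie_async switches this to ALL_NEW when report is true.
    ReturnValues = "NONE"
};
UpdatingMovie_async itself is implemented as follows: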
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Threading.Tasks;
using System.Collections.Generic;
using System.Text;
using Amazon.DynamoDBv2.Model;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*--------------------------------------------------------------------------
* UpdatingMovie_async
*--------------------------------------------------------------------------*/
public static async Task<bool> UpdatingMovie_async( UpdateItemRequest updateRequest,
bool report )
{
UpdateItemResponse updateResponse = null;
operationSucceeded = false;
operationFailed = false;
if( report )
{
Console.WriteLine( " -- Trying to update a movie item..." );
updateRequest.ReturnValues = "ALL_NEW";
}
try
{
updateResponse = await client.UpdateItemAsync( updateRequest );
Console.WriteLine( " -- SUCCEEDED in updating the movie item!" );
}
catch( Exception ex )
{
Console.WriteLine( " -- FAILED to update the movie item, because:\n
{0}.", ex.Message );
if( updateResponse != null )
Console.WriteLine( " -- The status code was " +
updateResponse.HttpStatusCode.ToString( ) );
operationFailed = true;
return ( false );
}
if( report )
{
Console.WriteLine( " Here is the updated movie informtion:" );
Console.WriteLine( movieAttributesToJson( updateResponse.Attributes ) );
}
operationSucceeded = true;
return ( true );
}
}
}
Whereas the document model has a handy Document.ToJsonPretty( ) method for displaying
document content, working with low-level attribute values is a little more complicated. The
00b_DDB_Attributes.cs file can provide some examples of how to access and work with
AttributeValue objects.
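For orientation, a small sketch of what low-level attribute access looks like (the attribute names
here are assumed):
// Pulling values out of a low-level attribute map.
Dictionary<string, AttributeValue> attrs = updateResponse.Attributes;
string title  = attrs["title"].S;              // a string attribute
string rating = attrs["info"].M["rating"].N;   // a nested numeric attribute (numbers arrive as strings)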
To increment the rating value in the movie that you just created, the Main function makes the following
changes in the UpdateItemRequest that it used in the previous update:
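(The sample's exact code is not reproduced here; the following is only a sketch, with the :inc value
name assumed.)
// Reuse the same key; replace the expression with an atomic increment.
updateRequest.UpdateExpression = "set info.rating = info.rating + :inc";
updateRequest.ExpressionAttributeValues = new Dictionary<string, AttributeValue>
{
    { ":inc", new AttributeValue { N = "1" } }
};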
To demonstrate this, the Main function makes the following changes to the UpdateItemRequest that
it just used to increment the movie rating:
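(Again, a sketch rather than the sample's exact code; the :num value name is assumed.)
// Remove the lead actor, but only if more than three actors are listed.
updateRequest.UpdateExpression = "remove info.actors[0]";
updateRequest.ConditionExpression = "size(info.actors) > :num";
updateRequest.ExpressionAttributeValues = new Dictionary<string, AttributeValue>
{
    { ":num", new AttributeValue { N = "3" } }
};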
The update can now occur only if there are more than three actors in the movie record being
updated. Because there are only three actors listed, the condition fails when Main waits on
UpdatingMovie_async, and the update does not occur.
Next Step
Step 7: Try to Conditionally Delete the Movie (p. 148)
To demonstrate a conditional delete, the Main function creates an Expression that allows the deletion
only if the movie's rating is 5.0 or less.
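A sketch of what such a document-model Expression might look like (the variable name is assumed):
// Deletion is allowed only when the movie's rating is 5.0 or less.
Expression condition = new Expression( );
condition.ExpressionStatement = "info.rating <= :val";
condition.ExpressionAttributeValues[":val"] = 5.0;
Main then passes the Expression as one of the parameters of DeletingItem_async and waits on it.
DeletingItem_async is implemented in the 07_DeletingItem.cs file: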
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.DocumentModel;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*--------------------------------------------------------------------------
* DeletingItem_async
*--------------------------------------------------------------------------*/
public static async Task<bool> DeletingItem_async( Table table, int year, string title,
Expression condition=null )
{
Document deletedItem = null;
operationSucceeded = false;
operationFailed = false;
// Create Primitives for the HASH and RANGE portions of the primary key
Primitive hash = new Primitive(year.ToString(), true);
Primitive range = new Primitive(title, false);
DeleteItemOperationConfig deleteConfig = new DeleteItemOperationConfig( );
deleteConfig.ConditionalExpression = condition;
deleteConfig.ReturnValues = ReturnValues.AllOldAttributes;
Console.WriteLine( " -- Trying to delete the {0} movie \"{1}\"...", year, title );
try
{
Task<Document> delItem = table.DeleteItemAsync( hash, range, deleteConfig );
deletedItem = await delItem;
}
catch( Exception ex )
{
Console.WriteLine( " FAILED to delete the movie item, for this reason:\n
{0}\n", ex.Message );
operationFailed = true;
return ( false );
}
Console.WriteLine( " -- SUCCEEDED in deleting the movie record that looks like
this:\n" +
deletedItem.ToJsonPretty( ) );
operationSucceeded = true;
return ( true );
}
}
Because the movie's rating is 6.5, which is higher than 5.0, the condition is not met, and the deletion
fails.
Then, when the Main function changes the rating threshold in the condition to 7.0 instead of 5.0, the
deletion succeeds.
Next Step
Step 8: Query the Movies Table (p. 149)
Topics
• Use a Simple Document Model Search to Query for 1985 Movies (p. 149)
• Use a QueryOperationConfig to Create a More Complex Query Search (p. 151)
• Use a Low-Level Query to Find 1992 Movies With Titles Between 'M...' and 'Tzz...' (p. 152)
• Next Step (p. 152)
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Text;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.Model;
using Amazon.DynamoDBv2.DocumentModel;
using System.Collections.Generic;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*--------------------------------------------------------------------------
* SearchListing_async
*--------------------------------------------------------------------------*/
public static async Task<bool> SearchListing_async( Search search )
{
int i = 0;
List<Document> docList = new List<Document>( );
do
{
try
{
getNextBatch = search.GetNextSetAsync( );
docList = await getNextBatch;
}
catch( Exception ex )
{
Console.WriteLine( " FAILED to get the next batch of movies from Search!
Reason:\n " +
ex.Message );
operationFailed = true;
return ( false );
}
/*--------------------------------------------------------------------------
* ClientQuerying_async
*--------------------------------------------------------------------------*/
public static async Task<bool> ClientQuerying_async( QueryRequest qRequest )
{
operationSucceeded = false;
operationFailed = false;
QueryResponse qResponse;
try
{
Task<QueryResponse> clientQueryTask = client.QueryAsync( qRequest );
qResponse = await clientQueryTask;
}
catch( Exception ex )
{
Console.WriteLine( " The low-level query FAILED, because:\n {0}.",
ex.Message );
operationFailed = true;
return ( false );
}
Console.WriteLine( " -- The low-level query succeeded, and returned {0} movies!",
qResponse.Items.Count );
if( !pause( ) )
{
operationFailed = true;
return ( false );
}
Console.WriteLine( " Here are the movies retrieved:" +
"
--------------------------------------------------------------------------" );
foreach( Dictionary<string, AttributeValue> item in qResponse.Items )
showMovieAttrsShort( item );
Once again, it then creates a Search object by calling the Table.Query API, this time with the
QueryOperationConfig object as the only parameter.
And again, it waits on SearchListing_async to retrieve and display the query results.
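A sketch of such a QueryOperationConfig for the 1992 query named in the topic list above (the title
range and the projected attribute list are assumed for illustration):
// Query for 1992 movies with titles between "M..." and "Tzz...".
QueryFilter filter = new QueryFilter( "year", QueryOperator.Equal, 1992 );
filter.AddCondition( "title", QueryOperator.Between, "M", "Tzz" );
QueryOperationConfig config = new QueryOperationConfig
{
    Filter = filter,
    AttributesToGet = new List<string> { "year", "title", "info" },
    Select = SelectValues.SpecificAttributes
};
Search search = moviesTable.Query( config );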
Next Step
Step 9: Scan the Movies Table (p. 152)
Topics
• Use a Document Model Search to Scan for 1950s Movies (p. 152)
• Use a Low-Level Scan to Retrieve 1960s Movies (p. 153)
• Next Step (p. 153)
To obtain a Search object for the scan, it passes the ScanOperationConfig object to Table.Scan.
Using the Search object, it then waits on SearchListing_async (implemented in 08_Querying.cs)
to retrieve and display the scan results.
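A sketch of such a ScanOperationConfig for the 1950s scan (the projected attribute list is assumed):
// Scan for movies released between 1950 and 1959.
ScanFilter filter = new ScanFilter( );
filter.AddCondition( "year", ScanOperator.Between, 1950, 1959 );
ScanOperationConfig config = new ScanOperationConfig
{
    Filter = filter,
    AttributesToGet = new List<string> { "year", "title", "info" },
    Select = SelectValues.SpecificAttributes
};
Search search = moviesTable.Scan( config );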
Next Step
Step 10: Delete the Movies Table (p. 153)
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2.Model;
namespace DynamoDB_intro
{
public static partial class Ddb_Intro
{
/*--------------------------------------------------------------------------
* DeletingTable_async
*--------------------------------------------------------------------------*/
public static async Task<bool> DeletingTable_async( string tableName )
{
operationSucceeded = false;
operationFailed = false;
In this tutorial, you use the AWS SDK for PHP to write simple programs to perform the following
Amazon DynamoDB operations:
• Create a table called Movies and load sample data in JSON format.
• Perform create, read, update, and delete operations on the table.
• Run simple queries.
As you work through this tutorial, you can refer to the AWS SDK for PHP Developer Guide. The Amazon
DynamoDB section in the AWS SDK for PHP API Reference describes the parameters and results for
DynamoDB operations.
Tutorial Prerequisites
• Download and run DynamoDB on your computer. For more information, see Setting Up DynamoDB
Local (Downloadable Version) (p. 43).
Note
You use the downloadable version of DynamoDB in this tutorial. For information about how to
run the same code against the DynamoDB service, see the Summary (p. 175).
• Set up an AWS access key to use the AWS SDKs. For more information, see Setting Up DynamoDB (Web
Service) (p. 48).
• Set up the AWS SDK for PHP:
• Go to http://php.net and install PHP.
• Go to https://aws.amazon.com/sdk-for-php and install the SDK for PHP.
For more information, see Getting Started in the AWS SDK for PHP Getting Started Guide.
1. Copy and paste the following program into a file named MoviesCreateTable.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$params = [
'TableName' => 'Movies',
'KeySchema' => [
[
'AttributeName' => 'year',
'KeyType' => 'HASH' //Partition key
],
[
'AttributeName' => 'title',
'KeyType' => 'RANGE' //Sort key
]
],
'AttributeDefinitions' => [
[
'AttributeName' => 'year',
'AttributeType' => 'N'
],
[
'AttributeName' => 'title',
'AttributeType' => 'S'
],
],
'ProvisionedThroughput' => [
'ReadCapacityUnits' => 10,
'WriteCapacityUnits' => 10
]
];
try {
$result = $dynamodb->createTable($params);
echo 'Created table. Status: ' .
$result['TableDescription']['TableStatus'] ."\n";
} catch (DynamoDbException $e) {
    echo "Unable to create table:\n";
    echo $e->getMessage() . "\n";
}
?>
Note
• You set the endpoint to indicate that you are creating the table in DynamoDB on your
computer.
• In the createTable call, you specify table name, primary key attributes, and its data
types.
• The ProvisionedThroughput parameter is required, but the downloadable version of
DynamoDB ignores it. (Provisioned throughput is beyond the scope of this exercise.)
2. To run the program, type the following command:
php MoviesCreateTable.php
To learn more about managing tables, see Working with Tables in DynamoDB (p. 333).
Topics
• Step 2.1: Download the Sample Data File (p. 158)
• Step 2.2: Load the Sample Data into the Movies Table (p. 158)
This scenario uses a sample data file that contains information about a few thousand movies from the
Internet Movie Database (IMDb). The movie data is in JSON format, as shown in the following example.
For each movie, there is a year, a title, and a JSON map named info.
[
{
"year" : ... ,
"title" : ... ,
"info" : { ... }
},
{
"year" : ...,
"title" : ...,
"info" : { ... }
},
...
]
• The year and title are used as the primary key attribute values for the Movies table.
• The rest of the info values are stored in a single attribute called info. This program illustrates how
you can store JSON in a DynamoDB attribute.
{
"year" : 2013,
"title" : "Turn It Down, Or Else!",
"info" : {
"directors" : [
"Alice Smith",
"Bob Jones"
],
"release_date" : "2013-01-18T00:00:00Z",
"rating" : 6.2,
"genres" : [
"Comedy",
"Drama"
],
"image_url" : "http://ia.media-imdb.com/images/N/
O9ERWAU7FS797AJ7LU8HN09AMUP908RLlo5JF90EWR7LJKQ7@@._V1_SX400_.jpg",
"plot" : "A rock band plays their music at high volumes, annoying the neighbors.",
"rank" : 11,
"running_time_secs" : 5215,
"actors" : [
"David Matthewman",
"Ann Thomas",
"Jonathan G. Neff"
]
}
}
Step 2.2: Load the Sample Data into the Movies Table
After you download the sample data, you can run the following program to populate the Movies table.
1. Copy and paste the following program into a file named MoviesLoadData.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$movies = json_decode(file_get_contents('moviedata.json'), true);
foreach ($movies as $movie) {
    $year = $movie['year'];
$title = $movie['title'];
$info = $movie['info'];
$json = json_encode([
'year' => $year,
'title' => $title,
'info' => $info
]);
$params = [
'TableName' => $tableName,
'Item' => $marshaler->marshalJson($json)
];
try {
$result = $dynamodb->putItem($params);
echo "Added movie: " . $movie['year'] . " " . $movie['title'] . "\n";
} catch (DynamoDbException $e) {
echo "Unable to add movie:\n";
echo $e->getMessage() . "\n";
break;
}
}
?>
Note
The DynamoDB Marshaler class has methods for converting JSON documents and PHP
arrays to the DynamoDB format. In this program, $marshaler->marshalJson($json)
takes a JSON document and converts it into a DynamoDB item.
2. To run the program, type the following command:
php MoviesLoadData.php
To learn more about reading and writing data, see Working with Items in DynamoDB (p. 372).
Topics
• Step 3.1: Create a New Item (p. 159)
• Step 3.2: Read an Item (p. 161)
• Step 3.3: Update an Item (p. 162)
• Step 3.4: Increment an Atomic Counter (p. 164)
• Step 3.5: Update an Item (Conditionally) (p. 165)
• Step 3.6: Delete an Item (p. 167)
1. Copy and paste the following program into a file named MoviesItemOps01.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$year = 2015;
$title = 'The Big New Movie';
$item = $marshaler->marshalJson('
{
"year": ' . $year . ',
"title": "' . $title . '",
"info": {
"plot": "Nothing happens at all.",
"rating": 0
}
}
');
$params = [
'TableName' => 'Movies',
'Item' => $item
];
try {
$result = $dynamodb->putItem($params);
echo "Added item: $year - $title\n";
?>
Note
The primary key is required. This code adds an item that has primary key (year, title) and
info attributes. The info attribute stores a map that provides more information about the
movie.
2. To run the program, type the following command:
php MoviesItemOps01.php
In the previous step, you added the following item to the table:
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
You can use the getItem method to read the item from the Movies table. You must specify the primary
key values, so you can read any item from Movies if you know its year and title.
1. Copy and paste the following program into a file named MoviesItemOps02.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$year = 2015;
$title = 'The Big New Movie';
$key = $marshaler->marshalJson('
{
"year": ' . $year . ',
"title": "' . $title . '"
}
');
$params = [
'TableName' => $tableName,
'Key' => $key
];
try {
$result = $dynamodb->getItem($params);
print_r($result["Item"]);
?>
2. To run the program, type the following command:
php MoviesItemOps02.php
In this step, you use the updateItem method to modify the item that you just created. The item
changes from the following:
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
To the following:
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Everything happens all at once.",
rating: 5.5,
actors: ["Larry", "Moe", "Curly"]
}
}
1. Copy and paste the following program into a file named MoviesItemOps03.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$year = 2015;
$title = 'The Big New Movie';
$key = $marshaler->marshalJson('
{
"year": ' . $year . ',
"title": "' . $title . '"
}
');
$eav = $marshaler->marshalJson('
{
":r": 5.5 ,
":p": "Everything happens all at once.",
":a": [ "Larry", "Moe", "Curly" ]
}
');
$params = [
'TableName' => $tableName,
'Key' => $key,
'UpdateExpression' =>
'set info.rating = :r, info.plot=:p, info.actors=:a',
'ExpressionAttributeValues'=> $eav,
'ReturnValues' => 'UPDATED_NEW'
];
try {
$result = $dynamodb->updateItem($params);
echo "Updated item.\n";
print_r($result['Attributes']);
} catch (DynamoDbException $e) {
    echo "Unable to update item:\n";
    echo $e->getMessage() . "\n";
}
?>
Note
This program uses UpdateExpression to describe all updates you want to perform on the
specified item.
The ReturnValues parameter instructs DynamoDB to return only the updated attributes
(UPDATED_NEW).
2. To run the program, type the following command:
php MoviesItemOps03.php
The following program shows how to increment the rating for a movie. Each time you run it, the
program increments this attribute by one.
1. Copy and paste the following program into a file named MoviesItemOps04.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$year = 2015;
$title = 'The Big New Movie';
$key = $marshaler->marshalJson('
{
"year": ' . $year . ',
"title": "' . $title . '"
}
');
$eav = $marshaler->marshalJson('
{
":val": 1
}
');
$params = [
'TableName' => $tableName,
'Key' => $key,
'UpdateExpression' => 'set info.rating = info.rating + :val',
'ExpressionAttributeValues'=> $eav,
'ReturnValues' => 'UPDATED_NEW'
];
try {
$result = $dynamodb->updateItem($params);
echo "Updated item. ReturnValues are:\n";
print_r($result['Attributes']);
} catch (DynamoDbException $e) {
    echo "Unable to update item:\n";
    echo $e->getMessage() . "\n";
}
?>
2. To run the program, type the following command:
php MoviesItemOps04.php
In this case, the item is updated only if there are more than three actors in the movie.
1. Copy and paste the following program into a file named MoviesItemOps05.php.
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$year = 2015;
$title = 'The Big New Movie';
$key = $marshaler->marshalJson('
{
"year": ' . $year . ',
"title": "' . $title . '"
}
');
$eav = $marshaler->marshalJson('
{
":num": 3
}
');
$params = [
'TableName' => $tableName,
'Key' => $key,
'UpdateExpression' => 'remove info.actors[0]',
'ConditionExpression' => 'size(info.actors) > :num',
'ExpressionAttributeValues'=> $eav,
'ReturnValues' => 'UPDATED_NEW'
];
try {
$result = $dynamodb->updateItem($params);
echo "Updated item. ReturnValues are:\n";
print_r($result['Attributes']);
} catch (DynamoDbException $e) {
    echo "Unable to update item:\n";
    echo $e->getMessage() . "\n";
}
?>
2. To run the program, type the following command:
php MoviesItemOps05.php
The program should fail with the following message: The conditional request failed.
This is because the movie has three actors in it, but the condition is checking for greater than three
actors.
3. Modify the program so that the ConditionExpression looks like this:
'ConditionExpression' => 'size(info.actors) >= :num',
The condition is now greater than or equal to 3 instead of greater than 3.
4. Run the program again. Now, the update succeeds.
In the following example, you try to delete a specific movie item if its rating is 5 or less.
1. Copy and paste the following program into a file named MoviesItemOps06.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$year = 2015;
$title = 'The Big New Movie';
$key = $marshaler->marshalJson('
{
"year": ' . $year . ',
"title": "' . $title . '"
}
');
$eav = $marshaler->marshalJson('
{
":val": 5
}
');
$params = [
'TableName' => $tableName,
'Key' => $key,
'ConditionExpression' => 'info.rating <= :val',
'ExpressionAttributeValues'=> $eav
];
try {
$result = $dynamodb->deleteItem($params);
echo "Deleted item.\n";
?>
2. To run the program, type the following command:
php MoviesItemOps06.php
The program should fail with the following message: The conditional request failed.
This is because the rating for this particular movie is greater than 5.
3. Modify the program to remove the condition:
$params = [
'TableName' => $tableName,
'Key' => $key
];
4. Run the program. Now, the delete succeeds because you removed the condition.
The primary key for the Movies table is composed of the following:
• year – The partition key. The attribute type is number.
• title – The sort key. The attribute type is string.
To find all movies released during a year, you need to specify only the year. You can also provide the
title to retrieve a subset of movies based on some condition (on the sort key); for example, to find
movies released in 2014 that have a title starting with the letter "A".
In addition to query, there is also a scan method that can retrieve all of the table data.
To learn more about querying and scanning data, see Working with Queries (p. 455) and Working with
Scans (p. 473), respectively.
Topics
• Step 4.1: Query - All Movies Released in a Year (p. 169)
• Step 4.2: Query - All Movies Released in a Year with Certain Titles (p. 171)
• Step 4.3: Scan (p. 172)
1. Copy and paste the following program into a file named MoviesQuery01.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$eav = $marshaler->marshalJson('
{
":yyyy": 1985
}
');
$params = [
'TableName' => $tableName,
'KeyConditionExpression' => '#yr = :yyyy',
'ExpressionAttributeNames'=> [ '#yr' => 'year' ],
'ExpressionAttributeValues'=> $eav
];
try {
$result = $dynamodb->query($params);
echo "Query succeeded.\n";
foreach ($result['Items'] as $movie) {
    echo $marshaler->unmarshalValue($movie['year']) . ': ' .
        $marshaler->unmarshalValue($movie['title']) . "\n";
}
} catch (DynamoDbException $e) {
    echo "Unable to query:\n";
    echo $e->getMessage() . "\n";
}
?>
Note
ExpressionAttributeNames provides name substitution. We use this because year
is a reserved word in DynamoDB. You cannot use it directly in any expression, including
KeyConditionExpression. We use the expression attribute name #yr to address this.
ExpressionAttributeValues provides value substitution. We use this because you
cannot use literals in any expression, including KeyConditionExpression. We use the
expression attribute value :yyyy to address this.
2. To run the program, type the following command:
php MoviesQuery01.php
Note
The preceding program shows how to query a table by its primary key attributes. In DynamoDB,
you can optionally create one or more secondary indexes on a table, and query those indexes
in the same way that you query a table. Secondary indexes give your applications additional
flexibility by allowing queries on non-key attributes. For more information, see Improving Data
Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesQuery02.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$tableName = 'Movies';
$eav = $marshaler->marshalJson('
{
":yyyy":1992,
":letter1": "A",
":letter2": "L"
}
');
$params = [
'TableName' => $tableName,
'ProjectionExpression' => '#yr, title, info.genres, info.actors[0]',
'KeyConditionExpression' =>
'#yr = :yyyy and title between :letter1 and :letter2',
'ExpressionAttributeNames'=> [ '#yr' => 'year' ],
'ExpressionAttributeValues'=> $eav
];
echo "Querying for movies from 1992 - titles A-L, with genres and lead actor\n";
try {
$result = $dynamodb->query($params);
echo "Query succeeded.\n";
foreach ($result['Items'] as $movie) {
    echo $marshaler->unmarshalValue($movie['year']) . ': ' .
        $marshaler->unmarshalValue($movie['title']) . "\n";
}
} catch (DynamoDbException $e) {
    echo "Unable to query:\n";
    echo $e->getMessage() . "\n";
}
?>
2. To run the program, type the following command:
php MoviesQuery02.php
The following program scans the entire Movies table, which contains approximately 5,000 items. The
scan specifies the optional filter to retrieve only the movies from the 1950s (approximately 100 items),
and discard all of the others.
1. Copy and paste the following program into a file named MoviesScan.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
use Aws\DynamoDb\Marshaler;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$marshaler = new Marshaler();
$eav = $marshaler->marshalJson('
{
    ":start_yr": 1950,
    ":end_yr": 1959
}
');
$params = [
'TableName' => 'Movies',
'ProjectionExpression' => '#yr, title, info.rating',
'FilterExpression' => '#yr between :start_yr and :end_yr',
'ExpressionAttributeNames'=> [ '#yr' => 'year' ],
'ExpressionAttributeValues'=> $eav
];
try {
while (true) {
$result = $dynamodb->scan($params);
foreach ($result['Items'] as $i) {
    $movie = $marshaler->unmarshalItem($i);
    print $movie['year'] . ': ' . $movie['title'] . "\n";
}
if (isset($result['LastEvaluatedKey'])) {
$params['ExclusiveStartKey'] = $result['LastEvaluatedKey'];
} else {
break;
}
}
} catch (DynamoDbException $e) {
    echo "Unable to scan:\n";
    echo $e->getMessage() . "\n";
}
?>
2. To run the program, type the following command:
php MoviesScan.php
Note
You can also use the Scan operation with any secondary indexes that you have created on the
table. For more information, see Improving Data Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesDeleteTable.php:
<?php
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
require 'vendor/autoload.php';
date_default_timezone_set('UTC');
use Aws\DynamoDb\Exception\DynamoDbException;
$sdk = new Aws\Sdk([
    'endpoint'   => 'http://localhost:8000',
    'region'   => 'us-west-2',
    'version'  => 'latest'
]);
$dynamodb = $sdk->createDynamoDb();
$params = [
'TableName' => 'Movies'
];
try {
$result = $dynamodb->deleteTable($params);
echo "Deleted table.\n";
?>
php MoviesDeleteTable.php
Summary
In this tutorial, you created the Movies table in DynamoDB on your computer and performed basic
operations. The downloadable version of DynamoDB is useful during application development and
testing. However, when you're ready to run your application in a production environment, you need to
modify your code so that it uses the Amazon DynamoDB web service.
Remove the endpoint parameter so that the code looks like this:
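$sdk = new Aws\Sdk([
    'region'  => 'us-west-2',
    'version' => 'latest'
]);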
After you remove this line, the code can access the DynamoDB service in the Region specified by the
region config value. For example, the following line specifies that you want to use the US West
(Oregon) Region:
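'region' => 'us-west-2'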
Instead of using the downloadable version of DynamoDB on your computer, the program now uses the
DynamoDB service endpoint in US West (Oregon).
DynamoDB is available in several Regions worldwide. For the complete list, see Regions and Endpoints in
the AWS General Reference. For more information about setting Regions and endpoints in your code, see
the AWS SDK for PHP documentation.
• Create a table called Movies and load sample data in JSON format.
• Perform create, read, update, and delete operations on the table.
• Run simple queries.
As you work through this tutorial, you can refer to the AWS SDK for Python (Boto) documentation. The
following sections are specific to DynamoDB:
• DynamoDB tutorial
• DynamoDB low-level client
Tutorial Prerequisites
• Download and run DynamoDB on your computer. For more information, see Setting Up DynamoDB
Local (Downloadable Version) (p. 43).
Note
You use the downloadable version of DynamoDB in this tutorial. In the Summary (p. 192),
we explain how to run the same code against the DynamoDB web service.
• Set up an AWS access key to use the AWS SDKs. For more information, see Setting Up DynamoDB (Web
Service) (p. 48).
• Install Python 2.6 or later. For more information, see https://www.python.org/downloads.
• Set up the AWS SDK for Python (Boto 3). For instructions, see Quickstart in the Boto 3 documentation.
1. Copy and paste the following program into a file named MoviesCreateTable.py.
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.create_table(
TableName='Movies',
KeySchema=[
{
'AttributeName': 'year',
'KeyType': 'HASH' #Partition key
},
{
'AttributeName': 'title',
'KeyType': 'RANGE' #Sort key
}
],
AttributeDefinitions=[
{
'AttributeName': 'year',
'AttributeType': 'N'
},
{
'AttributeName': 'title',
'AttributeType': 'S'
},
],
ProvisionedThroughput={
'ReadCapacityUnits': 10,
'WriteCapacityUnits': 10
}
)

print("Table status:", table.table_status)
Note
• You set the endpoint to indicate that you are creating the table in the downloadable
version of DynamoDB on your computer.
• In the create_table call, you specify table name, primary key attributes, and its data
types.
• The ProvisionedThroughput parameter is required. However, the downloadable
version of DynamoDB ignores it. (Provisioned throughput is beyond the scope of this
exercise.)
• These examples use the Python 3 style print function. The line from __future__
import print_function enables Python 3 printing in Python 2.6 and later.
2. To run the program, type the following command:
python MoviesCreateTable.py
To learn more about managing tables, see Working with Tables in DynamoDB (p. 333).
Topics
• Step 2.1: Download the Sample Data File (p. 178)
• Step 2.2: Load the Sample Data into the Movies Table (p. 178)
This scenario uses a sample data file that contains information about a few thousand movies from the
Internet Movie Database (IMDb). The movie data is in JSON format, as shown in the following example.
For each movie, there is a year, a title, and a JSON map named info.
[
{
"year" : ... ,
"title" : ... ,
"info" : { ... }
},
{
"year" : ...,
"title" : ...,
"info" : { ... }
},
...
• The year and title are used as the primary key attribute values for the Movies table.
• The rest of the info values are stored in a single attribute called info. This program illustrates how
you can store JSON in a DynamoDB attribute.
{
"year" : 2013,
"title" : "Turn It Down, Or Else!",
"info" : {
"directors" : [
"Alice Smith",
"Bob Jones"
],
"release_date" : "2013-01-18T00:00:00Z",
"rating" : 6.2,
"genres" : [
"Comedy",
"Drama"
],
"image_url" : "http://ia.media-imdb.com/images/N/
O9ERWAU7FS797AJ7LU8HN09AMUP908RLlo5JF90EWR7LJKQ7@@._V1_SX400_.jpg",
"plot" : "A rock band plays their music at high volumes, annoying the neighbors.",
"rank" : 11,
"running_time_secs" : 5215,
"actors" : [
"David Matthewman",
"Ann Thomas",
"Jonathan G. Neff"
]
}
}
Step 2.2: Load the Sample Data into the Movies Table
After you download the sample data, you can run the following program to populate the Movies table.
1. Copy and paste the following program into a file named MoviesLoadData.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')

with open("moviedata.json") as json_file:
    movies = json.load(json_file, parse_float=decimal.Decimal)
    for movie in movies:
        year = int(movie['year'])
        title = movie['title']
        info = movie['info']

        print("Adding movie:", year, title)

        table.put_item(
            Item={
                'year': year,
                'title': title,
                'info': info,
            }
        )
python MoviesLoadData.py
To learn more about reading and writing data, see Working with Items in DynamoDB (p. 372).
Topics
• Step 3.1: Create a New Item (p. 179)
• Step 3.2: Read an Item (p. 181)
• Step 3.3: Update an Item (p. 182)
• Step 3.4: Increment an Atomic Counter (p. 183)
• Step 3.5: Update an Item (Conditionally) (p. 184)
• Step 3.6: Delete an Item (p. 186)
1. Copy and paste the following program into a file named MoviesItemOps01.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 else int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')

title = "The Big New Movie"
year = 2015
response = table.put_item(
Item={
'year': year,
'title': title,
'info': {
'plot':"Nothing happens at all.",
'rating': decimal.Decimal(0)
}
}
)
print("PutItem succeeded:")
print(json.dumps(response, indent=4, cls=DecimalEncoder))
Note
• The primary key is required. This code adds an item that has primary key (year, title)
and info attributes. The info attribute stores sample JSON that provides more
information about the movie.
• The DecimalEncoder class is used to print out numbers stored using the Decimal class.
The Boto SDK uses the Decimal class to hold DynamoDB number values.
2. To run the program, type the following command:
python MoviesItemOps01.py
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
You can use the get_item method to read the item from the Movies table. You must specify the
primary key values, so you can read any item from Movies if you know its year and title.
1. Copy and paste the following program into a file named MoviesItemOps02.py.
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal
from boto3.dynamodb.conditions import Key, Attr
from botocore.exceptions import ClientError

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 else int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')

year = 2015
title = "The Big New Movie"
try:
response = table.get_item(
Key={
'year': year,
'title': title
}
)
except ClientError as e:
print(e.response['Error']['Message'])
else:
item = response['Item']
print("GetItem succeeded:")
print(json.dumps(item, indent=4, cls=DecimalEncoder))
python MoviesItemOps02.py
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
In this step, you update the item shown above so that it looks like the following:
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Everything happens all at once.",
rating: 5.5,
actors: ["Larry", "Moe", "Curly"]
}
}
1. Copy and paste the following program into a file named MoviesItemOps03.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 else int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')

year = 2015
title = "The Big New Movie"
response = table.update_item(
Key={
'year': year,
'title': title
},
UpdateExpression="set info.rating = :r, info.plot=:p, info.actors=:a",
ExpressionAttributeValues={
':r': decimal.Decimal(5.5),
':p': "Everything happens all at once.",
':a': ["Larry", "Moe", "Curly"]
},
ReturnValues="UPDATED_NEW"
)
print("UpdateItem succeeded:")
print(json.dumps(response, indent=4, cls=DecimalEncoder))
Note
This program uses UpdateExpression to describe all updates you want to perform on the
specified item.
The ReturnValues parameter instructs DynamoDB to return only the updated attributes
(UPDATED_NEW).
2. To run the program, type the following command:
python MoviesItemOps03.py
The following program shows how to increment the rating for a movie. Each time you run it, the
program increments this attribute by one.
1. Copy and paste the following program into a file named MoviesItemOps04.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 else int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')

year = 2015
title = "The Big New Movie"
response = table.update_item(
Key={
'year': year,
'title': title
},
UpdateExpression="set info.rating = info.rating + :val",
ExpressionAttributeValues={
':val': decimal.Decimal(1)
},
ReturnValues="UPDATED_NEW"
)
print("UpdateItem succeeded:")
print(json.dumps(response, indent=4, cls=DecimalEncoder))
python MoviesItemOps04.py
The following program shows how to perform a conditional update. If the condition evaluates to true,
the update succeeds; otherwise, the update is not performed. In this case, the item is updated only if
there are more than three actors.
1. Copy and paste the following program into a file named MoviesItemOps05.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
from botocore.exceptions import ClientError
import json
import decimal

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 else int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')

year = 2015
title = "The Big New Movie"
try:
response = table.update_item(
Key={
'year': year,
'title': title
},
UpdateExpression="remove info.actors[0]",
ConditionExpression="size(info.actors) > :num",
ExpressionAttributeValues={
':num': 3
},
ReturnValues="UPDATED_NEW"
)
except ClientError as e:
if e.response['Error']['Code'] == "ConditionalCheckFailedException":
print(e.response['Error']['Message'])
else:
raise
else:
print("UpdateItem succeeded:")
python MoviesItemOps05.py
This is because the movie has three actors in it, but the condition is checking for greater than three
actors.
3. Modify the program so that the ConditionExpression looks like this:
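ConditionExpression="size(info.actors) >= :num",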
In the following example, you try to delete a specific movie item if its rating is 5 or less.
1. Copy and paste the following program into a file named MoviesItemOps06.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
from botocore.exceptions import ClientError
import json
import decimal

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 else int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')

year = 2015
title = "The Big New Movie"
try:
response = table.delete_item(
Key={
'year': year,
'title': title
},
ConditionExpression="info.rating <= :val",
ExpressionAttributeValues= {
":val": decimal.Decimal(5)
}
)
except ClientError as e:
if e.response['Error']['Code'] == "ConditionalCheckFailedException":
print(e.response['Error']['Message'])
else:
raise
else:
print("DeleteItem succeeded:")
print(json.dumps(response, indent=4, cls=DecimalEncoder))
python MoviesItemOps06.py
This is because the rating for this particular movie is greater than 5.
3. Modify the program to remove the condition in table.delete_item:
response = table.delete_item(
Key={
'year': year,
'title': title
}
)
4. Run the program. Now, the delete succeeds because you removed the condition.
The primary key for the Movies table is composed of the following:
• year – The partition key. The attribute data type is number.
• title – The sort key. The attribute data type is string.
To find all movies released during a year, you need to specify only the year. You can also provide the
title to retrieve a subset of movies based on some condition (on the sort key); for example, to find
movies released in 2014 that have a title starting with the letter "A".
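For example, the following sketch (using the Key condition helper from Boto 3, against this tutorial's
Movies table) retrieves the movies released in 2014 whose titles start with "A":
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
table = dynamodb.Table('Movies')

# Partition key condition plus an optional sort key condition.
response = table.query(
    KeyConditionExpression=Key('year').eq(2014) & Key('title').begins_with('A')
)
for movie in response['Items']:
    print(movie['year'], ":", movie['title'])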
In addition to query, there is also a scan method that can retrieve all the table data.
To learn more about querying and scanning data, see Working with Queries (p. 455) and Working with
Scans (p. 473), respectively.
Topics
• Step 4.1: Query - All Movies Released in a Year (p. 188)
• Step 4.2: Query - All Movies Released in a Year with Certain Titles (p. 189)
• Step 4.3: Scan (p. 190)
1. Copy and paste the following program into a file named MoviesQuery01.py.
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal
from boto3.dynamodb.conditions import Key, Attr

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')

print("Movies from 1985")
response = table.query(
KeyConditionExpression=Key('year').eq(1985)
)
for i in response['Items']:
print(i['year'], ":", i['title'])
Note
The Boto 3 SDK constructs a ConditionExpression for you when you use the Key and
Attr functions imported from boto3.dynamodb.conditions. You can also specify a
ConditionExpression as a string.
For a list of available conditions for DynamoDB, see DynamoDB Conditions in the AWS SDK
for Python (Boto 3) Getting Started guide.
For more information, see Condition Expressions (p. 390).
2. To run the program, type the following command:
python MoviesQuery01.py
Note
The preceding program shows how to query a table by its primary key attributes. In DynamoDB,
you can optionally create one or more secondary indexes on a table, and query those indexes
in the same way that you query a table. Secondary indexes give your applications additional
flexibility by allowing queries on non-key attributes. For more information, see Improving Data
Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesQuery02.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal
from boto3.dynamodb.conditions import Key, Attr

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 else int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')
print("Movies from 1992 - titles A-L, with genres and lead actor")
response = table.query(
ProjectionExpression="#yr, title, info.genres, info.actors[0]",
ExpressionAttributeNames={ "#yr": "year" }, # Expression Attribute Names for Projection Expression only.
KeyConditionExpression=Key('year').eq(1992) & Key('title').between('A', 'L')
)
for i in response[u'Items']:
print(json.dumps(i, cls=DecimalEncoder))
python MoviesQuery02.py
The following program scans the entire Movies table, which contains approximately 5,000 items. The
scan specifies the optional filter to retrieve only the movies from the 1950s (approximately 100 items),
and discard all the others.
1. Copy and paste the following program into a file named MoviesScan.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3
import json
import decimal
from boto3.dynamodb.conditions import Key, Attr

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o) if o % 1 else int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')
fe = Key('year').between(1950, 1959)
pe = "#yr, title, info.rating"
# Expression Attribute Names for Projection Expression only.
ean = { "#yr": "year", }
esk = None
response = table.scan(
FilterExpression=fe,
ProjectionExpression=pe,
ExpressionAttributeNames=ean
)
for i in response['Items']:
    print(json.dumps(i, cls=DecimalEncoder))

while 'LastEvaluatedKey' in response:
    response = table.scan(
        ProjectionExpression=pe,
        FilterExpression=fe,
        ExpressionAttributeNames=ean,
        ExclusiveStartKey=response['LastEvaluatedKey']
    )
    for i in response['Items']:
        print(json.dumps(i, cls=DecimalEncoder))
2. To run the program, type the following command:
python MoviesScan.py
Note
You can also use the Scan operation with any secondary indexes that you create on the table.
For more information, see Improving Data Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesDeleteTable.py:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
from __future__ import print_function # Python 2/3 compatibility
import boto3

dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")

table = dynamodb.Table('Movies')
table.delete()
python MoviesDeleteTable.py
Summary
In this tutorial, you created the Movies table in the downloadable version of DynamoDB on your
computer and performed basic operations. The downloadable version of DynamoDB is useful during
application development and testing. However, when you're ready to run your application in a
production environment, you must modify your code so that it uses the Amazon DynamoDB web service.
In the code, the following line connects to the downloadable version of DynamoDB on your computer:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
Remove the endpoint_url parameter and specify a Region instead, so that the line looks like this:
dynamodb = boto3.resource('dynamodb', region_name='us-west-2')
Instead of using the downloadable version of DynamoDB on your computer, the program now uses the
DynamoDB service in US West (Oregon).
DynamoDB is available in several Regions worldwide. For the complete list, see Regions and Endpoints in
the AWS General Reference. For more information about setting Regions and endpoints in your code, see
the AWS SDK for Python (Boto 3) documentation.
• Create a table called Movies and load sample data in JSON format.
• Perform create, read, update, and delete operations on the table.
• Run simple queries.
As you work through this tutorial, you can refer to the AWS SDK for Ruby API Reference. The DynamoDB
section describes the parameters and results for DynamoDB operations.
Tutorial Prerequisites
• Download and run DynamoDB on your computer. For more information, see Setting Up DynamoDB
Local (Downloadable Version) (p. 43).
Note
You use the downloadable version of DynamoDB in this tutorial. For information about how to
run the same code against the DynamoDB service, see the Summary (p. 210).
• Set up an AWS access key to use the AWS SDKs. For more information, see Setting Up DynamoDB (Web
Service) (p. 48).
• Set up the AWS SDK for Ruby:
• Go to https://www.ruby-lang.org/en/documentation/installation/ and install Ruby.
• Go to https://aws.amazon.com/sdk-for-ruby and install the AWS SDK for Ruby.
For more information, see Installation in the AWS SDK for Ruby API Reference.
1. Copy and paste the following program into a file named MoviesCreateTable.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
params = {
table_name: "Movies",
key_schema: [
{
attribute_name: "year",
key_type: "HASH" #Partition key
},
{
attribute_name: "title",
key_type: "RANGE" #Sort key
}
],
attribute_definitions: [
{
attribute_name: "year",
attribute_type: "N"
},
{
attribute_name: "title",
attribute_type: "S"
},
],
provisioned_throughput: {
read_capacity_units: 10,
write_capacity_units: 10
}
}
begin
result = dynamodb.create_table(params)
puts "Created table. Status: " +
result.table_description.table_status;
rescue Aws::DynamoDB::Errors::ServiceError => error
  puts "Unable to create table:"
  puts "#{error.message}"
end
Note
• You set the endpoint to indicate that you are creating the table in the downloadable
version of DynamoDB on your computer.
• In the create_table call, you specify table name, primary key attributes, and its data
types.
• The provisioned_throughput parameter is required. However, the downloadable
version of DynamoDB ignores it. (Provisioned throughput is beyond the scope of this
exercise.)
2. To run the program, type the following command:
ruby MoviesCreateTable.rb
To learn more about managing tables, see Working with Tables in DynamoDB (p. 333).
Topics
• Step 2.1: Download the Sample Data File (p. 196)
• Step 2.2: Load the Sample Data into the Movies Table (p. 196)
You use a sample data file that contains information about a few thousand movies from the Internet
Movie Database (IMDb). The movie data is in JSON format, as shown in the following example. For each
movie, there is a year, a title, and a JSON map named info.
[
{
"year" : ... ,
"title" : ... ,
"info" : { ... }
},
{
"year" : ...,
"title" : ...,
"info" : { ... }
},
...
• The year and title are used as the primary key attribute values for the Movies table.
• You store the rest of the info values in a single attribute called info. This program illustrates how
you can store JSON in a DynamoDB attribute.
{
"year" : 2013,
"title" : "Turn It Down, Or Else!",
"info" : {
"directors" : [
"Alice Smith",
"Bob Jones"
],
"release_date" : "2013-01-18T00:00:00Z",
"rating" : 6.2,
"genres" : [
"Comedy",
"Drama"
],
"image_url" : "http://ia.media-imdb.com/images/N/
O9ERWAU7FS797AJ7LU8HN09AMUP908RLlo5JF90EWR7LJKQ7@@._V1_SX400_.jpg",
"plot" : "A rock band plays their music at high volumes, annoying the neighbors.",
"rank" : 11,
"running_time_secs" : 5215,
"actors" : [
"David Matthewman",
"Ann Thomas",
"Jonathan G. Neff"
]
}
}
Step 2.2: Load the Sample Data into the Movies Table
After you download the sample data, you can run the following program to populate the Movies table.
1. Copy and paste the following program into a file named MoviesLoadData.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
require "json"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = 'Movies'
file = File.read('moviedata.json')
movies = JSON.parse(file)
movies.each{|movie|
params = {
table_name: table_name,
item: movie
}
begin
dynamodb.put_item(params)
puts "Added movie: #{movie["year"]} #{movie["title"]}"
ruby MoviesLoadData.rb
To learn more about reading and writing data, see Working with Items in DynamoDB (p. 372).
Topics
• Step 3.1: Create a New Item (p. 197)
• Step 3.2: Read an Item (p. 198)
• Step 3.3: Update an Item (p. 199)
• Step 3.4: Increment an Atomic Counter (p. 201)
• Step 3.5: Update an Item (Conditionally) (p. 202)
• Step 3.6: Delete an Item (p. 203)
1. Copy and paste the following program into a file named MoviesItemOps01.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = 'Movies'
year = 2015
title = "The Big New Movie"
item = {
year: year,
title: title,
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
params = {
table_name: table_name,
item: item
}
begin
dynamodb.put_item(params)
puts "Added item: #{year} - #{title}"
Note
The primary key is required. This code adds an item that has primary key (year, title) and
info attributes. The info attribute stores a map that provides more information about the
movie.
2. To run the program, type the following command:
ruby MoviesItemOps01.rb
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
You can use the get_item method to read the item from the Movies table. You must specify the
primary key values, so you can read any item from Movies if you know its year and title.
1. Copy and paste the following program into a file named MoviesItemOps02.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = 'Movies'
year = 2015
title = "The Big New Movie"
params = {
table_name: table_name,
key: {
year: year,
title: title
}
}
begin
result = dynamodb.get_item(params)
printf "%i - %s\n%s\n%d\n",
result.item["year"],
result.item["title"],
result.item["info"]["plot"],
result.item["info"]["rating"]
ruby MoviesItemOps02.rb
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Nothing happens at all.",
rating: 0
}
}
In this step, you update the item shown above so that it looks like the following:
{
year: 2015,
title: "The Big New Movie",
info: {
plot: "Everything happens all at once.",
rating: 5.5,
actors: ["Larry", "Moe", "Curly"]
}
}
1. Copy and paste the following program into a file named MoviesItemOps03.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = 'Movies'
year = 2015
title = "The Big New Movie"
params = {
table_name: table_name,
key: {
year: year,
title: title
},
update_expression: "set info.rating = :r, info.plot=:p, info.actors=:a",
expression_attribute_values: {
":r" => 5.5,
":p" => "Everything happens all at once.", # value
<Hash,Array,String,Numeric,Boolean,IO,Set,nil>
":a" => ["Larry", "Moe", "Curly"]
},
return_values: "UPDATED_NEW"
}
begin
  result = dynamodb.update_item(params)
  puts "Updated item."
rescue Aws::DynamoDB::Errors::ServiceError => error
  puts "Unable to update item:"
  puts "#{error.message}"
end
Note
This program uses update_expression to describe all updates you want to perform on
the specified item.
The return_values parameter instructs DynamoDB to return only the updated attributes
(UPDATED_NEW).
2. To run the program, type the following command:
ruby MoviesItemOps03.rb
The following program shows how to increment the rating for a movie. Each time you run it, the
program increments this attribute by one.
1. Copy and paste the following program into a file named MoviesItemOps04.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = 'Movies'
year = 2015
title = "The Big New Movie"
params = {
table_name: table_name,
key: {
year: year,
title: title
},
update_expression: "set info.rating = info.rating + :val",
expression_attribute_values: {
":val" => 1
},
return_values: "UPDATED_NEW"
}
begin
result = dynamodb.update_item(params)
puts "Updated item. ReturnValues are:"
result.attributes["info"].each do |key, value|
if key == "rating"
puts "#{key}: #{value.to_f}"
else
puts "#{key}: #{value}"
end
end
rescue Aws::DynamoDB::Errors::ServiceError => error
puts "Unable to update item:"
puts "#{error.message}"
end
ruby MoviesItemOps04.rb
The following program shows how to perform a conditional update. If the condition evaluates to true,
the update succeeds; otherwise, the update is not performed. In this case, the item is updated only if
there are more than three actors.
1. Copy and paste the following program into a file named MoviesItemOps05.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = 'Movies'
year = 2015
title = "The Big New Movie"
params = {
table_name: table_name,
key: {
year: year,
title: title
},
update_expression: "remove info.actors[0]",
condition_expression: "size(info.actors) > :num",
expression_attribute_values: {
":num" => 3
},
return_values: "UPDATED_NEW"
}
begin
result = dynamodb.update_item(params)
puts "Updated item. ReturnValues are:"
result.attributes["info"].each do |key, value|
if key == "rating"
puts "#{key}: #{value.to_f}"
else
puts "#{key}: #{value}"
end
end
rescue Aws::DynamoDB::Errors::ServiceError => error
  puts "Unable to update item:"
  puts "#{error.message}"
end
ruby MoviesItemOps05.rb
This is because the movie has three actors in it, but the condition is checking for greater than three
actors.
3. Modify the program so that the ConditionExpression looks like this:
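condition_expression: "size(info.actors) >= :num",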
In the following example, you try to delete a specific movie item if its rating is 5 or less.
1. Copy and paste the following program into a file named MoviesItemOps06.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = 'Movies'
year = 2015
title = "The Big New Movie"
params = {
table_name: table_name,
key: {
year: year,
title: title
},
condition_expression: "info.rating <= :val",
expression_attribute_values: {
":val" => 5
}
}
begin
dynamodb.delete_item(params)
puts "Deleted item."
ruby MoviesItemOps06.rb
This is because the rating for this particular movie is greater than 5.
3. Modify the program to remove the condition:
params = {
table_name: "Movies",
key: {
year: year,
title: title
}
}
4. Run the program. Now, the delete succeeds because you removed the condition.
The primary key for the Movies table is composed of the following:
• year – The partition key. The attribute data type is number.
• title – The sort key. The attribute data type is string.
To find all movies released during a year, you need to specify only the year. You can also provide the
title to retrieve a subset of movies based on some condition (on the sort key); for example, to find
movies released in 2014 that have a title starting with the letter "A".
In addition to query, there is also a scan method that can retrieve all of the table data.
To learn more about querying and scanning data, see Working with Queries (p. 455) and Working with
Scans (p. 473), respectively.
Topics
• Step 4.1: Query - All Movies Released in a Year (p. 205)
• Step 4.2: Query - All Movies Released in a Year with Certain Titles (p. 206)
• Step 4.3: Scan (p. 207)
1. Copy and paste the following program into a file named MoviesQuery01.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = "Movies"
params = {
  table_name: table_name,
  key_condition_expression: "#yr = :yyyy",
  expression_attribute_names: {
    "#yr" => "year"
  },
  expression_attribute_values: {
    ":yyyy" => 1985
  }
}
puts "Querying for movies from 1985."
begin
  result = dynamodb.query(params)
  puts "Query succeeded."
  result.items.each{|movie|
    puts "#{movie["year"].to_i} #{movie["title"]}"
  }
rescue Aws::DynamoDB::Errors::ServiceError => error
  puts "Unable to query table:"
  puts "#{error.message}"
end
2. To run the program, type the following command:
ruby MoviesQuery01.rb
Note
The preceding program shows how to query a table by its primary key attributes. In DynamoDB,
you can optionally create one or more secondary indexes on a table, and query those indexes
in the same way that you query a table. Secondary indexes give your applications additional
flexibility by allowing queries on non-key attributes. For more information, see Improving Data
Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesQuery02.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = "Movies"
params = {
table_name: table_name,
projection_expression: "#yr, title, info.genres, info.actors[0]",
key_condition_expression:
"#yr = :yyyy and title between :letter1 and :letter2",
expression_attribute_names: {
"#yr" => "year"
},
expression_attribute_values: {
":yyyy" => 1992,
":letter1" => "A",
":letter2" => "L"
}
}
puts "Querying for movies from 1992 - titles A-L, with genres and lead actor";
begin
result = dynamodb.query(params)
puts "Query succeeded."
result.items.each{|movie|
print "#{movie["year"].to_i}: #{movie["title"]} ... "
movie['info']['genres'].each{|gen|
print gen + " "
}
print "... #{movie["info"]["actors"][0]}\n"
}
rescue Aws::DynamoDB::Errors::ServiceError => error
  puts "Unable to query table:"
  puts "#{error.message}"
end
ruby MoviesQuery02.rb
The following program scans the Movies table, which contains approximately 5,000 items. The scan
specifies the optional filter to retrieve only the movies from the 1950s (approximately 100 items), and
discard all the others.
1. Copy and paste the following program into a file named MoviesScan.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
table_name = "Movies"
params = {
table_name: table_name,
projection_expression: "#yr, title, info.rating",
filter_expression: "#yr between :start_yr and :end_yr",
expression_attribute_names: {"#yr"=> "year"},
expression_attribute_values: {
":start_yr" => 1950,
":end_yr" => 1959
}
}
begin
loop do
result = dynamodb.scan(params)
result.items.each{|movie|
puts "#{movie["year"].to_i}: " +
"#{movie["title"]} ... " +
"#{movie["info"]["rating"].to_f}"
}
break if result.last_evaluated_key.nil?
puts "Scanning for more..."
params[:exclusive_start_key] = result.last_evaluated_key
end
rescue Aws::DynamoDB::Errors::ServiceError => error
  puts "Unable to scan:"
  puts "#{error.message}"
end
ruby MoviesScan.rb
Note
You can also use the scan method with any secondary indexes that you create on the table. For
more information, see Improving Data Access with Secondary Indexes (p. 493).
1. Copy and paste the following program into a file named MoviesDeleteTable.rb:
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
require "aws-sdk"
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
dynamodb = Aws::DynamoDB::Client.new
params = {
table_name: "Movies"
}
begin
dynamodb.delete_table(params)
puts "Deleted table."
ruby MoviesDeleteTable.rb
Summary
In this tutorial, you created the Movies table in the downloadable version of DynamoDB on your
computer and performed basic operations. The downloadable version of DynamoDB is useful during
application development and testing. However, when you're ready to run your application in a
production environment, you must modify your code so that it uses the Amazon DynamoDB web service.
Aws.config.update({
region: "us-west-2",
endpoint: "http://localhost:8000"
})
Remove the endpoint parameter so that the code looks like this:
Aws.config.update({
region: "us-west-2"
})
After you remove this line, the code can access the DynamoDB service in the Region specified by the
region config value.
Instead of using the downloadable version of DynamoDB on your computer, the program now uses the
DynamoDB service endpoint in US West (Oregon).
DynamoDB is available in several Regions worldwide. For the complete list, see Regions and Endpoints in
the AWS General Reference. For more information, see the AWS SDK for Ruby Getting Started Guide.
Topics
• Overview of AWS SDK Support for DynamoDB (p. 211)
• Programmatic Interfaces (p. 213)
• DynamoDB Low-Level API (p. 216)
• Error Handling (p. 220)
• Higher-Level Programming Interfaces for DynamoDB (p. 225)
• Running the Code Examples In This Developer Guide (p. 322)
1. You write an application using an AWS SDK for your programming language.
2. Each AWS SDK provides one or more programmatic interfaces for working with DynamoDB. The
specific interfaces available depend on which programming language and AWS SDK you use.
3. The AWS SDK constructs HTTP(S) requests for use with the low-level DynamoDB API.
4. The AWS SDK sends the request to the DynamoDB endpoint.
5. DynamoDB executes the request. If the request is successful, DynamoDB returns an HTTP 200
response code (OK). If the request is unsuccessful, DynamoDB returns an HTTP error code and an error
message.
6. The AWS SDK processes the response and propagates it back to your application.
Each of the AWS SDKs provides important services to your application, including the following:
• Formatting HTTP(S) requests and serializing request parameters.
• Generating a cryptographic signature for each request.
• Forwarding requests to a DynamoDB endpoint and receiving responses from DynamoDB.
• Extracting the results from those responses.
• Implementing basic retry logic in case of errors.
Programmatic Interfaces
Every AWS SDK provides one or more programmatic interfaces for working with DynamoDB. These
interfaces range from simple low-level DynamoDB wrappers to object-oriented persistence layers. The
available interfaces vary depending on the AWS SDK and programming language that you use.
The following section highlights some of the interfaces available, using the AWS SDK for Java as an
example. (Not all interfaces are available in all AWS SDKs.)
Topics
• Low-Level Interfaces (p. 213)
• Document Interfaces (p. 214)
• Object Persistence Interface (p. 215)
Low-Level Interfaces
Every language-specific AWS SDK provides a low-level interface for DynamoDB, with methods that
closely resemble low-level DynamoDB API requests.
In some cases, you will need to identify the data types of the attributes using Data Type
Descriptors (p. 219), such as S for string or N for number.
Note
A low-level interface is available in every language-specific AWS SDK.
The following Java program uses the low-level interface of the AWS SDK for Java. The program issues a
GetItem request for a song in the Music table, and prints the year that the song was released.
package com.amazonaws.codesamples;
import java.util.HashMap;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;
public class GetItemLowLevel {

    public static void main(String[] args) {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        // Sample key values; any song in the Music table works here.
        HashMap<String, AttributeValue> key = new HashMap<>();
        key.put("Artist", new AttributeValue().withS("No One You Know"));
        key.put("SongTitle", new AttributeValue().withS("Call Me Today"));

        GetItemRequest request = new GetItemRequest()
            .withTableName("Music")
            .withKey(key);

        try {
            GetItemResult result = client.getItem(request);
            if (result.getItem() != null) {
                AttributeValue year = result.getItem().get("Year");
                System.out.println("The song was released in " + year.getN());
            } else {
                System.out.println("No matching song was found");
            }
        } catch (Exception e) {
            System.err.println("Unable to retrieve data: ");
            System.err.println(e.getMessage());
        }
    }
}
Document Interfaces
Many AWS SDKs provide a document interface, allowing you to perform data plane operations (create,
read, update, delete) on tables and indexes. With a document interface, you do not need to specify Data
Type Descriptors (p. 219); the data types are implied by the semantics of the data itself. These AWS
SDKs also provide methods to easily convert JSON documents to and from native DynamoDB data types.
Note
Document interfaces are available in the AWS SDKs for Java, .NET, Node.js, and JavaScript in
the browser.
The following Java program uses the document interface of the AWS SDK for Java. The program creates
a Table object that represents the Music table, and then asks that object to use GetItem to retrieve a
song. The program then prints the year that the song was released.
package com.amazonaws.codesamples.gsg;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.GetItemOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
public class GetSongDocAPI {

    public static void main(String[] args) {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
        DynamoDB dynamoDB = new DynamoDB(client);

        Table table = dynamoDB.getTable("Music");

        // Sample key values; any song in the Music table works here.
        GetItemOutcome outcome = table.getItemOutcome(
            "Artist", "No One You Know",
            "SongTitle", "Call Me Today");

        int year = outcome.getItem().getInt("Year");
        System.out.println("The song was released in " + year);
    }
}
The following Java program uses DynamoDBMapper, the object persistence interface of the AWS SDK for
Java. The MusicItem class represents an item in the Music table.
package com.amazonaws.codesamples;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
@DynamoDBTable(tableName="Music")
public class MusicItem {
private String artist;
private String songTitle;
private String albumTitle;
private int year;
@DynamoDBHashKey(attributeName="Artist")
public String getArtist() { return artist;}
public void setArtist(String artist) {this.artist = artist;}
@DynamoDBRangeKey(attributeName="SongTitle")
public String getSongTitle() { return songTitle;}
public void setSongTitle(String songTitle) {this.songTitle = songTitle;}
@DynamoDBAttribute(attributeName = "AlbumTitle")
public String getAlbumTitle() { return albumTitle;}
public void setAlbumTitle(String albumTitle) {this.albumTitle = albumTitle;}
@DynamoDBAttribute(attributeName = "Year")
public int getYear() { return year; }
public void setYear(int year) { this.year = year; }
}
You can then instantiate a MusicItem object, and retrieve a song using the load() method of
DynamoDBMapper. The program then prints the year that the song was released.
package com.amazonaws.codesamples;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
public class RetrieveSong {

    public static void main(String[] args) {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
        DynamoDBMapper mapper = new DynamoDBMapper(client);

        // Sample key values; any song in the Music table works here.
        MusicItem keySchema = new MusicItem();
        keySchema.setArtist("No One You Know");
        keySchema.setSongTitle("Call Me Today");

        try {
            MusicItem result = mapper.load(keySchema);
            if (result != null) {
                System.out.println(
                    "The song was released in " + result.getYear());
            } else {
                System.out.println("No matching song was found");
            }
        } catch (Exception e) {
            System.err.println("Unable to retrieve data: ");
            System.err.println(e.getMessage());
        }
    }
}
The DynamoDB low-level API is the protocol-level interface for Amazon DynamoDB. At this level, every
HTTP(S) request must be correctly formatted and carry a valid digital signature.
The AWS SDKs construct low-level DynamoDB API requests on your behalf and process the responses
from DynamoDB. This lets you focus on your application logic, instead of low-level details. However, you
can still benefit from a basic knowledge of how the low-level DynamoDB API works.
For more information about the low-level DynamoDB API, see Amazon DynamoDB API Reference.
Note
DynamoDB Streams has its own low-level API, which is separate from that of DynamoDB and is
fully supported by the AWS SDKs.
For more information, see Capturing Table Activity with DynamoDB Streams (p. 566). For the
low-level DynamoDB Streams API, see the Amazon DynamoDB Streams API Reference.
The low-level DynamoDB API uses JavaScript Object Notation (JSON) as a wire protocol format. JSON
presents data in a hierarchy, so that both data values and data structure are conveyed simultaneously.
Name-value pairs are defined in the format name:value. The data hierarchy is defined by nested
brackets of name-value pairs.
DynamoDB uses JSON only as a transport protocol, not as a storage format. The AWS SDKs use JSON
to send data to DynamoDB, and DynamoDB responds with JSON, but DynamoDB does not store data
persistently in JSON format.
Note
For more information about JSON, see Introducing JSON at the JSON.org website.
Request Format
The DynamoDB low-level API accepts HTTP(S) POST requests as input. The AWS SDKs construct these
requests for you.
Suppose that you have a table named Pets, with a key schema consisting of AnimalType (partition key)
and Name (sort key). Both of these attributes are of type string. To retrieve an item from Pets, the AWS
SDK constructs a request as shown following:
POST / HTTP/1.1
Host: dynamodb.<region>.<domain>;
Accept-Encoding: identity
Content-Length: <PayloadSizeBytes>
User-Agent: <UserAgentString>
Content-Type: application/x-amz-json-1.0
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=<Headers>,
Signature=<Signature>
X-Amz-Date: <Date>
X-Amz-Target: DynamoDB_20120810.GetItem
{
"TableName": "Pets",
"Key": {
"AnimalType": {"S": "Dog"},
"Name": {"S": "Fido"}
}
}
• The Authorization header contains information required for DynamoDB to authenticate the
request. For more information, see Signing AWS API Requests and Signature Version 4 Signing Process
in the Amazon Web Services General Reference.
• The X-Amz-Target header contains the name of a DynamoDB operation: GetItem. (This is also
accompanied by the low-level API version, in this case 20120810.)
• The payload (body) of the request contains the parameters for the operation, in JSON format. For the
GetItem operation, the parameters are TableName and Key.
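For comparison, the following is a sketch of how the same GetItem request might be issued through the
AWS SDK for Python (Boto 3). The SDK serializes these parameters into a signed HTTP POST like the one
shown above; the endpoint and credentials are assumed to come from your environment:
import boto3

client = boto3.client('dynamodb')

response = client.get_item(
    TableName='Pets',
    Key={
        'AnimalType': {'S': 'Dog'},
        'Name': {'S': 'Fido'}
    }
)
print(response['Item'])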
Response Format
Upon receipt of the request, DynamoDB processes it and returns a response. For the request shown
above, the HTTP(S) response payload contains the results from the operation, as in this example:
HTTP/1.1 200 OK
x-amzn-RequestId: <RequestId>
x-amz-crc32: <Checksum>
Content-Type: application/x-amz-json-1.0
Content-Length: <PayloadSizeBytes>
Date: <Date>
{
    "Item": {
        "Age": {"N": "8"},
        "Colors": {
            "L": [
                {"S": "White"},
                {"S": "Brown"},
                {"S": "Black"}
            ]
        },
        "Name": {"S": "Fido"},
        "Vaccinations": {
            "M": {
                "Rabies": {
                    "L": [
                        {"S": "2009-03-17"},
                        {"S": "2011-09-21"},
                        {"S": "2012-11-08"}
                    ]
                },
                "Distemper": {"S": "2015-10-13"}
            }
        },
        "Breed": {"S": "Beagle"},
        "AnimalType": {"S": "Dog"}
    }
}
At this point, the AWS SDK returns the response data to your application for further processing.
Note
If DynamoDB cannot process a request, it returns an HTTP error code and message. The AWS
SDK propagates these to your application, in the form of exceptions. For more information, see
Error Handling (p. 220).
The examples in Request Format (p. 218) and Response Format (p. 218) show examples of how
data type descriptors are used. The GetItem request specifies S for the Pets key schema attributes
(AnimalType and Name), which are of type string. The GetItem response contains a Pets item with
attributes of type string (S), number (N), map (M), and list (L).
• S – String
• N – Number
• B – Binary
• BOOL – Boolean
• NULL – Null
• M – Map
• L – List
• SS – String Set
• NS – Number Set
• BS – Binary Set
Note
For detailed descriptions of DynamoDB data types, see Data Types (p. 12).
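As an illustration, the following hypothetical item shows how several of these descriptors appear in the
low-level wire format (the attribute names and values here are invented for this example):
item = {
    "Name": {"S": "Fido"},                                     # string
    "Age": {"N": "8"},                                         # number (transferred as a string)
    "IsVaccinated": {"BOOL": True},                            # Boolean
    "Colors": {"L": [{"S": "White"}, {"S": "Brown"}]},         # list
    "Nicknames": {"SS": ["Fi", "Fido"]},                       # string set
    "Vaccinations": {"M": {"Distemper": {"S": "2015-10-13"}}}  # map
}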
Numeric Data
Different programming languages offer different levels of support for JSON. In some cases, you might
decide to use a third party library for validating and parsing JSON documents.
Some third party libraries build upon the JSON number type, providing their own types such as int,
long or double. However, the native number data type in DynamoDB does not map exactly to these
other data types, so these type distinctions can cause conflicts. In addition, many JSON libraries do
not handle fixed-precision numeric values, and they automatically infer a double data type for digit
sequences that contain a decimal point.
To solve these problems, DynamoDB provides a single numeric type with no data loss. To avoid
unwanted implicit conversions to a double value, DynamoDB uses strings for the data transfer of numeric
values. This approach provides flexibility for updating attribute values while maintaining proper sorting
semantics, such as putting the values "01", "2", and "03" in the proper sequence.
If number precision is important to your application, you should convert numeric values to strings before
you pass them to DynamoDB.
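As an illustration, the following sketch uses the low-level AttributeValue class from the AWS SDK for Java; the Weight attribute is an assumption added to the Pets example, not part of the original table.
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
item.put("AnimalType", new AttributeValue().withS("Dog"));
item.put("Name", new AttributeValue().withS("Fido"));
// The N descriptor always carries the number as a string on the wire.
item.put("Weight", new AttributeValue().withN("15.75"));
client.putItem(new PutItemRequest("Pets", item));

Map<String, AttributeValue> key = new HashMap<String, AttributeValue>();
key.put("AnimalType", new AttributeValue().withS("Dog"));
key.put("Name", new AttributeValue().withS("Fido"));
// Parse with BigDecimal rather than double to preserve full precision.
BigDecimal weight = new BigDecimal(
    client.getItem("Pets", key).getItem().get("Weight").getN());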
Binary Data
DynamoDB supports binary attributes. However, JSON does not natively support encoding binary
data. To send binary data in a request, you must encode it in Base64 format. Upon receiving the
request, DynamoDB decodes the Base64 data back to binary.
The Base64 encoding scheme used by DynamoDB is described at RFC 4648 at the Internet Engineering
Task Force (IETF) website.
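The AWS SDKs perform this encoding for you when you supply binary values as a ByteBuffer. If you construct requests yourself, the encoding step might look like the following sketch (java.util.Base64 is part of the standard library in Java 8 and later; the payload is illustrative):
byte[] payload = "example binary payload".getBytes(StandardCharsets.UTF_8);
// Value to place under the "B" data type descriptor in the request JSON:
String encoded = Base64.getEncoder().encodeToString(payload);
// Responses carry binary values the same way; decode them on receipt:
byte[] decoded = Base64.getDecoder().decode(encoded);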
Error Handling
This section describes runtime errors and how to handle them. It also describes error messages and codes
that are specific to DynamoDB.
Topics
• Error Components (p. 220)
• Error Messages and Codes (p. 221)
• Error Handling in Your Application (p. 224)
• Error Retries and Exponential Backoff (p. 224)
• Batch Operations and Error Handling (p. 225)
Error Components
When your program sends a request, DynamoDB attempts to process it. If the request is successful,
DynamoDB returns an HTTP success status code (200 OK), along with the results from the requested
operation.
If the request is unsuccessful, DynamoDB returns an error. Each error has three components:
• An HTTP status code (such as 400).
• An exception name (such as ResourceNotFoundException).
• An error message (such as Requested resource not found: Table: tablename not found).
The AWS SDKs take care of propagating errors to your application, so that you can take
appropriate action. For example, in a Java program, you can write try-catch logic to handle a
ResourceNotFoundException.
If you are not using an AWS SDK, you will need to parse the content of the low-level response from
DynamoDB. The following is an example of such a response:
{"__type":"com.amazonaws.dynamodb.v20120810#ResourceNotFoundException",
"message":"Requested resource not found: Table: tablename not found"}
Error Messages and Codes
The following is a list of exceptions that DynamoDB returns, along with whether it is safe to retry the
request that caused each one.
AccessDeniedException
The client did not correctly sign the request. If you are using an AWS SDK, requests are signed for
you automatically; otherwise, go to the Signature Version 4 Signing Process in the AWS General
Reference.
OK to retry? No
ConditionalCheckFailedException
You specified a condition that evaluated to false. For example, you might have tried to perform a
conditional update on an item, but the actual value of the attribute did not match the expected
value in the condition.
OK to retry? No
IncompleteSignatureException
The request signature did not include all of the required components. If you are using an AWS SDK,
requests are signed for you automatically; otherwise, go to the Signature Version 4 Signing Process
in the AWS General Reference.
OK to retry? No
ItemCollectionSizeLimitExceededException
For a table with a local secondary index, a group of items with the same partition key value has
exceeded the maximum size limit of 10 GB. For more information on item collections, see Item
Collections (p. 539).
OK to retry? Yes
LimitExceededException
There are too many concurrent control plane operations. The cumulative number of tables and
indexes in the CREATING, DELETING, or UPDATING state cannot exceed 50.
OK to retry? Yes
MissingAuthenticationTokenException
Message: Request must contain a valid (registered) AWS Access Key ID.
The request did not include the required authorization header, or it was malformed. See DynamoDB
Low-Level API (p. 216).
OK to retry? No
ProvisionedThroughputExceededException
Message: You exceeded your maximum allowed provisioned throughput for a table or for one or more
global secondary indexes. To view performance metrics for provisioned throughput vs. consumed
throughput, open the Amazon CloudWatch console.
Example: Your request rate is too high. The AWS SDKs for DynamoDB automatically retry requests
that receive this exception. Your request is eventually successful, unless your retry queue is too large
to finish. Reduce the frequency of requests, using Error Retries and Exponential Backoff (p. 224).
OK to retry? Yes
RequestLimitExceeded
Message: Throughput exceeds the current throughput limit for your account. To request a limit
increase, contact AWS Support at https://aws.amazon.com/support.
OK to retry? Yes
ResourceInUseException
Example: You tried to re-create an existing table, or delete a table currently in the CREATING state.
OK to retry? No
ResourceNotFoundException
Example: The table that is being requested does not exist, or is too early in the CREATING state.
OK to retry? No
ThrottlingException
This exception might be returned if you perform any of the following operations too rapidly:
CreateTable, UpdateTable, DeleteTable.
OK to retry? Yes
UnrecognizedClientException
The request signature is incorrect. The most likely cause is an invalid AWS access key ID or secret key.
OK to retry? Yes
ValidationException
This error can occur for several reasons, such as a required parameter that is missing, a value that is
out of range, or mismatched data types. The error message contains details about the specific part
of the request that caused the error.
OK to retry? No
InternalServerError (HTTP 500)
DynamoDB could not process your request.
OK to retry? Yes
Note
You might encounter internal server errors while working with items. These are expected
during the lifetime of a table. Any failed requests can be retried immediately.
ServiceUnavailable (HTTP 503)
DynamoDB is currently unavailable. (This should be a temporary state.)
OK to retry? Yes
Error Handling in Your Application
The AWS SDKs perform their own retries and error checking. If you encounter an error while using one of
the AWS SDKs, the error code and description can help you troubleshoot it.
You should also see a Request ID in the response. The Request ID can be helpful if you need to work
with AWS Support to diagnose an issue.
The following Java code snippet attempts to read an item from a DynamoDB table and performs
rudimentary error handling. (In this case, it simply informs the user that the request failed.)
try {
    Item item = table.getItem("year", 1978, "title", "Superman");
    if (item != null) {
        System.out.println("Result: " + item);
    } else {
        //No such item exists in the table
        System.out.println("Item not found");
    }
} catch (AmazonServiceException ase) {
    System.err.println("Could not complete operation: " + ase.getMessage());
} catch (AmazonClientException ace) {
    System.err.println("Error communicating with DynamoDB: " + ace.getMessage());
}
In this code snippet, the try-catch construct handles two different kinds of exceptions:
• AmazonServiceException – Thrown if the client request was correctly transmitted to DynamoDB, but
DynamoDB could not process the request and returned an error response instead.
• AmazonClientException – Thrown if the client could not get a response from DynamoDB, or if the
client could not parse the response.
Error Retries and Exponential Backoff
Each AWS SDK implements retry logic automatically. You can modify the retry parameters to your
needs. For example, consider a Java application that requires a fail-fast strategy, with no retries allowed
in case of an error. With the AWS SDK for Java, you could use the ClientConfiguration class and
provide a maxErrorRetry value of 0 to turn off the retries. For more information, see the AWS SDK
documentation for your programming language.
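For example, a fail-fast client might be configured as in the following sketch (AWS SDK for Java v1):
ClientConfiguration config = new ClientConfiguration();
config.setMaxErrorRetry(0); // fail fast: disable the SDK's automatic retries

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withClientConfiguration(config)
    .build();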
If you're not using an AWS SDK, you should retry original requests that receive server
errors (5xx). However, client errors (4xx, other than a ThrottlingException or a
ProvisionedThroughputExceededException) indicate that you need to revise the request itself to
correct the problem before trying again.
In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better
flow control. The concept behind exponential backoff is to use progressively longer waits between retries
for consecutive error responses. For example, up to 50 milliseconds before the first retry, up to 100
milliseconds before the second, up to 200 milliseconds before the third, and so on. However, after a minute,
if the request has not succeeded, the problem might be the request size exceeding your provisioned
throughput, and not the request rate. Set the maximum number of retries to stop around one minute. If
the request is not successful, investigate your provisioned throughput options.
Note
The AWS SDKs implement automatic retry logic and exponential backoff.
Most exponential backoff algorithms use jitter (randomized delay) to prevent successive collisions.
Because you aren't trying to avoid such collisions in these cases, you do not need to use this random
number. However, if you use concurrent clients, jitter can help your requests succeed faster. For more
information, see the blog post for Exponential Backoff and Jitter.
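If you do implement your own retries for concurrent clients, a minimal sketch of backoff with "full jitter" might look like the following; doRequest() and the delay constants are placeholders for this example, not part of any AWS API.
import java.util.concurrent.ThreadLocalRandom;

// doRequest() is a hypothetical method that returns true on success and
// false when the request failed with a retryable (5xx) error.
static void callWithBackoff() throws InterruptedException {
    long baseDelayMs = 50;    // cap before the first retry
    long maxDelayMs = 60_000; // give up after roughly one minute
    for (int attempt = 0; (baseDelayMs << attempt) <= maxDelayMs; attempt++) {
        if (doRequest()) {
            return; // success
        }
        long cap = baseDelayMs << attempt; // 50, 100, 200, ... milliseconds
        // Sleep a random duration up to the cap to spread out concurrent clients.
        Thread.sleep(ThreadLocalRandom.current().nextLong(cap + 1));
    }
}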
Batch Operations and Error Handling
A batch operation can tolerate the failure of individual requests in the batch. For example, consider a
BatchGetItem request to read five items. Even if some of the underlying GetItem requests fail, this
does not cause the entire BatchGetItem operation to fail. On the other hand, if all five read
operations fail, then the entire BatchGetItem fails.
The batch operations return information about individual requests that fail, so that you can diagnose
the problem and retry the operation. For BatchGetItem, the tables and primary keys in question are
returned in the UnprocessedKeys parameter of the request. For BatchWriteItem, similar information
is returned in UnprocessedItems.
The most likely cause of a failed read or a failed write is throttling. For BatchGetItem, one or more
of the tables in the batch request does not have enough provisioned read capacity to support the
operation. For BatchWriteItem, one or more of the tables does not have enough provisioned write
capacity.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items.
However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch
operation immediately, the underlying read or write requests can still fail due to throttling on the
individual tables. If you delay the batch operation using exponential backoff, the individual requests in
the batch are much more likely to succeed.
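A sketch of such a retry loop for BatchWriteItem follows; buildRequestItems() is a hypothetical helper that assembles the initial request map, the backoff constants are illustrative, and the code is assumed to run in a method that declares throws InterruptedException.
Map<String, List<WriteRequest>> requestItems = buildRequestItems(); // hypothetical helper
int attempt = 0;
while (!requestItems.isEmpty()) {
    if (attempt > 0) {
        long cap = Math.min(60_000L, 50L << attempt);
        // Exponential backoff with jitter before retrying the leftovers.
        Thread.sleep(ThreadLocalRandom.current().nextLong(cap + 1));
    }
    BatchWriteItemResult result = client.batchWriteItem(
        new BatchWriteItemRequest().withRequestItems(requestItems));
    requestItems = result.getUnprocessedItems(); // retry only what was not processed
    attempt++;
}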
Higher-Level Programming Interfaces for DynamoDB
Many developers experience a sense of disconnect, or "impedance mismatch", when they need to map
complex data types to items in a database table. With a low-level database interface, developers must
write methods for reading or writing object data to database tables, and vice-versa. The amount of extra
code required for each combination of object type and database table can seem overwhelming.
To simplify development, the AWS SDKs for Java and .NET provide additional interfaces with higher
levels of abstraction. The higher-level interfaces for DynamoDB let you define the relationships between
objects in your program and the database tables that store those objects' data. After you define this
mapping, you call simple object methods such as save, load, or delete, and the underlying low-level
DynamoDB operations are automatically invoked on your behalf. This allows you to write object-centric
code, rather than database-centric code.
The higher-level programming interfaces for DynamoDB are available in the AWS SDKs for Java
and .NET.
Java: DynamoDBMapper
Topics
• Supported Data Types (p. 228)
• Java Annotations for DynamoDB (p. 229)
• The DynamoDBMapper Class (p. 234)
• Optional Configuration Settings for DynamoDBMapper (p. 241)
• Example: CRUD Operations (p. 242)
• Example: Batch Write Operations (p. 244)
• Example: Query and Scan (p. 251)
• Example: Transaction Operations (p. 260)
• Optimistic Locking With Version Number (p. 267)
• Mapping Arbitrary Data (p. 269)
The AWS SDK for Java provides a DynamoDBMapper class, allowing you to map your client-side classes to
DynamoDB tables. To use DynamoDBMapper, you define the relationship between items in a DynamoDB
table and their corresponding object instances in your code. The DynamoDBMapper class enables you
to access your tables, perform various create, read, update and delete (CRUD) operations, and execute
queries.
Note
The DynamoDBMapper class does not allow you to create, update, or delete tables. To perform
those tasks, use the low-level SDK for Java interface instead. For more information, see Working
with Tables: Java (p. 360).
The SDK for Java provides a set of annotation types, so that you can map your classes to tables. For
example, consider a ProductCatalog table that has Id as the partition key.
ProductCatalog(Id, ...)
You can map a class in your client application to the ProductCatalog table as shown in the following
Java code. This code snippet defines a plain old Java object (POJO) named CatalogItem, which uses
annotations to map object fields to DynamoDB attribute names:
Example
package com.amazonaws.codesamples;
import java.util.Set;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBIgnore;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
@DynamoDBTable(tableName="ProductCatalog")
public class CatalogItem {
    private Integer id;
    private String title;
    private String ISBN;
    private Set<String> bookAuthors;
    private String someProp;
@DynamoDBHashKey(attributeName="Id")
public Integer getId() { return id; }
public void setId(Integer id) {this.id = id; }
@DynamoDBAttribute(attributeName="Title")
public String getTitle() {return title; }
public void setTitle(String title) { this.title = title; }
@DynamoDBAttribute(attributeName="ISBN")
public String getISBN() { return ISBN; }
public void setISBN(String ISBN) { this.ISBN = ISBN; }
@DynamoDBAttribute(attributeName="Authors")
public Set<String> getBookAuthors() { return bookAuthors; }
public void setBookAuthors(Set<String> bookAuthors) { this.bookAuthors = bookAuthors; }
@DynamoDBIgnore
public String getSomeProp() { return someProp; }
public void setSomeProp(String someProp) { this.someProp = someProp; }
}
In the preceding code, the @DynamoDBTable annotation maps the CatalogItem class to the
ProductCatalog table. You can store individual class instances as items in the table. In the class
definition, the @DynamoDBHashKey annotation maps the Id property to the primary key.
By default, the class properties map to table attributes of the same name; here, the Title and
ISBN properties map to the Title and ISBN attributes in the table.
The @DynamoDBAttribute annotation is optional when the name of the DynamoDB attribute
matches the name of the property declared in the class. When they differ, use this annotation with
the attributeName() parameter to specify which DynamoDB attribute this property corresponds to. In
the preceding example, the @DynamoDBAttribute annotation is added to each property to ensure
that the property names match exactly with the tables created in Creating Tables and Loading Sample
Data (p. 323), and to be consistent with the attribute names used in other code examples in this guide.
Your class definition can have properties that don't map to any attributes in the table. You identify these
properties by adding the @DynamoDBIgnore annotation. In the preceding example, the SomeProp
property is marked with the @DynamoDBIgnore annotation. When you upload a CatalogItem instance
to the table, your DynamoDBMapper instance does not include the SomeProp property. In addition, the
mapper does not return this attribute when you retrieve an item from the table.
After you have defined your mapping class, you can use DynamoDBMapper methods to write an instance
of that class to a corresponding item in the ProductCatalog table. The following code snippet demonstrates this
technique:
mapper.save(item);
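For context, a complete save might look like the following sketch; the client construction and the attribute values are illustrative.
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDBMapper mapper = new DynamoDBMapper(client);

CatalogItem item = new CatalogItem();
item.setId(102);
item.setTitle("Book 102 Title");
item.setISBN("222-2222222222");
item.setBookAuthors(new HashSet<String>(Arrays.asList("Author 1", "Author 2")));

mapper.save(item); // writes one item to the ProductCatalog table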
The following code snippet shows how to retrieve the item and access some of its attributes:
CatalogItem partitionKey = new CatalogItem();
partitionKey.setId(102);
DynamoDBQueryExpression<CatalogItem> queryExpression = new DynamoDBQueryExpression<CatalogItem>()
    .withHashKeyValues(partitionKey);
List<CatalogItem> itemList = mapper.query(CatalogItem.class, queryExpression);
System.out.println(itemList.get(0).getTitle());
DynamoDBMapper offers an intuitive, natural way of working with DynamoDB data within Java. It also
provides a number of built-in features such as optimistic locking, ACID transactions, auto-generated
partition key and sort key values, and object versioning.
Supported Data Types
DynamoDB supports the following primitive Java data types and primitive wrapper classes.
• String
• Boolean, boolean
• Byte, byte
• Date (as ISO_8601 millisecond-precision string, shifted to UTC)
• Calendar (as ISO_8601 millisecond-precision string, shifted to UTC)
• Long, long
• Integer, int
• Double, double
• Float, float
• BigDecimal
• BigInteger
Note
For more information on the DynamoDB naming rules and the various supported data types see
Naming Rules and Data Types (p. 11).
DynamoDB supports the Java Set, List, and Map collection types.
The following list summarizes how the preceding Java types map to the DynamoDB types:
• All number types – N (number type)
• Strings – S (string type)
• Boolean – BOOL (Boolean type)
• ByteBuffer – B (binary type)
• Date and Calendar – S (string type), stored as ISO-8601 formatted strings
• Set collection types – SS (string set), NS (number set), or BS (binary set)
The DynamoDBTypeConverter interface lets you map your own arbitrary data types to a data type that
is natively supported by DynamoDB. For more information, see Mapping Arbitrary Data (p. 269).
Java Annotations for DynamoDB
This section describes the annotations that are available for mapping your classes and properties to
DynamoDB tables and attributes. For the corresponding Javadoc documentation, see Annotation Types
Summary in the AWS SDK for Java API Reference.
Note
In the following annotations, only DynamoDBTable and DynamoDBHashKey are required.
Topics
• DynamoDBAttribute (p. 230)
• DynamoDBAutoGeneratedKey (p. 230)
• DynamoDBDocument (p. 230)
• DynamoDBHashKey (p. 231)
• DynamoDBIgnore (p. 232)
• DynamoDBIndexHashKey (p. 232)
• DynamoDBIndexRangeKey (p. 232)
• DynamoDBRangeKey (p. 232)
• DynamoDBTable (p. 233)
• DynamoDBTypeConverted (p. 233)
• DynamoDBTyped (p. 233)
DynamoDBAttribute
Maps a property to a table attribute. By default, each class property maps to an item attribute with the
same name. However, if the names are not the same, you can use this annotation to map a property to
the attribute. In the following Java snippet, the DynamoDBAttribute maps the BookAuthors property
to the Authors attribute name in the table.
@DynamoDBAttribute(attributeName = "Authors")
public List<String> getBookAuthors() { return BookAuthors; }
public void setBookAuthors(List<String> BookAuthors) { this.BookAuthors = BookAuthors; }
The DynamoDBMapper uses Authors as the attribute name when saving the object to the table.
DynamoDBAutoGeneratedKey
Marks a partition key or sort key property as being auto-generated. DynamoDBMapper will generate a
random UUID when saving these attributes. Only String properties can be marked as auto-generated
keys.
@DynamoDBTable(tableName="AutoGeneratedKeysExample")
public class AutoGeneratedKeys {
private String id;
private String payload;
@DynamoDBHashKey(attributeName = "Id")
@DynamoDBAutoGeneratedKey
public String getId() { return id; }
public void setId(String id) { this.id = id; }
@DynamoDBAttribute(attributeName="payload")
public String getPayload() { return this.payload; }
public void setPayload(String payload) { this.payload = payload; }
}
DynamoDBDocument
Indicates that a class can be serialized as a DynamoDB document.
For example, suppose you wanted to map a JSON document to a DynamoDB attribute of type Map (M).
The following code snippet defines an item containing a nested attribute (Pictures) of type Map.
@DynamoDBTable(tableName="ProductCatalog")
public class ProductCatalogItem {
    private Integer id;
    private Pictures pictures;

    @DynamoDBHashKey(attributeName="Id")
public Integer getId() { return id;}
public void setId(Integer id) {this.id = id;}
@DynamoDBAttribute(attributeName="Pictures")
public Pictures getPictures() { return pictures;}
public void setPictures(Pictures pictures) {this.pictures = pictures;}
@DynamoDBDocument
public static class Pictures {
private String frontView;
private String rearView;
private String sideView;
@DynamoDBAttribute(attributeName = "FrontView")
public String getFrontView() { return frontView; }
public void setFrontView(String frontView) { this.frontView = frontView; }
@DynamoDBAttribute(attributeName = "RearView")
public String getRearView() { return rearView; }
public void setRearView(String rearView) { this.rearView = rearView; }
@DynamoDBAttribute(attributeName = "SideView")
public String getSideView() { return sideView; }
public void setSideView(String sideView) { this.sideView = sideView; }
}
}
You could then save a new ProductCatalog item, with Pictures, as shown in the following snippet:
Pictures pix = new Pictures();
pix.setFrontView("http://example.com/products/123_front.jpg");
pix.setRearView("http://example.com/products/123_rear.jpg");
pix.setSideView("http://example.com/products/123_left_side.jpg");

ProductCatalogItem item = new ProductCatalogItem();
item.setId(123);
item.setPictures(pix);
mapper.save(item);
The resulting ProductCatalog item would look like this (in JSON format):
{
"Id" : 123,
"Pictures" : {
"SideView" : "http://example.com/products/123_left_side.jpg",
"RearView" : "http://example.com/products/123_rear.jpg",
"FrontView" : "http://example.com/products/123_front.jpg"
}
}
DynamoDBHashKey
Maps a class property to the partition key of the table. The property must be one of the scalar string,
number or binary types; it cannot be a collection type.
Assume that you have a table, ProductCatalog, that has Id as the primary key. The following Java code
snippet defines a CatalogItem class and maps its Id property to the primary key of the ProductCatalog
table using the @DynamoDBHashKey tag.
@DynamoDBTable(tableName="ProductCatalog")
public class CatalogItem {
private Integer Id;
@DynamoDBHashKey(attributeName="Id")
public Integer getId() {
return Id;
}
public void setId(Integer Id) {
this.Id = Id;
}
// Additional properties go here.
}
DynamoDBIgnore
Indicates to the DynamoDBMapper instance that the associated property should be ignored. When saving
data to the table, the DynamoDBMapper does not save this property to the table.
Applied to the getter method or the class field for a non-modeled property. If the annotation is applied
directly to the class field, the corresponding getter and setter must be declared in the same class.
DynamoDBIndexHashKey
Maps a class property to the partition key of a global secondary index. The property must be one of the
scalar string, number or binary types; it cannot be a collection type.
Use this annotation if you need to Query a global secondary index. You must specify the index name
(globalSecondaryIndexName). If the name of the class property is different from the index partition
key, you must also specify the name of that index attribute (attributeName).
DynamoDBIndexRangeKey
Maps a class property to the sort key of a global secondary index or a local secondary index. The
property must be one of the scalar string, number or binary types; it cannot be a collection type.
Use this annotation if you need to Query a local secondary index or a global secondary index
and want to refine your results using the index sort key. You must specify the index name (either
globalSecondaryIndexName or localSecondaryIndexName). If the name of the class
property is different from the index sort key, you must also specify the name of that index attribute
(attributeName).
DynamoDBRangeKey
Maps a class property to the sort key of the table. The property must be one of the scalar string, number
or binary types; it cannot be a collection type.
If the primary key is composite (partition key and sort key), you can use this tag to map your class
field to the sort key. For example, assume that you have a Reply table that stores replies for forum
threads. Each thread can have many replies. So the primary key of this table is both the ThreadId and
ReplyDateTime. The ThreadId is the partition key and ReplyDateTime is the sort key. The following Java
code snippet defines a Reply class and maps it to the Reply table. It uses both the @DynamoDBHashKey
and @DynamoDBRangeKey tags to identify class properties that map to the primary key.
@DynamoDBTable(tableName="Reply")
public class Reply {
    private Integer id;
    private String replyDateTime;

    @DynamoDBHashKey(attributeName="Id")
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }

    @DynamoDBRangeKey(attributeName="ReplyDateTime")
    public String getReplyDateTime() { return replyDateTime; }
    public void setReplyDateTime(String replyDateTime) { this.replyDateTime = replyDateTime; }

    // Additional properties go here.
}
DynamoDBTable
Identifies the target table in DynamoDB. For example, the following Java code snippet defines a class
Developer and maps it to the People table in DynamoDB.
@DynamoDBTable(tableName="People")
public class Developer { ...}
The @DynamoDBTable annotation can be inherited. Any new class that inherits from the Developer
class also maps to the People table. For example, assume that you create a Lead class that inherits from
the Developer class. Because you mapped the Developer class to the People table, the Lead class
objects are also stored in the same table.
The @DynamoDBTable annotation can also be overridden. Any new class that inherits from the Developer class by
default maps to the same People table. However, you can override this default mapping. For example, if
you create a class that inherits from the Developer class, you can explicitly map it to another table by
adding the @DynamoDBTable annotation as shown in the following Java code snippet.
@DynamoDBTable(tableName="Managers")
public class Manager extends Developer { ...}
DynamoDBTypeConverted
Annotation to mark a property as using a custom type-converter. May be annotated on a user-defined
annotation to pass additional properties to the DynamoDBTypeConverter.
The DynamoDBTypeConverter interface lets you map your own arbitrary data types to a data type that
is natively supported by DynamoDB. For more information, see Mapping Arbitrary Data (p. 269).
DynamoDBTyped
Annotation to override the standard attribute type binding. Standard types do not require the
annotation if applying the default attribute binding for that type.
DynamoDBVersionAttribute
Identifies a class property for storing an optimistic locking version number. DynamoDBMapper
assigns a version number to this property when it saves a new item, and increments it each time you
update the item. Only number scalar types are supported. For more information about data type,
see Data Types (p. 12). For more information about versioning, see Optimistic Locking With Version
Number (p. 267).
The DynamoDBMapper Class
The DynamoDBMapper class is the entry point to DynamoDB. It provides access to a DynamoDB
endpoint and lets you access your data in various tables. For the corresponding Javadoc
documentation, see DynamoDBMapper in the AWS SDK for Java API Reference.
Topics
• save (p. 234)
• load (p. 235)
• delete (p. 235)
• query (p. 235)
• queryPage (p. 236)
• scan (p. 237)
• scanPage (p. 237)
• parallelScan (p. 237)
• batchSave (p. 238)
• batchLoad (p. 238)
• batchDelete (p. 238)
• batchWrite (p. 239)
• transactionWrite (p. 239)
• transactionLoad (p. 240)
• count (p. 240)
• generateCreateTableRequest (p. 240)
• createS3Link (p. 240)
• getS3ClientCache (p. 241)
save
Saves the specified object to the table. The object that you wish to save is the only required parameter
for this method. You can provide optional configuration parameters using the DynamoDBMapperConfig
object.
If an item that has the same primary key does not exist, this method creates a new item in the table.
If an item that has the same primary key exists, it updates the existing item. If the partition key and
sort key are of type String, and annotated with @DynamoDBAutoGeneratedKey, then they are
given a random universally unique identifier (UUID) if left uninitialized. Version fields annotated with
@DynamoDBVersionAttribute will be incremented by one. Additionally, if a version field is updated or
a key generated, the object passed in is updated as a result of the operation.
By default, only attributes corresponding to mapped class properties are updated; any additional existing
attributes on an item are unaffected. However, if you specify SaveBehavior.CLOBBER, you can force
the item to be completely overwritten.
If you have versioning enabled, then the client-side and server-side item versions must match.
However, the version does not need to match if the SaveBehavior.CLOBBER option is used. For more
information about versioning, see Optimistic Locking With Version Number (p. 267).
load
Retrieves an item from a table. You must provide the primary key of the item that you wish to retrieve.
You can provide optional configuration parameters using the DynamoDBMapperConfig object. For
example, you can optionally request strongly consistent reads to ensure that this method retrieves only
the latest item values as shown in the following Java statement.
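A representative statement, carrying over the CatalogItem class and the Id value 102 from the earlier example:
CatalogItem item = mapper.load(CatalogItem.class, 102,
    DynamoDBMapperConfig.builder()
        .withConsistentReads(DynamoDBMapperConfig.ConsistentReads.CONSISTENT)
        .build());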
By default, DynamoDB returns the item that has values that are eventually consistent. For information
about the eventual consistency model of DynamoDB, see Read Consistency (p. 15).
delete
Deletes an item from the table. You must pass in an object instance of the mapped class.
If you have versioning enabled, then the client-side and server-side item versions must match.
However, the version does not need to match if the SaveBehavior.CLOBBER option is used. For more
information about versioning, see Optimistic Locking With Version Number (p. 267).
query
Queries a table or a secondary index. You can query a table or an index only if it has a composite primary
key (partition key and sort key). This method requires you to provide a partition key value and a query
filter that is applied on the sort key. A filter expression includes a condition and a value.
Assume that you have a table, Reply, that stores forum thread replies. Each thread subject can have 0 or
more replies. The primary key of the Reply table consists of the Id and ReplyDateTime fields, where Id is
the partition key and ReplyDateTime is the sort key of the primary key.
Now, assume that you have created a mapping between a Reply class and the corresponding Reply table
in DynamoDB. The following Java code snippet uses DynamoDBMapper to find all replies in the past two
weeks for a specific thread subject.
Example
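A sketch of such a query, assuming a mapped Reply class (with setters) like the ones shown later in this section, and a composite Id of the form "ForumName#Subject":
String forumName = "Amazon DynamoDB";
String threadSubject = "DynamoDB Thread 1";

long twoWeeksAgoMilli = (new Date()).getTime() - (14L * 24L * 60L * 60L * 1000L);
SimpleDateFormat dateFormatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
dateFormatter.setTimeZone(TimeZone.getTimeZone("UTC"));
String twoWeeksAgoStr = dateFormatter.format(new Date(twoWeeksAgoMilli));

Reply replyKey = new Reply();
replyKey.setId(forumName + "#" + threadSubject); // partition key value

Condition rangeKeyCondition = new Condition()
    .withComparisonOperator(ComparisonOperator.GT)
    .withAttributeValueList(new AttributeValue().withS(twoWeeksAgoStr));

DynamoDBQueryExpression<Reply> queryExpression = new DynamoDBQueryExpression<Reply>()
    .withHashKeyValues(replyKey)
    .withRangeKeyCondition("ReplyDateTime", rangeKeyCondition);

List<Reply> latestReplies = mapper.query(Reply.class, queryExpression);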
By default, the query method returns a "lazy-loaded" collection. It initially returns only one page of
results, and then makes a service call for the next page if needed. To obtain all the matching items, you
only need to iterate over the latestReplies collection.
To query an index, you must first model the index as a mapper class. Suppose that the Reply table has a
global secondary index named PostedBy-Message-Index. The partition key for this index is PostedBy, and
the sort key is Message. The class definition for an item in the index would look like this:
@DynamoDBTable(tableName="Reply")
public class PostedByMessage {
private String postedBy;
private String message;
@DynamoDBIndexHashKey(globalSecondaryIndexName = "PostedBy-Message-Index",
attributeName = "PostedBy")
public String getPostedBy() { return postedBy; }
public void setPostedBy(String postedBy) { this.postedBy = postedBy; }
@DynamoDBIndexRangeKey(globalSecondaryIndexName = "PostedBy-Message-Index",
attributeName = "Message")
public String getMessage() { return message; }
public void setMessage(String message) { this.message = message; }
The @DynamoDBTable annotation indicates that this index is associated with the Reply table. The
@DynamoDBIndexHashKey annotation denotes the partition key (PostedBy) of the index, and
@DynamoDBIndexRangeKey denotes the sort key (Message) of the index.
Now you can use DynamoDBMapper to query the index, retrieving a subset of messages that were
posted by a particular user. You must specify withIndexName so that DynamoDB knows which index
to query. In the following code snippet, we are querying a global secondary index. Because global
secondary indexes support eventually consistent reads, but not strongly consistent reads, we must
specify withConsistentRead(false).
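A sketch of such a query, reusing the PostedByMessage class above (the user name is illustrative):
Map<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":v1", new AttributeValue().withS("User A"));

DynamoDBQueryExpression<PostedByMessage> queryExpression =
    new DynamoDBQueryExpression<PostedByMessage>()
        .withIndexName("PostedBy-Message-Index")
        .withConsistentRead(false) // GSIs support only eventually consistent reads
        .withKeyConditionExpression("PostedBy = :v1")
        .withExpressionAttributeValues(eav);

List<PostedByMessage> results = mapper.query(PostedByMessage.class, queryExpression);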
queryPage
Queries a table or secondary index and returns a single page of matching results. As with the query
method, you must specify a partition key value and a query filter that is applied on the sort key attribute.
However, queryPage returns only the first "page" of data, that is, the amount of data that fits
within 1 MB.
scan
Scans an entire table or a secondary index. You can optionally specify a FilterExpression to filter the
result set.
Assume that you have a table, Reply, that stores forum thread replies. Each thread subject can have 0 or
more replies. The primary key of the Reply table consists of the Id and ReplyDateTime fields, where Id is
the partition key and ReplyDateTime is the sort key of the primary key.
If you have mapped a Java class to the Reply table, you can use the DynamoDBMapper to scan the table.
For example, the following Java code snippet scans the entire Reply table, returning only the replies for
a particular year.
Example
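A sketch of such a scan, assuming the mapped Reply class and ISO-8601 ReplyDateTime strings (the year value is illustrative):
Map<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":val1", new AttributeValue().withS("2015"));

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
    .withFilterExpression("begins_with(ReplyDateTime, :val1)")
    .withExpressionAttributeValues(eav);

List<Reply> replies = mapper.scan(Reply.class, scanExpression);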
By default, the scan method returns a "lazy-loaded" collection. It initially returns only one page of
results, and then makes a service call for the next page if needed. To obtain all the matching items, you
only need to iterate over the replies collection.
To scan an index, you must first model the index as a mapper class. Suppose that the Reply table has a
global secondary index named PostedBy-Message-Index. The partition key for this index is PostedBy, and
the sort key is Message. A mapper class for this index is shown in the query (p. 235) section, where we
use the @DynamoDBIndexHashKey and @DynamoDBIndexRangeKey annotations to specify the index
partition key and sort key.
The following code snippet scans PostedBy-Message-Index. It does not use a scan filter, so all of the items
in the index are returned to you.
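A sketch of that index scan, reusing the PostedByMessage mapper class:
DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
    .withIndexName("PostedBy-Message-Index")
    .withConsistentRead(false); // GSIs support only eventually consistent reads

List<PostedByMessage> results = mapper.scan(PostedByMessage.class, scanExpression);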
scanPage
Scans a table or secondary index and returns a single page of matching results. As with the scan
method, you can optionally specify a FilterExpression to filter the result set. However, scanPage
returns only the first "page" of data, that is, the amount of data that fits within 1 MB.
parallelScan
Performs a parallel scan of an entire table or secondary index. You specify a number of logical segments
for the table, along with a scan expression to filter the results. The parallelScan divides the scan
task among multiple workers, one for each logical segment; the workers process the data in parallel and
return the results.
The following Java code snippet performs a parallel scan on the Product table.
int numberOfThreads = 4;
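Continuing from that declaration, a sketch of the rest of the call; the Product mapper class and the Price filter are assumptions for this example.
Map<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":n", new AttributeValue().withN("100")); // illustrative price threshold

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
    .withFilterExpression("Price >= :n")
    .withExpressionAttributeValues(eav);

List<Product> scanResult = mapper.parallelScan(Product.class, scanExpression, numberOfThreads);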
For a Java code sample illustrating usage of parallelScan, see Example: Query and Scan (p. 251).
batchSave
Saves objects to one or more tables using one or more calls to the AmazonDynamoDB.batchWriteItem
method. This method does not provide transaction guarantees.
The following Java code snippet saves two items (books) to the ProductCatalog table.
mapper.batchSave(Arrays.asList(book1, book2));
batchLoad
Retrieves multiple items from one or more tables using their primary keys.
The following Java code snippet retrieves two items from two different tables.
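A sketch of such a call, assuming Forum and Thread mapper classes like the ones shown later in this section (with public fields, as in the transaction example):
List<Object> itemsToGet = new ArrayList<Object>();

Forum forum = new Forum();
forum.name = "Amazon DynamoDB"; // partition key of the Forum table
itemsToGet.add(forum);

Thread thread = new Thread();
thread.forumName = "Amazon DynamoDB"; // partition key of the Thread table
thread.subject = "DynamoDB Thread 1"; // sort key of the Thread table
itemsToGet.add(thread);

// The result map is keyed by table name.
Map<String, List<Object>> items = mapper.batchLoad(itemsToGet);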
batchDelete
Deletes objects from one or more tables using one or more calls to the
AmazonDynamoDB.batchWriteItem method. This method does not provide transaction guarantees.
The following Java code snippet deletes two items (books) from the ProductCatalog table.
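A sketch of such a call, assuming the Book mapper class from the batch write example and the Id values 901 and 902:
Book book1 = mapper.load(Book.class, 901); // load the items to delete...
Book book2 = mapper.load(Book.class, 902);
mapper.batchDelete(Arrays.asList(book1, book2)); // ...then delete them in a batch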
batchWrite
Saves objects to and deletes objects from one or more tables using one or more calls to the
AmazonDynamoDB.batchWriteItem method. This method does not provide transaction guarantees or
support versioning (conditional puts or deletes).
The following Java code snippet writes a new item to the Forum table, writes a new item to the Thread
table, and deletes an item from the ProductCatalog table.
mapper.batchWrite(objectsToWrite, objectsToDelete);
transactionWrite
Saves objects to and deletes objects from one or more tables using one call to the
AmazonDynamoDB.transactWriteItems method.
For more information about DynamoDB transactions and the provided atomicity, consistency, isolation,
and durability (ACID) guarantees see Amazon DynamoDB Transactions.
Note
This method does not support the DynamoDBMapperConfig.SaveBehavior enumeration.
The following Java code snippet writes a new item to each of the Forum and Thread tables,
transactionally.
transactionWriteRequest.addPut(s3Forum);
transactionWriteRequest.addPut(s3ForumThread);
mapper.transactionWrite(transactionWriteRequest);
transactionLoad
Loads objects from one or more tables using one call to the AmazonDynamoDB.transactGetItems
method.
For more information about DynamoDB transactions and the provided atomicity, consistency, isolation,
and durability (ACID) guarantees see Amazon DynamoDB Transactions.
The following Java code snippet loads one item from each of the Forum and Thread tables,
transactionally.
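A sketch of such a call, using the Forum and Thread mapper classes from this example:
TransactionLoadRequest transactionLoadRequest = new TransactionLoadRequest();

Forum forum = new Forum();
forum.name = "DynamoDB Forum";
transactionLoadRequest.addLoad(forum);

Thread thread = new Thread();
thread.forumName = "DynamoDB Forum";
thread.subject = "Sample Subject 1";
transactionLoadRequest.addLoad(thread);

// Objects are returned in the order in which they were added to the request.
List<Object> loadedObjects = mapper.transactionLoad(transactionLoadRequest);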
count
Evaluates the specified scan expression and returns the count of matching items. No item data is
returned.
generateCreateTableRequest
Parses a POJO class that represents a DynamoDB table, and returns a CreateTableRequest for that
table.
createS3Link
Creates a link to an object in Amazon S3. You must specify a bucket name and a key name, which
uniquely identifies the object in the bucket.
To use createS3Link, your mapper class must define getter and setter methods. The following code
snippet illustrates this by adding a new attribute and getter/setter methods to the CatalogItem class:
@DynamoDBTable(tableName="ProductCatalog")
public class CatalogItem {
...
@DynamoDBAttribute(attributeName = "ProductImage")
public S3Link getProductImage() {
return productImage;
}
...
}
The following Java code defines a new item to be written to the Product table. The item includes a link
to a product image; the image data is uploaded to Amazon S3.
item.id = 150;
item.title = "Book 150 Title";
item.getProductImage().uploadFrom(new File("/file/path/book_150_cover.jpg"));
mapper.save(item);
The S3Link class provides many other methods for manipulating objects in Amazon S3. For more
information, see the Javadocs for S3Link.
getS3ClientCache
Returns the underlying S3ClientCache for accessing Amazon S3. An S3ClientCache is a smart Map
for AmazonS3Client objects. If you have multiple clients, then an S3ClientCache can help you keep the
clients organized by region, and can create new Amazon S3 clients on demand.
Optional Configuration Settings for DynamoDBMapper
When you create an instance of DynamoDBMapper, it has certain default behaviors. You can override
these defaults by providing a DynamoDBMapperConfig object, as in the following example.
DynamoDBMapperConfig mapperConfig = DynamoDBMapperConfig.builder()
    .withConsistentReads(DynamoDBMapperConfig.ConsistentReads.CONSISTENT)
    .withPaginationLoadingStrategy(DynamoDBMapperConfig.PaginationLoadingStrategy.EAGER_LOADING)
    .build();
DynamoDBMapper mapper = new DynamoDBMapper(client, mapperConfig);
For more information, see DynamoDBMapperConfig in the AWS SDK for Java API Reference.
If you do not specify a read consistency setting for your mapper instance, the default is EVENTUAL.
If you do not specify a pagination loading strategy for your mapper instance, the default is
LAZY_LOADING.
• A DynamoDBMapperConfig.SaveBehavior enumeration value - Specifies how the mapper instance
should deal with attributes during save operations:
• UPDATE—during a save operation, all modeled attributes are updated, and unmodeled attributes are
unaffected. Primitive number types (byte, int, long) are set to 0. Object types are set to null.
• CLOBBER—clears and replaces all attributes, including unmodeled ones, during a save operation. This
is done by deleting the item and re-creating it. Versioned field constraints are also disregarded.
If you do not specify the save behavior for your mapper instance, the default is UPDATE.
Note
DynamoDBMapper transactional operations do not support
DynamoDBMapperConfig.SaveBehavior enumeration.
• A DynamoDBMapperConfig.TableNameOverride object—Instructs the mapper instance to ignore
the table name specified by a class's DynamoDBTable annotation, and instead use a different table
name that you supply. This is useful when partitioning your data into multiple tables at run time.
You can override the default configuration object for DynamoDBMapper per operation, as needed.
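For example, a single save can be forced to clobber the stored item, as in this sketch:
mapper.save(item, DynamoDBMapperConfig.builder()
    .withSaveBehavior(DynamoDBMapperConfig.SaveBehavior.CLOBBER)
    .build());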
Example: CRUD Operations
The following Java code example declares a CatalogItem class and performs create, read, update, and
delete (CRUD) operations against the ProductCatalog table.
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.datamodeling;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
@DynamoDBTable(tableName = "ProductCatalog")
public static class CatalogItem {
private Integer id;
private String title;
private String ISBN;
private Set<String> bookAuthors;
// Partition key
@DynamoDBHashKey(attributeName = "Id")
public Integer getId() {
return id;
}
@DynamoDBAttribute(attributeName = "Title")
public String getTitle() {
return title;
}
@DynamoDBAttribute(attributeName = "ISBN")
public String getISBN() {
return ISBN;
}
@DynamoDBAttribute(attributeName = "Authors")
public Set<String> getBookAuthors() {
return bookAuthors;
}
@Override
public String toString() {
return "Book [ISBN=" + ISBN + ", bookAuthors=" + bookAuthors + ", id=" + id +
", title=" + title + "]";
}
}
Example: Batch Write Operations
The following Java code example declares Book, Reply, Thread, and Forum classes and uses the
DynamoDBMapper batch write operations (batchSave, batchDelete, and batchWrite) against the
corresponding tables.
For more information about the tables used in this example, see Creating Tables and Loading
Sample Data (p. 323). For step-by-step instructions to test the following sample, see Java Code
Examples (p. 328).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.datamodeling;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
testBatchSave(mapper);
testBatchDelete(mapper);
testBatchWrite(mapper);
System.out.println("Example complete!");
}
catch (Throwable t) {
System.err.println("Error running the DynamoDBMapperBatchWriteExample: " + t);
t.printStackTrace();
}
}
@DynamoDBTable(tableName = "ProductCatalog")
public static class Book {
private int id;
private String title;
private String ISBN;
private int price;
private int pageCount;
private String productCategory;
private boolean inPublication;
// Partition key
@DynamoDBHashKey(attributeName = "Id")
public int getId() {
return id;
}
@DynamoDBAttribute(attributeName = "Title")
public String getTitle() {
return title;
}
@DynamoDBAttribute(attributeName = "ISBN")
public String getISBN() {
return ISBN;
}
@DynamoDBAttribute(attributeName = "Price")
public int getPrice() {
return price;
}
@DynamoDBAttribute(attributeName = "PageCount")
public int getPageCount() {
return pageCount;
}
@DynamoDBAttribute(attributeName = "ProductCategory")
public String getProductCategory() {
    return productCategory;
}
@DynamoDBAttribute(attributeName = "InPublication")
public boolean getInPublication() {
return inPublication;
}
@Override
public String toString() {
return "Book [ISBN=" + ISBN + ", price=" + price + ", product category=" +
productCategory + ", id=" + id
+ ", title=" + title + "]";
}
}
@DynamoDBTable(tableName = "Reply")
public static class Reply {
private String id;
private String replyDateTime;
private String message;
private String postedBy;
// Partition key
@DynamoDBHashKey(attributeName = "Id")
public String getId() {
return id;
}
// Sort key
@DynamoDBRangeKey(attributeName = "ReplyDateTime")
public String getReplyDateTime() {
return replyDateTime;
}
@DynamoDBAttribute(attributeName = "Message")
public String getMessage() {
return message;
}
@DynamoDBAttribute(attributeName = "PostedBy")
public String getPostedBy() {
return postedBy;
}
}
@DynamoDBTable(tableName = "Thread")
public static class Thread {
private String forumName;
private String subject;
private String message;
private String lastPostedDateTime;
private String lastPostedBy;
private Set<String> tags;
private int answered;
private int views;
private int replies;
// Partition key
@DynamoDBHashKey(attributeName = "ForumName")
public String getForumName() {
return forumName;
}
// Sort key
@DynamoDBRangeKey(attributeName = "Subject")
public String getSubject() {
return subject;
}
@DynamoDBAttribute(attributeName = "Message")
public String getMessage() {
return message;
}
@DynamoDBAttribute(attributeName = "LastPostedDateTime")
public String getLastPostedDateTime() {
return lastPostedDateTime;
}
@DynamoDBAttribute(attributeName = "LastPostedBy")
public String getLastPostedBy() {
return lastPostedBy;
}
@DynamoDBAttribute(attributeName = "Tags")
public Set<String> getTags() {
    return tags;
}
@DynamoDBAttribute(attributeName = "Answered")
public int getAnswered() {
return answered;
}
@DynamoDBAttribute(attributeName = "Views")
public int getViews() {
return views;
}
@DynamoDBAttribute(attributeName = "Replies")
public int getReplies() {
return replies;
}
}
@DynamoDBTable(tableName = "Forum")
public static class Forum {
private String name;
private String category;
private int threads;
// Partition key
@DynamoDBHashKey(attributeName = "Name")
public String getName() {
return name;
}
@DynamoDBAttribute(attributeName = "Category")
public String getCategory() {
return category;
}
@DynamoDBAttribute(attributeName = "Threads")
public int getThreads() {
return threads;
}
}
Example: Query and Scan
The following Java code example declares Book, Bicycle, Reply, Thread, and Forum classes and maps
them to DynamoDB tables using the DynamoDBMapper class. The example then executes the following
query and scan operations using a DynamoDBMapper instance.
• Get a book by Id.
The ProductCatalog table has Id as its primary key. It does not have a sort key as part of its primary
key. Therefore, you cannot query the table. You can get an item using its Id value.
• Execute the following queries against the Reply table.
The Reply table's primary key is composed of Id and ReplyDateTime attributes. The ReplyDateTime is a
sort key. Therefore, you can query this table.
• Find replies to a forum thread posted in the last 15 days
• Find replies to a forum thread posted in a specific date range
• Scan ProductCatalog table to find books whose price is less than a specified value.
For performance reasons, you should use query instead of the scan operation. However, there are
times you might need to scan a table. Suppose that there was a data entry error and one of the book
prices was set to less than 0. This example scans the ProductCatalog table to find book items
(where ProductCategory is Book) whose price is less than 0.
• Perform a parallel scan of the ProductCatalog table to find bicycles of a specific type.
Note
This code sample assumes that you have already loaded data into DynamoDB for your account
by following the instructions in the Creating Tables and Loading Sample Data (p. 323) section.
For step-by-step instructions to run the following example, see Java Code Examples (p. 328).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.datamodeling;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TimeZone;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBQueryExpression;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBScanExpression;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
// Scan a table and find book items priced less than specified
// value.
FindBooksPricedLessThanSpecifiedValue(mapper, "20");
// Scan a table with multiple threads and find bicycle items with a
// specified bicycle type
int numberOfThreads = 16;
FindBicyclesOfSpecificTypeWithMultipleThreads(mapper, numberOfThreads, "Road");
System.out.println("Example complete!");
}
catch (Throwable t) {
System.err.println("Error running the DynamoDBMapperQueryScanExample: " + t);
t.printStackTrace();
}
}
System.out.println("FindRepliesPostedWithinTimePeriod: Find replies for thread Message = 'DynamoDB Thread 2' posted within a period.");
long startDateMilli = (new Date()).getTime() - (14L * 24L * 60L * 60L * 1000L); // Two weeks ago.
long endDateMilli = (new Date()).getTime() - (7L * 24L * 60L * 60L * 1000L); // One week ago.
SimpleDateFormat dateFormatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
dateFormatter.setTimeZone(TimeZone.getTimeZone("UTC"));
String startDate = dateFormatter.format(startDateMilli);
String endDate = dateFormatter.format(endDateMilli);
System.out.println("FindBicyclesOfSpecificTypeWithMultipleThreads: Scan ProductCatalog With Multiple Threads.");
Map<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":val1", new AttributeValue().withS("Bicycle"));
eav.put(":val2", new AttributeValue().withS(bicycleType));
@DynamoDBTable(tableName = "ProductCatalog")
public static class Book {
private int id;
private String title;
private String ISBN;
private int price;
private int pageCount;
private String productCategory;
private boolean inPublication;
@DynamoDBHashKey(attributeName = "Id")
public int getId() {
return id;
}
@DynamoDBAttribute(attributeName = "Title")
public String getTitle() {
return title;
}
@DynamoDBAttribute(attributeName = "ISBN")
public String getISBN() {
return ISBN;
}
@DynamoDBAttribute(attributeName = "Price")
public int getPrice() {
return price;
}
@DynamoDBAttribute(attributeName = "PageCount")
public int getPageCount() {
return pageCount;
}
@DynamoDBAttribute(attributeName = "ProductCategory")
public String getProductCategory() {
return productCategory;
}
@DynamoDBAttribute(attributeName = "InPublication")
public boolean getInPublication() {
return inPublication;
}
@Override
public String toString() {
return "Book [ISBN=" + ISBN + ", price=" + price + ", product category=" +
productCategory + ", id=" + id
+ ", title=" + title + "]";
}
}
@DynamoDBTable(tableName = "ProductCatalog")
public static class Bicycle {
private int id;
private String title;
private String description;
private String bicycleType;
private String brand;
private int price;
private List<String> color;
private String productCategory;
@DynamoDBHashKey(attributeName = "Id")
public int getId() {
return id;
}
@DynamoDBAttribute(attributeName = "Title")
public String getTitle() {
return title;
}
@DynamoDBAttribute(attributeName = "Description")
public String getDescription() {
return description;
}
@DynamoDBAttribute(attributeName = "BicycleType")
public String getBicycleType() {
return bicycleType;
}
@DynamoDBAttribute(attributeName = "Brand")
public String getBrand() {
return brand;
}
@DynamoDBAttribute(attributeName = "Price")
public int getPrice() {
    return price;
}
@DynamoDBAttribute(attributeName = "Color")
public List<String> getColor() {
return color;
}
@DynamoDBAttribute(attributeName = "ProductCategory")
public String getProductCategory() {
return productCategory;
}
@Override
public String toString() {
return "Bicycle [Type=" + bicycleType + ", color=" + color + ", price=" + price
+ ", product category="
+ productCategory + ", id=" + id + ", title=" + title + "]";
}
}
@DynamoDBTable(tableName = "Reply")
public static class Reply {
private String id;
private String replyDateTime;
private String message;
private String postedBy;
// Partition key
@DynamoDBHashKey(attributeName = "Id")
public String getId() {
return id;
}
// Range key
@DynamoDBRangeKey(attributeName = "ReplyDateTime")
public String getReplyDateTime() {
return replyDateTime;
}
@DynamoDBAttribute(attributeName = "Message")
public String getMessage() {
return message;
}
@DynamoDBAttribute(attributeName = "PostedBy")
public String getPostedBy() {
return postedBy;
}
}
@DynamoDBTable(tableName = "Thread")
public static class Thread {
private String forumName;
private String subject;
private String message;
private String lastPostedDateTime;
private String lastPostedBy;
private Set<String> tags;
private int answered;
private int views;
private int replies;
// Partition key
@DynamoDBHashKey(attributeName = "ForumName")
public String getForumName() {
return forumName;
}
// Range key
@DynamoDBRangeKey(attributeName = "Subject")
public String getSubject() {
return subject;
}
@DynamoDBAttribute(attributeName = "Message")
public String getMessage() {
return message;
}
@DynamoDBAttribute(attributeName = "LastPostedDateTime")
public String getLastPostedDateTime() {
return lastPostedDateTime;
}
@DynamoDBAttribute(attributeName = "LastPostedBy")
@DynamoDBAttribute(attributeName = "Tags")
public Set<String> getTags() {
return tags;
}
@DynamoDBAttribute(attributeName = "Answered")
public int getAnswered() {
return answered;
}
@DynamoDBAttribute(attributeName = "Views")
public int getViews() {
return views;
}
@DynamoDBAttribute(attributeName = "Replies")
public int getReplies() {
return replies;
}
}
@DynamoDBTable(tableName = "Forum")
public static class Forum {
private String name;
private String category;
private int threads;
@DynamoDBHashKey(attributeName = "Name")
public String getName() {
return name;
}
@DynamoDBAttribute(attributeName = "Category")
public String getCategory() {
return category;
}
@DynamoDBAttribute(attributeName = "Threads")
public int getThreads() {
return threads;
}
}
Example: Transaction Operations
The following Java code example uses these DynamoDBMapper methods:
• transactionWrite to add, update, and delete multiple items from one or more tables in one
transaction.
• transactionLoad to retrieve multiple items from one or more tables in one transaction.
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapperConfig;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBRangeKey;
import com.amazonaws.services.dynamodbv2.datamodeling.TransactionWriteRequest;
import com.amazonaws.services.dynamodbv2.datamodeling.TransactionLoadRequest;
import com.amazonaws.services.dynamodbv2.model.TransactionCanceledException;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTransactionWriteExpression;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTransactionLoadExpression;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMappingException;
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException;
import com.amazonaws.services.dynamodbv2.model.InternalServerErrorException;
testPutAndUpdateInTransactionWrite();
testPutWithConditionalUpdateInTransactionWrite();
testPutWithConditionCheckInTransactionWrite();
testMixedOperationsInTransactionWrite();
testTransactionLoadWithSave();
testTransactionLoadWithTransactionWrite();
System.out.println("Example complete");
}
catch (Throwable t) {
System.err.println("Error running the DynamoDBMapperTransactionWriteExample: "
+ t);
t.printStackTrace();
}
}
// Read the DynamoDB Forum item and Thread item at the same time, in a serializable manner
TransactionLoadRequest transactionLoadRequest = new TransactionLoadRequest();
transactionLoadRequest.addLoad(dynamodbForum);
DynamoDBTransactionLoadExpression loadExpressionForThread = new DynamoDBTransactionLoadExpression()
    .withProjectionExpression("Subject, Message");
transactionLoadRequest.addLoad(dynamodbForumThread, loadExpressionForThread);
// Loaded objects are guaranteed to be in the same order as the order in which they are
// added to the TransactionLoadRequest
List<Object> loadedObjects = executeTransactionLoad(transactionLoadRequest);
Forum loadedDynamoDBForum = (Forum) loadedObjects.get(0);
System.out.println("Forum: " + loadedDynamoDBForum.name);
System.out.println("Threads: " + loadedDynamoDBForum.threads);
Thread loadedDynamodbForumThread = (Thread) loadedObjects.get(1);
System.out.println("Subject: " + loadedDynamodbForumThread.subject);
System.out.println("Message: " + loadedDynamodbForumThread.message);
}
// Update Forum item for DynamoDB and add a thread to DynamoDB Forum, in
// an ACID manner using transactionWrite
dynamodbForum.threads = 1;
Thread dynamodbForumThread = new Thread();
dynamodbForumThread.forumName = "DynamoDB New Forum";
dynamodbForumThread.subject = "Sample Subject 2";
dynamodbForumThread.message = "Sample Question 2";
TransactionWriteRequest transactionWriteRequest = new TransactionWriteRequest();
transactionWriteRequest.addPut(dynamodbForumThread);
transactionWriteRequest.addUpdate(dynamodbForum);
executeTransactionWrite(transactionWriteRequest);
// Read the DynamoDB Forum item and the Thread item at the same time in a serializable manner.
TransactionLoadRequest transactionLoadRequest = new TransactionLoadRequest();
transactionLoadRequest.addLoad(dynamodbForum);
DynamoDBTransactionLoadExpression loadExpressionForThread = new DynamoDBTransactionLoadExpression()
.withProjectionExpression("Subject, Message");
transactionLoadRequest.addLoad(dynamodbForumThread, loadExpressionForThread);
// Loaded objects are guaranteed to be in the same order as the order in which
// they are added to the TransactionLoadRequest.
List<Object> loadedObjects = executeTransactionLoad(transactionLoadRequest);
Forum loadedDynamoDBForum = (Forum) loadedObjects.get(0);
System.out.println("Forum: " + loadedDynamoDBForum.name);
System.out.println("Threads: " + loadedDynamoDBForum.threads);
Thread loadedDynamodbForumThread = (Thread) loadedObjects.get(1);
System.out.println("Subject: " + loadedDynamodbForumThread.subject);
System.out.println("Message: " + loadedDynamodbForumThread.message);
}
s3Forum.threads = 0;
mapper.save(s3Forum);
// Update the Forum item for S3 and create a new Forum item for DynamoDB using transactionWrite.
s3Forum.category = "Amazon Web Services";
Forum dynamodbForum = new Forum();
dynamodbForum.name = "DynamoDB Forum";
dynamodbForum.category = "Amazon Web Services";
dynamodbForum.threads = 0;
TransactionWriteRequest transactionWriteRequest = new TransactionWriteRequest();
transactionWriteRequest.addUpdate(s3Forum);
transactionWriteRequest.addPut(dynamodbForum);
executeTransactionWrite(transactionWriteRequest);
}
.withConditionExpression("attribute_exists(Category)");
.withConditionExpression("attribute_exists(Subject)");
transactionWriteRequest.addUpdate(dynamodbForum);
executeTransactionWrite(transactionWriteRequest);
}
.withConditionExpression("attribute_exists(Subject)");
}
return loadedObjects;
}
private static void executeTransactionWrite(TransactionWriteRequest transactionWriteRequest) {
try {
mapper.transactionWrite(transactionWriteRequest);
} catch (DynamoDBMappingException ddbme) {
System.err.println("Client side error in Mapper, fix before retrying. Error: " + ddbme.getMessage());
} catch (ResourceNotFoundException rnfe) {
System.err.println("One of the tables was not found, verify table exists before retrying. Error: " + rnfe.getMessage());
} catch (InternalServerErrorException ise) {
System.err.println("Internal Server Error, generally safe to retry with back-off. Error: " + ise.getMessage());
} catch (TransactionCanceledException tce) {
System.err.println("Transaction Canceled, implies a client issue, fix before retrying. Error: " + tce.getMessage());
} catch (Exception ex) {
System.err.println("An exception occurred, investigate and configure retry strategy. Error: " + ex.getMessage());
}
}
@DynamoDBTable(tableName = "Thread")
public static class Thread {
private String forumName;
private String subject;
private String message;
private String lastPostedDateTime;
private String lastPostedBy;
private Set<String> tags;
private int answered;
private int views;
private int replies;
// Partition key
@DynamoDBHashKey(attributeName = "ForumName")
public String getForumName() {
return forumName;
}
// Sort key
@DynamoDBRangeKey(attributeName = "Subject")
public String getSubject() {
return subject;
}
@DynamoDBAttribute(attributeName = "Message")
public String getMessage() {
return message;
}
@DynamoDBAttribute(attributeName = "LastPostedDateTime")
public String getLastPostedDateTime() {
return lastPostedDateTime;
}
@DynamoDBAttribute(attributeName = "LastPostedBy")
public String getLastPostedBy() {
return lastPostedBy;
}
@DynamoDBAttribute(attributeName = "Tags")
public Set<String> getTags() {
return tags;
}
@DynamoDBAttribute(attributeName = "Answered")
public int getAnswered() {
return answered;
}
@DynamoDBAttribute(attributeName = "Views")
public int getViews() {
return views;
}
@DynamoDBAttribute(attributeName = "Replies")
public int getReplies() {
return replies;
}
@DynamoDBTable(tableName = "Forum")
public static class Forum {
private String name;
private String category;
private int threads;
// Partition key
@DynamoDBHashKey(attributeName = "Name")
public String getName() {
return name;
}
@DynamoDBAttribute(attributeName = "Category")
public String getCategory() {
return category;
}
@DynamoDBAttribute(attributeName = "Threads")
public int getThreads() {
return threads;
}
}
}
• DynamoDB global tables use a “last writer wins” reconciliation between concurrent updates. If you use global tables, the last write wins, so the locking strategy does not work as expected.
• DynamoDBMapper transactional operations do not support optimistic locking.
With optimistic locking, each item has an attribute that acts as a version number. If you retrieve an item
from a table, the application records the version number of that item. You can update the item, but only
if the version number on the server side has not changed. If there is a version mismatch, it means that
someone else has modified the item before you did; the update attempt fails, because you have a stale
version of the item. If this happens, you simply try again by retrieving the item and then attempting to
update it. Optimistic locking prevents you from accidentally overwriting changes that were made by
others; it also prevents others from accidentally overwriting your changes.
To support optimistic locking, the AWS SDK for Java provides the @DynamoDBVersionAttribute
annotation. In the mapping class for your table, you designate one property to store the version number,
and mark it using this annotation. When you save an object, the corresponding item in the DynamoDB
table will have an attribute that stores the version number. The DynamoDBMapper assigns a version
number when you first save the object, and it automatically increments the version number each time
you update the item. Your update or delete requests will succeed only if the client-side object version
matches the corresponding version number of the item in the DynamoDB table.
The DynamoDBMapper throws a ConditionalCheckFailedException in either of these cases:
• You use optimistic locking with @DynamoDBVersionAttribute and the version value on the server is different from the value on the client side.
• You specify your own conditional constraints while saving data by using DynamoDBMapper with DynamoDBSaveExpression and these constraints fail.
For example, the following Java code snippet defines a CatalogItem class that has several properties.
The Version property is tagged with the @DynamoDBVersionAttribute annotation.
Example
@DynamoDBTable(tableName="ProductCatalog")
public class CatalogItem {
@DynamoDBHashKey(attributeName="Id")
public Integer getId() { return id; }
public void setId(Integer Id) { this.id = Id; }
@DynamoDBAttribute(attributeName="Title")
public String getTitle() { return title; }
public void setTitle(String title) { this.title = title; }
@DynamoDBAttribute(attributeName="ISBN")
public String getISBN() { return ISBN; }
public void setISBN(String ISBN) { this.ISBN = ISBN;}
@DynamoDBAttribute(attributeName = "Authors")
public Set<String> getBookAuthors() { return bookAuthors; }
public void setBookAuthors(Set<String> bookAuthors) { this.bookAuthors = bookAuthors; }
@DynamoDBIgnore
public String getSomeProp() { return someProp;}
public void setSomeProp(String someProp) {this.someProp = someProp;}
@DynamoDBVersionAttribute
public Long getVersion() { return version; }
public void setVersion(Long version) { this.version = version;}
}
You can apply the @DynamoDBVersionAttribute annotation only to nullable types, such as those provided by the primitive wrapper classes Long and Integer.
• save — For a new item, the DynamoDBMapper assigns an initial version number of 1. If you retrieve an item, update one or more of its properties, and attempt to save the changes, the save operation succeeds only if the version number on the client side and the server side match. The DynamoDBMapper increments the version number automatically.
• delete — The delete method takes an object as a parameter, and the DynamoDBMapper performs a version check before deleting the item. The version check can be disabled if DynamoDBMapperConfig.SaveBehavior.CLOBBER is specified in the request.
The internal implementation of optimistic locking within DynamoDBMapper uses conditional update
and conditional delete support provided by DynamoDB.
You can also set locking behavior for a specific operation only. For example, the
following Java snippet uses the DynamoDBMapper to save a catalog item. It specifies
DynamoDBMapperConfig.SaveBehavior by adding the optional DynamoDBMapperConfig parameter
to the save method.
Example
For example, consider the following CatalogItem class that defines a property, Dimension, that is of
DimensionType. This property stores the item dimensions, as height, width, and thickness. Assume that
you decide to store these item dimensions as a string (such as 8.5x11x.05) in DynamoDB. The following
example provides converter code that converts the DimensionType object to a string and a string to the
DimensionType.
Note
This code sample assumes that you have already loaded data into DynamoDB for your account
by following the instructions in the Creating Tables and Loading Sample Data (p. 323) section.
For step-by-step instructions to run the following example, see Java Code Examples (p. 328).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
 *
 * This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 */
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTypeConverted;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTypeConverter;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
bookRetrieved.getDimensions().setHeight("9.0");
bookRetrieved.getDimensions().setLength("12.0");
bookRetrieved.getDimensions().setThickness("2.0");
mapper.save(bookRetrieved);
@DynamoDBTable(tableName = "ProductCatalog")
public static class Book {
private int id;
private String title;
private String ISBN;
private Set<String> bookAuthors;
private DimensionType dimensionType;
// Partition key
@DynamoDBHashKey(attributeName = "Id")
public int getId() {
return id;
}
@DynamoDBAttribute(attributeName = "Title")
public String getTitle() {
return title;
}
@DynamoDBAttribute(attributeName = "ISBN")
public String getISBN() {
return ISBN;
}
@DynamoDBAttribute(attributeName = "Authors")
public Set<String> getBookAuthors() {
return bookAuthors;
}
@DynamoDBTypeConverted(converter = DimensionTypeConverter.class)
@DynamoDBAttribute(attributeName = "Dimensions")
public DimensionType getDimensions() {
return dimensionType;
}
@DynamoDBAttribute(attributeName = "Dimensions")
public void setDimensions(DimensionType dimensionType) {
this.dimensionType = dimensionType;
}
@Override
public String toString() {
return "Book [ISBN=" + ISBN + ", bookAuthors=" + bookAuthors + ",
dimensionType= "
+ dimensionType.getHeight() + " X " + dimensionType.getLength() + " X " +
dimensionType.getThickness()
+ ", Id=" + id + ", Title=" + title + "]";
}
}
return length;
}
@Override
public String convert(DimensionType object) {
DimensionType itemDimensions = (DimensionType) object;
String dimension = null;
try {
if (itemDimensions != null) {
dimension = String.format("%s x %s x %s", itemDimensions.getLength(),
itemDimensions.getHeight(),
itemDimensions.getThickness());
}
}
catch (Exception e) {
e.printStackTrace();
}
return dimension;
}
@Override
public DimensionType unconvert(String s) {
DimensionType itemDimension = new DimensionType();
try {
if (s != null && s.length() != 0) {
// Parse a dimension string such as "8.5 x 11 x 1.0".
String[] data = s.split("x");
itemDimension.setLength(data[0].trim());
itemDimension.setHeight(data[1].trim());
itemDimension.setThickness(data[2].trim());
}
}
catch (Exception e) {
e.printStackTrace();
}
return itemDimension;
}
}
}
The AWS SDK for .NET provides document model classes that wrap some of the low-level DynamoDB
operations, to further simplify your coding. In the document model, the primary classes are Table
and Document. The Table class provides data operation methods such as PutItem, GetItem, and
DeleteItem. It also provides the Query and the Scan methods. The Document class represents a single
item in a table.
Working with Items in DynamoDB Using the AWS SDK for .NET
Document Model
Topics
• Putting an Item - Table.PutItem Method (p. 274)
• Specifying Optional Parameters (p. 276)
To perform data operations using the document model, you must first call the Table.LoadTable
method, which creates an instance of the Table class that represents a specific table. The following C#
snippet creates a Table object that represents the ProductCatalog table in DynamoDB.
Example
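A minimal sketch of such a snippet, assuming an AmazonDynamoDBClient instance named client:
Table table = Table.LoadTable(client, "ProductCatalog");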
Note
In general, you use the LoadTable method once at the beginning of your application because it
makes a DescribeTable call that adds to the round trip to DynamoDB.
You can then use the table object to perform various data operations. Each of these data operations has two types of overloads: one that takes the minimum required parameters, and another that also takes operation-specific, optional configuration information. For example, to retrieve an item, you must provide the table's primary key value, in which case you can use the following GetItem overload:
Example
// Get the item from a table that has a primary key composed of only a partition key.
Table.GetItem(Primitive partitionKey);
// Get the item from a table whose primary key is composed of both a partition key and a sort key.
Table.GetItem(Primitive partitionKey, Primitive sortKey);
You can also pass optional parameters to these methods. For example, the preceding GetItem returns the entire item, including all of its attributes. You can optionally specify a list of attributes to retrieve. In this case, you use the following GetItem overload that takes the operation-specific configuration object parameter:
Example
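A minimal sketch of that overload in use, retrieving only the Id and Title attributes (attribute names are illustrative):
GetItemOperationConfig config = new GetItemOperationConfig
{
    AttributesToGet = new List<string> { "Id", "Title" },
    ConsistentRead = true
};
Document doc = table.GetItem(101, config);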
You can use the configuration object to specify several optional parameters, such as requesting a specific list of attributes or specifying the page size (number of items per page). Each data operation method has its own configuration class. For example, the GetItemOperationConfig class enables you to provide options for the GetItem operation, and the PutItemOperationConfig class enables you to provide optional parameters for the PutItem operation.
The following sections discuss each of the data operations that are supported by the Table class.
1. Execute the Table.LoadTable method, providing the name of the table in which you want to put an item.
2. Create a Document object that has a list of attribute names and their values.
3. Execute Table.PutItem by providing the Document instance as a parameter.
The following C# code snippet demonstrates the preceding tasks. The example uploads an item to the
ProductCatalog table.
Example
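A minimal sketch of the item construction that the PutItem call below assumes (attribute values are illustrative):
Table table = Table.LoadTable(client, "ProductCatalog");
Document book = new Document();
book["Id"] = 101;
book["Title"] = "Book 101 Title";
book["ISBN"] = "11-11-11-11";
book["Authors"] = new List<string> { "Author 1", "Author 2" };
book["InStock"] = new DynamoDBBool(true);
book["QuantityOnHand"] = new DynamoDBNull();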
table.PutItem(book);
In the preceding example, the Document instance creates an item that has Number, String, String Set, Boolean, and Null attributes. (Null is used to indicate that the QuantityOnHand for this product is unknown.) For Boolean and Null, use the constructor methods DynamoDBBool and DynamoDBNull.
In DynamoDB, the List and Map data types can contain elements composed of other data types. In the document model API, a DynamoDB List maps to the DynamoDBList class, and a Map maps to the Document class.
You can modify the preceding example to add a List attribute to the item. To do this, use a
DynamoDBList constructor, as shown in the following code snippet:
Example
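A sketch of adding a List attribute with the DynamoDBList constructor (the item Ids are illustrative):
DynamoDBList relatedItems = new DynamoDBList();
relatedItems.Add(341);
relatedItems.Add(472);
relatedItems.Add(649);
book.Add("RelatedItems", relatedItems);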
table.PutItem(book);
To add a Map attribute to the book, you define another Document. The following code snippet illustrates
how to do this.
Example
pictures.Add("FrontView", "http://example.com/products/101_front.jpg" );
pictures.Add("RearView", "http://example.com/products/101_rear.jpg" );
book.Add("Pictures", pictures);
table.PutItem(book);
These examples are based on the item shown in Specifying Item Attributes (p. 383). The document
model lets you create complex nested attributes, such as the ProductReviews attribute shown in the
case study.
• The ConditionalExpression parameter, to make this a conditional put request. The example creates an expression specifying that the ISBN attribute must have the expected value in the item that you are replacing.
Example
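A minimal sketch of the conditional put (the ISBN value is illustrative):
Expression expr = new Expression();
expr.ExpressionStatement = "ISBN = :isbn";
expr.ExpressionAttributeValues[":isbn"] = "11-11-11-11";
PutItemOperationConfig config = new PutItemOperationConfig
{
    ConditionalExpression = expr
};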
table.PutItem(book, config);
Example
The GetItem operation returns all the attributes of the item and performs an eventually consistent read
(see Read Consistency (p. 15)) by default.
Example
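A minimal sketch of the retrieval that the element accesses below assume:
Table table = Table.LoadTable(client, "ProductCatalog");
Document doc = table.GetItem(101);  // Partition key value of the item.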
When you retrieve an item using the document model API, you can access individual elements within the Document object that is returned:
Example
int id = doc["Id"].AsInt();
string title = doc["Title"].AsString();
List<string> authors = doc["Authors"].AsListOfString();
bool inStock = doc["InStock"].AsBoolean();
DynamoDBNull quantityOnHand = doc["QuantityOnHand"].AsDynamoDBNull();
Attributes of type List or Map map to the document model API as follows: a List maps to the DynamoDBList class, and a Map maps to the Document class.
The following code snippet shows how to retrieve a List (RelatedItems) and a Map (Pictures) from the
Document object:
Example
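A sketch of those accessors, continuing from the doc object retrieved above:
DynamoDBList relatedItems = doc["RelatedItems"].AsDynamoDBList();
Document pictures = doc["Pictures"].AsDocument();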
Example
• The ConditionalExpression parameter to ensure that the book item being deleted has a specific
value for the ISBN attribute.
• The ReturnValues parameter to request that the Delete method return the item that it deleted.
Example
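A minimal sketch of the conditional delete (key and ISBN values are illustrative):
Expression expr = new Expression();
expr.ExpressionStatement = "ISBN = :isbn";
expr.ExpressionAttributeValues[":isbn"] = "11-11-11-11";
DeleteItemOperationConfig config = new DeleteItemOperationConfig
{
    ConditionalExpression = expr,
    ReturnValues = ReturnValues.AllOldAttributes
};
Document deletedItem = table.DeleteItem(101, config);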
You can use the UpdateItem operation to update existing attribute values, add new attributes to
the existing collection, or delete attributes from the existing collection. You provide these updates by
creating a Document instance that describes the updates you wish to perform.
• If the item does not exist, UpdateItem adds a new item using the primary key that is specified in the
input.
• If the item exists, UpdateItem applies the updates as follows:
• Replaces the existing attribute values with the values in the update.
• If an attribute that you provide in the input does not exist, it adds a new attribute to the item.
• If an input attribute value is null, it deletes the attribute, if it is present.
Note
This mid-level UpdateItem operation does not support the Add action (see UpdateItem)
supported by the underlying DynamoDB operation.
Note
The PutItem operation (Putting an Item - Table.PutItem Method (p. 274)) can also perform an update. If you call PutItem to upload an item and the primary key exists, the PutItem operation replaces the entire item. If the existing item has attributes that are not specified in the Document that is being put, PutItem deletes those attributes. However, UpdateItem updates only the specified input attributes; any other existing attributes of that item remain unchanged.
The following are the steps to update an item using the AWS SDK for .NET document model.
1. Execute the Table.LoadTable method by providing the name of the table in which you want to
perform the update operation.
2. Create a Document instance with all the updates that you wish to perform. You must provide the primary key either in the Document instance or explicitly as a parameter.
3. Call the Table.UpdateItem method, providing the Document instance as a parameter.
The following C# code snippet demonstrates the preceding tasks. The code sample updates an item
in the Book table. The UpdateItem operation updates the existing Authors attribute, deletes the
PageCount attribute, and adds a new attribute XYZ. The Document instance includes the primary key of
the book to update.
Example
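A minimal sketch of the update Document that the call below assumes (values are illustrative):
Table table = Table.LoadTable(client, "Book");
Document book = new Document();
// The primary key of the book to update.
book["Id"] = 111;
// Replace the existing Authors attribute.
book["Authors"] = new List<string> { "Author x", "Author y" };
// Add a new attribute.
book["XYZ"] = 12345;
// Setting an attribute to null deletes it.
book["PageCount"] = null;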
table.UpdateItem(book);
The following C# code snippet updates a book item price to 25. It specifies the two following optional
parameters:
• The ConditionalExpression parameter, which checks that the Price attribute currently has the value 20 that you expect to be present.
• The ReturnValues parameter to request the UpdateItem operation to return the item that is
updated.
Example
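A minimal sketch of the conditional update (the key value is illustrative):
Document book = new Document();
book["Id"] = 111;
book["Price"] = 25;
Expression expr = new Expression();
expr.ExpressionStatement = "Price = :price";
expr.ExpressionAttributeValues[":price"] = 20;
UpdateItemOperationConfig config = new UpdateItemOperationConfig
{
    ConditionalExpression = expr,
    ReturnValues = ReturnValues.AllNewAttributes
};
Document updatedBook = table.UpdateItem(book, config);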
1. Create a Table object by executing the Table.LoadTable method, providing the name of the table in which you want to perform the batch operation.
2. Execute the CreateBatchWrite method on the table instance that you created in the preceding step to create a DocumentBatchWrite object.
3. Use the DocumentBatchWrite object's methods to specify the documents that you wish to upload or delete.
4. Call the DocumentBatchWrite.Execute method to run the batch operation.
When using the document model API, you can specify any number of operations in a batch. However, note that DynamoDB limits the number of operations in a batch and the total size of the batch in a batch operation. For more information about the specific limits, see BatchWriteItem. If the document model API detects that your batch write request exceeded the number of allowed write requests, or that the HTTP payload size of the batch exceeded the limit allowed by BatchWriteItem, it breaks the batch into several smaller batches. Additionally, if a response to a batch write returns unprocessed items, the document model API automatically sends another batch request with those unprocessed items.
The following C# code snippet demonstrates the preceding steps. The code snippet uses a batch write operation to perform two writes: uploading a book item and deleting another book item.
batchWrite.AddDocumentToPut(book1);
// specify delete item using overload that takes PK.
batchWrite.AddKeyToDelete(12345);
batchWrite.Execute();
For a working example, see Example: Batch Operations Using AWS SDK for .NET Document Model
API (p. 285).
You can use the batch write operation to perform put and delete operations on multiple tables. The following are the steps to put or delete multiple items from multiple tables using the AWS SDK for .NET document model.
1. Create a DocumentBatchWrite instance for each table in which you want to put or delete multiple items, as described in the preceding procedure.
2. Create an instance of the MultiTableDocumentBatchWrite and add the individual
DocumentBatchWrite objects in it.
3. Execute the MultiTableDocumentBatchWrite.Execute method.
The following C# code snippet demonstrates the preceding steps. It uses a multi-table batch write to perform put and delete operations across the Forum and Thread tables.
superBatch.Execute();
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class MidlevelItemCRUD
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
private static string tableName = "ProductCatalog";
// The sample uses the following id PK value to add book item.
private static int sampleBookId = 555;
static void Main(string[] args)
{
try
{
Table productCatalog = Table.LoadTable(client, tableName);
CreateBookItem(productCatalog);
RetrieveBook(productCatalog);
// Couple of sample updates.
UpdateMultipleAttributes(productCatalog);
UpdateBookPriceConditionally(productCatalog);
// Delete.
DeleteBook(productCatalog);
Console.WriteLine("To continue, press Enter");
Console.ReadLine();
}
catch (AmazonDynamoDBException e) { Console.WriteLine(e.Message); }
catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
catch (Exception e) { Console.WriteLine(e.Message); }
}
productCatalog.PutItem(book);
}
// Optional parameters.
UpdateItemOperationConfig config = new UpdateItemOperationConfig
{
// Get updated item in response.
ReturnValues = ReturnValues.AllNewAttributes
};
Document updatedBook = productCatalog.UpdateItem(book, config);
Console.WriteLine("UpdateMultipleAttributes: Printing item after updates ...");
PrintDocument(updatedBook);
}
// Optional parameters.
UpdateItemOperationConfig config = new UpdateItemOperationConfig
{
ConditionalExpression = expr,
ReturnValues = ReturnValues.AllNewAttributes
};
Document updatedBook = productCatalog.UpdateItem(book, config);
Console.WriteLine("UpdateBookPriceConditionally: Printing item whose price was
conditionally updated");
PrintDocument(updatedBook);
}
Example: Batch Write Using AWS SDK for .NET Document Model
The following C# code example illustrates single table and multi-table batch write operations. The
example performs the following tasks:
• To illustrate a single table batch write, it adds two items to the ProductCatalog table.
• To illustrate a multi-table batch write, it adds an item to both the Forum and Thread tables and deletes an item from the Thread table.
If you followed the steps in Creating Tables and Loading Sample Data (p. 323), you already have
the ProductCatalog, Forum and Thread tables created. You can also create these sample tables
programmatically. For more information, see Creating Example Tables and Uploading Data Using the
AWS SDK for .NET (p. 902). For step-by-step instructions to test the following example, see .NET Code
Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class MidLevelBatchWriteItem
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
static void Main(string[] args)
{
try
{
SingleTableBatchWrite();
MultiTableBatchWrite();
}
catch (AmazonDynamoDBException e) { Console.WriteLine(e.Message); }
catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
catch (Exception e) { Console.WriteLine(e.Message); }
batchWrite.AddDocumentToPut(book1);
// Specify delete item using overload that takes PK.
batchWrite.AddKeyToDelete(12345);
Console.WriteLine("Performing batch write in SingleTableBatchWrite()");
batchWrite.Execute();
}
superBatch.Execute();
}
}
The Query method provides two overloads. The minimum required parameters to the Query method are
a partition key value and a sort key filter. You can use the following overload to provide these minimum
required parameters.
Example
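A sketch of that overload's signature (the parameter names are assumptions; the RangeFilter carries the sort key condition):
Query(Primitive partitionKey, RangeFilter filter);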
For example, the following C# code snippet queries for all forum replies that were posted in the last 15
days.
Example
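A minimal sketch of such a query against the sample Reply table (the partition key value is illustrative):
Table table = Table.LoadTable(client, "Reply");
DateTime twoWeeksAgoDate = DateTime.UtcNow - TimeSpan.FromDays(15);
QueryFilter filter = new QueryFilter("Id", QueryOperator.Equal, "DynamoDB#DynamoDB Thread 1");
filter.AddCondition("ReplyDateTime", QueryOperator.GreaterThan, twoWeeksAgoDate);
Search search = table.Query(filter);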
This creates a Search object. You can now call the Search.GetNextSet method iteratively to retrieve
one page of results at a time as shown in the following C# code snippet. The code prints the attribute
values for each item that the query returns.
Example
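A minimal sketch of the paging loop (PrintDocument is a hypothetical helper):
List<Document> documentList = new List<Document>();
do
{
    documentList = search.GetNextSet();
    foreach (Document document in documentList)
        PrintDocument(document);  // Hypothetical helper that prints attribute values.
} while (!search.IsDone);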
You can also specify optional parameters for Query, such as specifying a list of attributes to retrieve,
strongly consistent reads, page size, and the number of items returned per page. For a complete list of
parameters, see Query. To specify optional parameters, you must use the following overload in which
you provide the QueryOperationConfig object.
Example
Query(QueryOperationConfig config);
Assume that you want to execute the query in the preceding example (retrieve forum replies posted in
the last 15 days). However, assume that you want to provide optional query parameters to retrieve only
specific attributes and also request a strongly consistent read. The following C# code snippet constructs
the request using the QueryOperationConfig object.
Example
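A minimal sketch of the request, reusing the filter and table from the preceding snippets:
QueryOperationConfig config = new QueryOperationConfig()
{
    Filter = filter,
    // Optional parameters.
    Select = SelectValues.SpecificAttributes,
    AttributesToGet = new List<string> { "Subject", "ReplyDateTime", "PostedBy" },
    ConsistentRead = true
};
Search search = table.Query(config);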
The following C# code example uses the Table.Query method to execute the following sample queries:
This query is executed twice. In the first Table.Query call, the example provides only the required
query parameters. In the second Table.Query call, you provide optional query parameters to request
a strongly consistent read and a list of attributes to retrieve.
This query uses the Between query operator to find replies posted in between two dates.
• Get a product from the ProductCatalog table.
Because the ProductCatalog table has a primary key that is only a partition key, you can only get items;
you cannot query the table. The example retrieves a specific product item using the item Id.
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.Runtime;
using Amazon.SecurityToken;
namespace com.amazonaws.codesamples
{
class MidLevelQueryAndScan
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
static void Main(string[] args)
{
// Get Example.
Table productCatalogTable = Table.LoadTable(client, "ProductCatalog");
int productId = 101;
GetProduct(productCatalogTable, productId);
// Use the Query overload that takes the minimum required query parameters.
Search search = table.Query(filter);
do
{
documentList = search.GetNextSet();
Example
Scan(ScanFilter filter);
For example, assume that you maintain a table of forum threads, tracking information such as the thread subject, the related message, the forum Id to which the thread belongs, Tags, and other information. Assume that the subject is the primary key.
Example
This is a simplified version of forums and threads that you see on AWS forums (see Discussion Forums).
The following C# code snippet queries all threads in a specific forum (ForumId = 101) that are tagged "sortkey". Because ForumId is not a primary key, the example scans the table. The ScanFilter includes two conditions; the scan returns all the threads that satisfy both of the conditions.
Example
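A minimal sketch of the scan described above:
Table threadTable = Table.LoadTable(client, "Thread");
ScanFilter scanFilter = new ScanFilter();
scanFilter.AddCondition("ForumId", ScanOperator.Equal, 101);
scanFilter.AddCondition("Tags", ScanOperator.Contains, "sortkey");
Search search = threadTable.Scan(scanFilter);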
You can also specify optional parameters to Scan, such as a specific list of attributes to retrieve or
whether to perform a strongly consistent read. To specify optional parameters, you must create a
ScanOperationConfig object that includes both the required and optional parameters and use the
following overload.
Example
Scan(ScanOperationConfig config);
The following C# code snippet executes the same preceding query (find forum threads in which the
ForumId is 101 and the Tag attribute contains the "sortkey" keyword). However, this time assume that
you want to add an optional parameter to retrieve only a specific attribute list. In this case, you must
create a ScanOperationConfig object by providing all the parameters, required and optional as shown
in the following code example.
Example
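A minimal sketch, reusing the preceding scanFilter and retrieving only the Subject attribute:
ScanOperationConfig config = new ScanOperationConfig()
{
    Filter = scanFilter,
    Select = SelectValues.SpecificAttributes,
    AttributesToGet = new List<string> { "Subject" }
};
Search search = threadTable.Scan(config);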
The Table.Scan method provides two overloads:
• Table.Scan that takes a ScanFilter object as a parameter. You can use this overload when you are passing in only the required parameters.
• Table.Scan that takes a ScanOperationConfig object as a parameter. You must use this overload if you want to pass any optional parameters to the Scan method.
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DocumentModel;
namespace com.amazonaws.codesamples
{
class MidLevelScanOnly
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
}
}
The AWS SDK for .NET provides an object persistence model that enables you to map your client-side classes to the DynamoDB tables. Each object instance then maps to an item in the corresponding table. To save your client-side objects to the tables, the object persistence model provides the DynamoDBContext class, an entry point to DynamoDB. This class provides a connection to DynamoDB and enables you to access tables, perform various CRUD operations, and execute queries. The object persistence model provides a set of attributes to map client-side classes to tables, and properties/fields to table attributes.
Note
The object persistence model does not provide an API to create, update, or delete tables. It
provides only data operations. You can use only the AWS SDK for .NET low-level API to create,
update, and delete tables. For more information, see Working with Tables: .NET (p. 365).
To show you how the object persistence model works, let's walk through an example. We'll start with the
ProductCatalog table. It has Id as the primary key.
ProductCatalog(Id, ...)
Suppose you have a Book class with Title, ISBN, and Authors properties. You can map the Book class to
the ProductCatalog table by adding the attributes defined by the object persistence model, as shown in
the following C# code snippet.
Example
[DynamoDBTable("ProductCatalog")]
public class Book
{
[DynamoDBHashKey]
public int Id { get; set; }
[DynamoDBProperty("Authors")]
[DynamoDBIgnore]
public string CoverPage { get; set; }
}
In the preceding example, the DynamoDBTable attribute maps the Book class to the ProductCatalog
table.
The object persistence model supports both the explicit and default mapping between class properties
and table attributes.
• Explicit mapping—To map a property to a primary key, you must use the DynamoDBHashKey
and DynamoDBRangeKey object persistence model attributes. Additionally, for the non-primary
key attributes, if a property name in your class and the corresponding table attribute to which
you want to map it are not the same, then you must define the mapping by explicitly adding the
DynamoDBProperty attribute.
In the preceding example, the Id property maps to the primary key with the same name, and the BookAuthors property maps to the Authors attribute in the ProductCatalog table.
• Default mapping—By default, the object persistence model maps the class properties to the
attributes with the same name in the table.
In the preceding example, the properties Title and ISBN map to the attributes with the same name
in the ProductCatalog table.
You don't have to map every class property. You exclude a property by adding the DynamoDBIgnore attribute to it. When you save a Book instance to the table, the DynamoDBContext does not include the CoverPage property. It also does not return this property when you retrieve the book instance.
You can map properties of .NET primitive types such as int and string. You can also map any arbitrary
data types as long as you provide an appropriate converter to map the arbitrary data to one of the
DynamoDB types. To learn about mapping arbitrary types, see Mapping Arbitrary Data with DynamoDB
Using the AWS SDK for .NET Object Persistence Model (p. 305).
The object persistence model supports optimistic locking. During an update operation this ensures you
have the latest copy of the item you are about to update. For more information, see Optimistic Locking
Using Version Number with DynamoDB Using the AWS SDK for .NET Object Persistence Model (p. 304).
DynamoDB Attributes
This section describes the attributes the object persistence model offers so you can map your classes and
properties to DynamoDB tables and attributes.
Note
In the following attributes, only DynamoDBTable and DynamoDBHashKey are required.
DynamoDBGlobalSecondaryIndexHashKey
Maps a class property to the partition key of a global secondary index. Use this attribute if you need to
Query a global secondary index.
DynamoDBGlobalSecondaryIndexRangeKey
Maps a class property to the sort key of a global secondary index. Use this attribute if you need to Query
a global secondary index and want to refine your results using the index sort key.
DynamoDBHashKey
Maps a class property to the partition key of the table's primary key. The primary key attributes cannot
be a collection type.
The following C# code example maps the Book class to the ProductCatalog table, and the Id property to the partition key of the table's primary key.
[DynamoDBTable("ProductCatalog")]
public class Book {
[DynamoDBHashKey]
public int Id { get; set; }
}
DynamoDBIgnore
Indicates that the associated property should be ignored. If you don't want to save a class property, you can add this attribute to instruct DynamoDBContext not to include that property when saving objects to the table.
DynamoDBLocalSecondaryIndexRangeKey
Maps a class property to the sort key of a local secondary index. Use this attribute if you need to Query a
local secondary index and want to refine your results using the index sort key.
DynamoDBProperty
Maps a class property to a table attribute. If the class property maps to a table attribute of the same name, then you don't need to specify this attribute. However, if the names are not the same, you can use this tag to provide the mapping. In the following C# statement, the DynamoDBProperty maps the BookAuthors property to the Authors attribute in the table.
[DynamoDBProperty("Authors")]
public List<string> BookAuthors { get; set; }
DynamoDBContext uses this mapping information to create the Authors attribute when saving object
data to the corresponding table.
DynamoDBRenamable
Specifies an alternative name for a class property. This is useful if you are writing a custom converter
for mapping arbitrary data to a DynamoDB table where the name of a class property is different from a
table attribute.
DynamoDBRangeKey
Maps a class property to the sort key of the table's primary key. If the table has a composite
primary key (partition key and sort key), then you must specify both the DynamoDBHashKey and
DynamoDBRangeKey attributes in your class mapping.
For example, the sample table Reply has a primary key consisting of the Id partition key and the ReplyDateTime sort key. The following C# code example maps the Reply class to the Reply table. The class definition also indicates that two of its properties map to the primary key.
For more information about sample tables, see Creating Tables and Loading Sample Data (p. 323).
[DynamoDBTable("Reply")]
DynamoDBTable
Identifies the target table in DynamoDB to which the class maps. For example, the following C# code
example maps the Developer class to the People table in DynamoDB.
[DynamoDBTable("People")]
public class Developer { ...}
• The DynamoDBTable attribute can be inherited. In the preceding example, if you add a new class,
Lead, that inherits from the Developer class, it also maps to the People table. Both the Developer
and Lead objects are stored in the People table.
• The DynamoDBTable attribute can also be overridden. In the following C# code example, the Manager
class inherits from the Developer class, however the explicit addition of the DynamoDBTable
attribute maps the class to another table (Managers).
[DynamoDBTable("Managers")]
public class Manager : Developer { ...}
You can add the optional parameter, LowerCamelCaseProperties, to request that DynamoDBContext lowercase the first letter of each property name when storing the objects to a table, as shown in the following C# snippet.
[DynamoDBTable("People", LowerCamelCaseProperties=true)]
public class Developer {
string DeveloperName;
...}
When saving instances of the Developer class, DynamoDBContext saves the DeveloperName property as developerName.
DynamoDBVersion
Identifies a class property for storing the item version number. For more information about versioning, see Optimistic Locking Using Version Number with DynamoDB Using the AWS SDK for .NET Object Persistence Model (p. 304).
DynamoDBContext Class
The DynamoDBContext class is the entry point to the DynamoDB database. It provides a connection to
DynamoDB and enables you to access your data in various tables, perform various CRUD operations, and
execute queries. The DynamoDBContext class provides the following methods:
CreateMultiTableBatchGet
Creates a MultiTableBatchGet object, composed of multiple individual BatchGet objects. Each of
these BatchGet objects can be used for retrieving items from a single DynamoDB table.
To retrieve the items from the table(s), use the ExecuteBatchGet method, passing the
MultiTableBatchGet object as a parameter.
CreateMultiTableBatchWrite
Creates a MultiTableBatchWrite object, composed of multiple individual BatchWrite objects. Each
of these BatchWrite objects can be used for writing or deleting items in a single DynamoDB table.
To write to the table(s), use the ExecuteBatchWrite method, passing the MultiTableBatchWrite
object as a parameter.
CreateBatchGet
Creates a BatchGet object that you can use to retrieve multiple items from a table. For more
information, see Batch Get: Getting Multiple Items (p. 310).
CreateBatchWrite
Creates a BatchWrite object that you can use to put multiple items into a table, or to delete multiple
items from a table. For more information, see Batch Write: Putting and Deleting Multiple Items
(p. 308).
Delete
Deletes an item from the table. The method requires the primary key of the item you want to delete.
You can provide either the primary key value or a client-side object containing a primary key value as a
parameter to this method.
• If you specify a client-side object as a parameter and you have enabled optimistic locking, the delete
succeeds only if the client-side and the server-side versions of the object match.
• If you specify only the primary key value as a parameter, the delete succeeds regardless of whether
you have enabled optimistic locking or not.
Note
To perform this operation in the background, use the DeleteAsync method instead.
Dispose
Disposes of all managed and unmanaged resources.
ExecuteBatchGet
Reads data from one or more tables, processing all of the BatchGet objects in a
MultiTableBatchGet.
Note
To perform this operation in the background, use the ExecuteBatchGetAsync method
instead.
ExecuteBatchWrite
Writes or deletes data in one or more tables, processing all of the BatchWrite objects in a
MultiTableBatchWrite.
Note
To perform this operation in the background, use the ExecuteBatchWriteAsync method
instead.
FromDocument
Given an instance of a Document, the FromDocument method returns an instance of a client-side class.
This is helpful if you want to use the document model classes along with the object persistence model to
perform any data operations. For more information about the document model classes provided by the
AWS SDK for .NET, see .NET: Document Model (p. 273).
Suppose you have a Document object named doc, containing a representation of a Forum item. (To
see how to construct this object, see the description for the ToDocument method below.) You can use
FromDocument to retrieve the Forum item from the Document as shown in the following C# code
snippet.
Example
Forum forum101 = context.FromDocument<Forum>(doc);
Note
If your Document object implements the IEnumerable interface, you can use the
FromDocuments method instead. This will allow you to iterate over all of the class instances in
the Document.
FromQuery
Executes a Query operation, with the query parameters defined in a QueryOperationConfig object.
Note
To perform this operation in the background, use the FromQueryAsync method instead.
FromScan
Executes a Scan operation, with the scan parameters defined in a ScanOperationConfig object.
Note
To perform this operation in the background, use the FromScanAsync method instead.
GetTargetTable
Retrieves the target table for the specified type. This is useful if you are writing a custom converter for
mapping arbitrary data to a DynamoDB table and need to determine which table is associated with a
custom data type.
Load
Retrieves an item from a table. The method requires only the primary key of the item you want to
retrieve.
By default, DynamoDB returns the item with values that are eventually consistent. For information on
the eventual consistency model, see Read Consistency (p. 15).
Note
To perform this operation in the background, use the LoadAsync method instead.
Query
Queries a table based on query parameters you provide.
You can query a table only if it has a composite primary key (partition key and sort key). When querying,
you must specify a partition key and a condition that applies to the sort key.
Suppose you have a client-side Reply class mapped to the Reply table in DynamoDB. The following
C# code snippet queries the Reply table to find forum thread replies posted in the past 15 days. The
Reply table has a primary key that has the Id partition key and the ReplyDateTime sort key. For more
information about the Reply table, see Creating Tables and Loading Sample Data (p. 323).
Example
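A minimal sketch of that query, assuming the Reply class mapping shown earlier (the partition key value is illustrative):
DynamoDBContext context = new DynamoDBContext(client);
string replyId = "DynamoDB#DynamoDB Thread 1";  // Partition key value.
DateTime twoWeeksAgoDate = DateTime.UtcNow - TimeSpan.FromDays(15);
IEnumerable<Reply> latestReplies = context.Query<Reply>(replyId, QueryOperator.GreaterThan, twoWeeksAgoDate);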
The Query method returns a "lazy-loaded" IEnumerable collection. It initially returns only one page of
results, and then makes a service call for the next page if needed. To obtain all the matching items, you
only need to iterate over the IEnumerable.
If your table has a simple primary key (partition key), then you cannot use the Query method. Instead,
you can use the Load method and provide the partition key to retrieve the item.
Note
To perform this operation in the background, use the QueryAsync method instead.
Save
Saves the specified object to the table. If the primary key specified in the input object does not exist in the table, the method adds a new item to the table. If the primary key exists, the method updates the existing item.
If you have optimistic locking configured, the update succeeds only if the client and the server side
versions of the item match. For more information, see Optimistic Locking Using Version Number with
DynamoDB Using the AWS SDK for .NET Object Persistence Model (p. 304).
Note
To perform this operation in the background, use the SaveAsync method instead.
Scan
Performs an entire table scan.
You can filter scan results by specifying a scan condition. The condition can be evaluated on any attribute in the table. Suppose that you have a client-side class Book mapped to the ProductCatalog table in DynamoDB. The following C# snippet scans the table and returns only the book items priced less than 0.
Example
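A minimal sketch of that scan, assuming a DynamoDBContext named context and the Book class shown earlier:
IEnumerable<Book> itemsWithWrongPrice = context.Scan<Book>(
    new ScanCondition("Price", ScanOperator.LessThan, 0));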
The Scan method returns a "lazy-loaded" IEnumerable collection. It initially returns only one page of
results, and then makes a service call for the next page if needed. To obtain all the matching items, you
only need to iterate over the IEnumerable.
For performance reasons, you should query your tables rather than performing a full table scan whenever possible.
Note
To perform this operation in the background, use the ScanAsync method instead.
ToDocument
Returns an instance of the Document document model class from your class instance.
This is helpful if you want to use the document model classes along with the object persistence model to
perform any data operations. For more information about the document model classes provided by the
AWS SDK for .NET, see .NET: Document Model (p. 273).
Suppose you have a client-side class mapped to the sample Forum table. You can then use a
DynamoDBContext to get an item, as a Document object, from the Forum table as shown in the
following C# code snippet.
Example
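A minimal sketch, assuming a Forum class mapped to the Forum table (the key value is illustrative):
Forum forumItem = context.Load<Forum>("Amazon DynamoDB");
Document doc = context.ToDocument<Forum>(forumItem);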
• ConsistentRead—When retrieving data using the Load, Query or Scan operations you can optionally
add this parameter to request the latest values for the data.
• IgnoreNullValues—This parameter informs DynamoDBContext to ignore null values on attributes during a Save operation. If this parameter is false (or if it is not set), then a null value is interpreted as a directive to delete the specific attribute.
• SkipVersionCheck—This parameter informs DynamoDBContext not to compare versions when saving or deleting an item. For more information about versioning, see Optimistic Locking Using Version Number with DynamoDB Using the AWS SDK for .NET Object Persistence Model (p. 304).
• TableNamePrefix—Prefixes all table names with a specific string. If this parameter is null (or if it is not set), then no prefix is used.
The following C# snippet creates a new DynamoDBContext by specifying two of the preceding optional
parameters.
Example
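A minimal sketch, setting ConsistentRead and SkipVersionCheck at the context level (assuming an AmazonDynamoDBClient named client):
DynamoDBContext context = new DynamoDBContext(client,
    new DynamoDBContextConfig { ConsistentRead = true, SkipVersionCheck = true });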
DynamoDBContext includes these optional parameters with each request you send using this context.
Instead of setting these parameters at the DynamoDBContext level, you can specify them for individual
operations you execute using DynamoDBContext as shown in the following C# code snippet. The
example loads a specific book item. The Load method of DynamoDBContext specifies the preceding
optional parameters.
Example
...
DynamoDBContext context = new DynamoDBContext(client);
Book bookItem = context.Load<Book>(productId, new DynamoDBContextConfig { ConsistentRead = true, SkipVersionCheck = true });
In this case DynamoDBContext includes these parameters only when sending the Get request.
• bool
• byte
• char
• DateTime
• decimal
• double
• float
• Int16
• Int32
• Int64
• SByte
• string
• UInt16
• UInt32
• UInt64
The object persistence model also supports the .NET collection types. DynamoDBContext is able to
convert concrete collection types and simple Plain Old CLR Objects (POCOs).
The following table summarizes the mapping of the preceding .NET types to the DynamoDB types.
The object persistence model also supports arbitrary data types. However, you must provide converter
code to map the complex types to the DynamoDB types.
The optimistic locking feature of the object persistence model provides the DynamoDBVersion tag that
you can use to enable optimistic locking. To use this feature you add a property to your class for storing
the version number. You add the DynamoDBVersion attribute on the property. When you first save the
object, the DynamoDBContext assigns a version number and increments this value each time you update
the item.
Your update or delete request succeeds only if the client-side object version matches the corresponding
version number of the item on the server-side. If your application has a stale copy, it must get the latest
version from the server before it can update or delete that item.
The following C# code snippet defines a Book class with object persistence attributes mapping it to the ProductCatalog table. The VersionNumber property, decorated with the DynamoDBVersion attribute, stores the version number value.
Example
[DynamoDBTable("ProductCatalog")]
public class Book
{
[DynamoDBHashKey] //Partition key
public int Id { get; set; }
[DynamoDBProperty]
public string Title { get; set; }
[DynamoDBProperty]
public string ISBN { get; set; }
[DynamoDBProperty("Authors")]
public List<string> BookAuthors { get; set; }
[DynamoDBVersion]
public int? VersionNumber { get; set; }
}
Note
You can apply the DynamoDBVersion attribute only to a nullable numeric primitive type (such
as int?).
• For a new item, DynamoDBContext assigns an initial version number of 0. If you retrieve an existing item, update one or more of its properties, and attempt to save the changes, the save operation succeeds only if the version number on the client side and the server side match. DynamoDBContext increments the version number; you don't need to set it yourself.
• The Delete method provides overloads that can take either a primary key value or an object as
parameter as shown in the following C# code snippet.
Example
// Load a book.
Book book = context.Load<Book>(111);
// Do other operations.
// Delete 1 - Pass in the book object.
context.Delete<Book>(book);
// Delete 2 - Pass in the primary key value.
context.Delete<Book>(222);
If you provide an object as the parameter, then the delete succeeds only if the object version matches
the corresponding server-side item version. However, if you provide a primary key value as the
parameter, the DynamoDBContext is unaware of any version numbers and it deletes the item without
making the version check.
Note that the internal implementation of optimistic locking in the object persistence model code uses
the conditional update and the conditional delete API actions in DynamoDB.
Instead of setting the property at the context level, you can disable optimistic locking for a specific operation, as shown in the following C# code snippet. The code example uses the context to delete a book item. The Delete method sets the optional SkipVersionCheck property to true, disabling the version check.
Example
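A minimal sketch of such a delete, reusing the book object loaded earlier:
// Delete the book, skipping the version check for this operation only.
context.Delete<Book>(book, new DynamoDBOperationConfig { SkipVersionCheck = true });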
You can create any type on the client side; however, the data stored in the tables must be one of the DynamoDB types, and during query and scan operations any data comparisons are made against the data stored in DynamoDB.
The following C# code example defines a Book class with Id, Title, ISBN, and Dimension properties.
The Dimension property is of the DimensionType that describes Height, Width, and Thickness
properties. The example code provides the converter methods, ToEntry and FromEntry to convert
data between the DimensionType and the DynamoDB string types. For example, when saving a Book
instance, the converter creates a book Dimension string such as "8.5x11x.05", and when you retrieve a
book, it converts the string to a DimensionType instance.
The example maps the Book type to the ProductCatalog table. For illustration, it saves a sample Book
instance, retrieves it, updates its dimensions and saves the updated Book again.
For step-by-step instructions on how to test the following sample, see .NET Code Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.Runtime;
using Amazon.SecurityToken;
namespace com.amazonaws.codesamples
{
class HighLevelMappingArbitraryData
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
static void Main(string[] args)
{
DynamoDBContext context = new DynamoDBContext(client);
// 1. Create a book.
DimensionType myBookDimensions = new DimensionType()
{
Length = 8M,
Height = 11M,
Thickness = 0.5M
};
context.Save(myBook);
bookRetrieved.Dimensions.Height += 1;
bookRetrieved.Dimensions.Length += 1;
bookRetrieved.Dimensions.Thickness += 0.2M;
// Update the book.
context.Save(bookRetrieved);
{
DimensionType bookDimensions = value as DimensionType;
if (bookDimensions == null) throw new ArgumentOutOfRangeException();
Note
When using the object persistence model, you can specify any number of operations in a batch.
However, DynamoDB limits the number of operations in a batch and the total size of the
batch in a batch operation. For more information about the specific limits, see
BatchWriteItem. If the API detects that your batch write request exceeded the allowed number of
write requests or the maximum allowed HTTP payload size, it breaks the batch into
several smaller batches. Additionally, if a response to a batch write returns unprocessed items,
the API automatically sends another batch request with those unprocessed items.
Suppose that you have defined a C# Book class that maps to the ProductCatalog table in
DynamoDB. The following C# code snippet uses the BatchWrite object to upload two items and delete
one item from the ProductCatalog table.
Example
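A minimal sketch of the setup; the item values and the deleted key are illustrative:

DynamoDBContext context = new DynamoDBContext(client);
var bookBatch = context.CreateBatchWrite<Book>();
// Put two new books.
bookBatch.AddPutItems(new List<Book>
{
    new Book { Id = 902, Title = "My book6 in batch write" },
    new Book { Id = 903, Title = "My book7 in batch write" }
});
// Delete an existing book, by its primary key.
bookBatch.AddDeleteKey(111);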
bookBatch.Execute();
To put or delete objects in multiple tables, do the following:
• Create one instance of the BatchWrite class for each type, and specify the items that you want to put or
delete, as described in the preceding section.
• Create an instance of MultiTableBatchWrite by using one of the following methods:
• Execute the Combine method on one of the BatchWrite objects that you created in the preceding
step.
• Create an instance of the MultiTableBatchWrite type by providing a list of BatchWrite objects.
• Execute the CreateMultiTableBatchWrite method of DynamoDBContext and pass in your list
of BatchWrite objects.
• Call the Execute method of MultiTableBatchWrite, which performs the specified put and delete
operations on various tables.
Suppose that you have defined Forum and Thread C# classes that map to the Forum and Thread tables
in DynamoDB. Also, suppose that the Thread class has versioning enabled. Because versioning is not
supported when using batch operations, you must explicitly disable versioning as shown in the following
C# code snippet. The code snippet uses the MultiTableBatchWrite object to perform a multi-table
update.
Example
// Create BatchWrite objects for each of the Forum and Thread classes.
var forumBatch = context.CreateBatchWrite<Forum>();
// Thread has versioning enabled, so skip the version check for this batch.
var threadBatch = context.CreateBatchWrite<Thread>(new DynamoDBOperationConfig { SkipVersionCheck = true });
threadBatch.AddPutItem(newThread);
// Combine the batches and perform the multi-table write.
new MultiTableBatchWrite(forumBatch, threadBatch).Execute();
For a working example, see Example: Batch Write Operation Using the AWS SDK for .NET Object
Persistence Model (p. 313).
Note
The DynamoDB batch API limits the number of writes in a batch and also limits the size of the
batch. For more information, see BatchWriteItem. When using the .NET object persistence model API,
you can specify any number of operations. However, if either the number of operations in a
batch or the size of the batch exceeds the limit, the .NET API breaks the batch write request into
smaller batches and sends multiple batch write requests to DynamoDB.
The following C# code sample retrieves three items from the ProductCatalog table. The items in the
result are not necessarily in the same order in which you specified the primary keys.
Example
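A minimal sketch of the setup; the key values are illustrative:

DynamoDBContext context = new DynamoDBContext(client);
var bookBatch = context.CreateBatchGet<Book>();
// Specify the primary keys of the three books to retrieve.
bookBatch.AddKey(111);
bookBatch.AddKey(222);
bookBatch.AddKey(333);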
bookBatch.Execute();
// Process result.
Console.WriteLine(bookBatch.Results.Count);
Book book1 = bookBatch.Results[0];
Book book2 = bookBatch.Results[1];
Book book3 = bookBatch.Results[2];
To retrieve objects from multiple tables, do the following:
• For each type, use the CreateBatchGet method to create an instance of the BatchGet class, and
provide the primary key values that you want to retrieve from each table.
• Create an instance of the MultiTableBatchGet class using one of the following methods:
• Execute the Combine method on one of the BatchGet objects you created in the preceding step.
• Create an instance of the MultiTableBatchGet type by providing a list of BatchGet objects.
• Execute the CreateMultiTableBatchGet method of DynamoDBContext and pass in your list of
BatchGet objects.
• Call the Execute method of MultiTableBatchGet, which returns the typed results in the individual
BatchGet objects.
The following C# code snippet retrieves multiple items from the Order and OrderDetail tables using the
CreateBatchGet method.
Example
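A minimal sketch of the setup, assuming Order and OrderDetail classes whose primary keys are an
OrderId partition key and, for OrderDetail, a ProductId sort key (all names and values illustrative):

var orderBatch = context.CreateBatchGet<Order>();
orderBatch.AddKey(101);
orderBatch.AddKey(102);

var orderDetailBatch = context.CreateBatchGet<OrderDetail>();
orderDetailBatch.AddKey(101, "P1");
orderDetailBatch.AddKey(102, "P2");

// Combine the batches and retrieve all of the items in one call.
orderBatch.Combine(orderDetailBatch).Execute();
// Process the results.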
Console.WriteLine(orderBatch.Results.Count);
Console.WriteLine(orderDetailBatch.Results.Count);
Example: CRUD Operations Using the AWS SDK for .NET Object
Persistence Model
The following C# code example declares a Book class with Id, Title, ISBN, and Authors properties. It uses
the object persistence attributes to map these properties to the ProductCatalog table in DynamoDB.
The code example then uses the DynamoDBContext to illustrate typical CRUD operations. The example
creates a sample Book instance and saves it to the ProductCatalog table. The example then retrieves
the book item, and updates its ISBN and Authors properties. Note that the update replaces the existing
authors list. The example finally deletes the book item.
For more information about the ProductCatalog table used in this example, see Creating Tables and
Loading Sample Data (p. 323). For step-by-step instructions to test the following sample, see .NET
Code Examples (p. 330).
Note
The following example does not work with .NET Core because it does not support synchronous
methods. For more information, see AWS Asynchronous APIs for .NET.
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class HighLevelItemCRUD
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
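// A minimal sketch of the Main method; the sample values are illustrative.
static void Main(string[] args)
{
    try
    {
        DynamoDBContext context = new DynamoDBContext(client);
        int bookID = 1001;

        // Create a book and save it.
        Book myBook = new Book
        {
            Id = bookID,
            Title = "Sample book title",
            ISBN = "111-1111111001",
            BookAuthors = new List<string> { "Author 1", "Author 2" }
        };
        context.Save(myBook);

        // Retrieve the book and update its ISBN and Authors properties.
        // Note that this update replaces the existing authors list.
        Book bookRetrieved = context.Load<Book>(bookID);
        bookRetrieved.ISBN = "222-2222221001";
        bookRetrieved.BookAuthors = new List<string> { "Author 1", "Author x" };
        context.Save(bookRetrieved);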
// Retrieve the updated book. This time, add the optional ConsistentRead
// parameter, using a DynamoDBContextConfig object.
Book updatedBook = context.Load<Book>(bookID, new DynamoDBContextConfig
{
ConsistentRead = true
});
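        // Delete the book.
        context.Delete<Book>(bookID);

        Console.WriteLine("To continue, press Enter");
        Console.ReadLine();
    }
    catch (AmazonDynamoDBException e) { Console.WriteLine(e.Message); }
    catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
    catch (Exception e) { Console.WriteLine(e.Message); }
}
}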
[DynamoDBTable("ProductCatalog")]
public class Book
{
[DynamoDBHashKey] //Partition key
public int Id
{
get; set;
}
[DynamoDBProperty]
public string Title
{
get; set;
}
[DynamoDBProperty]
public string ISBN
{
get; set;
}
[DynamoDBProperty("Authors")] //String Set datatype
public List<string> BookAuthors
{
get; set;
}
}
}
Example: Batch Write Operation Using the AWS SDK for .NET
Object Persistence Model
The following C# code example declares Book, Forum, Thread, and Reply classes and maps them to the
DynamoDB tables using the object persistence model attributes.
The code example then uses the DynamoDBContext to illustrate the following batch write operations.
• BatchWrite object to put and delete book items from the ProductCatalog table.
• MultiTableBatchWrite object to put and delete items from the Forum and the Thread tables.
For more information about the tables used in this example, see Creating Tables and Loading
Sample Data (p. 323). For step-by-step instructions to test the following sample, see .NET Code
Examples (p. 330).
Note
The following example does not work with .NET Core because it does not support synchronous
methods. For more information, see AWS Asynchronous APIs for .NET.
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.Runtime;
using Amazon.SecurityToken;
namespace com.amazonaws.codesamples
{
class HighLevelBatchWriteItem
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
// Create a new book to put in a batch write.
Book book = new Book
{
Id = 903,
InPublication = true,
ISBN = "903-11-11-1111",
PageCount = "200",
Price = 10,
ProductCategory = "Book",
Title = "My book4 in batch write"
};
[DynamoDBTable("Reply")]
public class Reply
{
[DynamoDBHashKey] //Partition key
public string Id
{
get; set;
}
[DynamoDBRangeKey] //Sort key
public DateTime ReplyDateTime
{
get; set;
}
// Explicit property mapping with object persistence model attributes.
[DynamoDBProperty("LastPostedBy")]
public string PostedBy
{
get; set;
}
// Property to store version number for optimistic locking.
[DynamoDBVersion]
public int? Version
{
get; set;
}
}
[DynamoDBTable("Thread")]
public class Thread
{
// PK mapping.
[DynamoDBHashKey] //Partition key
public string ForumName
{
get; set;
}
[DynamoDBRangeKey] //Sort key
public string Subject
{
get; set;
}
// Implicit mapping.
public string Message
{
get; set;
}
public string LastPostedBy
{
get; set;
}
public int Views
{
get; set;
}
public int Replies
{
get; set;
}
public bool Answered
{
get; set;
}
public DateTime LastPostedDateTime
{
get; set;
}
// Explicit mapping (property and table attribute names are different).
[DynamoDBProperty("Tags")]
public List<string> KeywordTags
{
get; set;
}
// Property to store version number for optimistic locking.
[DynamoDBVersion]
public int? Version
{
get; set;
}
}
[DynamoDBTable("Forum")]
public class Forum
{
[DynamoDBHashKey] //Partition key
public string Name
{
get; set;
}
// All the following properties are explicitly mapped,
// only to show how to provide mapping.
[DynamoDBProperty]
public int Threads
{
get; set;
}
[DynamoDBProperty]
public int Views
{
get; set;
}
[DynamoDBProperty]
public string LastPostBy
{
get; set;
}
[DynamoDBProperty]
public DateTime LastPostDateTime
{
get; set;
}
[DynamoDBProperty]
public int Messages
{
get; set;
}
}
[DynamoDBTable("ProductCatalog")]
public class Book
{
[DynamoDBHashKey] //Partition key
public int Id
{
get; set;
}
public string Title
{
get; set;
}
public string ISBN
{
get; set;
}
public int Price
{
get; set;
}
public string PageCount
{
get; set;
}
public string ProductCategory
{
get; set;
}
public bool InPublication
{
get; set;
}
}
}
Example: Query and Scan in DynamoDB Using the AWS SDK for .NET Object Persistence Model
The following example executes these query and scan operations using DynamoDBContext.
The ProductCatalog table has Id as its primary key and no sort key. Therefore, you cannot query the
table; you can only get an item using its Id value.
• Execute the following queries against the Reply table. (The Reply table's primary key is composed of
the Id and ReplyDateTime attributes; because ReplyDateTime is a sort key, you can query this table.)
• Find replies to a forum thread posted in the last 15 days.
• Find replies to a forum thread posted in a specific date range.
• Scan the ProductCatalog table to find books whose price is less than zero.
For performance reasons, you should use a query operation instead of a scan operation. However,
there are times when you might need to scan a table. Suppose that there was a data entry error and one
of the book prices was set to less than 0. This example scans the ProductCatalog table to find book items
(items whose ProductCategory is Book) with a price of less than 0.
For instructions about creating a working sample, see .NET Code Examples (p. 330).
Note
The following example does not work with .NET Core because it does not support synchronous
methods. For more information, see AWS Asynchronous APIs for .NET.
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 */
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class HighLevelQueryAndScan
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
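// A minimal sketch of the Main method; parameter values are illustrative.
static void Main(string[] args)
{
    try
    {
        DynamoDBContext context = new DynamoDBContext(client);
        // Query the Reply table.
        FindRepliesPostedWithinTimePeriod(context, "Amazon DynamoDB", "DynamoDB Thread 1");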
// Scan table.
FindProductsPricedLessThanZero(context);
Console.WriteLine("To continue, press Enter");
Console.ReadLine();
}
catch (AmazonDynamoDBException e) { Console.WriteLine(e.Message); }
catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
catch (Exception e) { Console.WriteLine(e.Message); }
}
private static void FindRepliesPostedWithinTimePeriod(DynamoDBContext context, string forumName,
string threadSubject)
{
string forumId = forumName + "#" + threadSubject;
Console.WriteLine("\nFindRepliesPostedWithinTimePeriod: Printing result.....");
[DynamoDBTable("Reply")]
public class Reply
{
[DynamoDBHashKey] //Partition key
public string Id
{
get; set;
}
[DynamoDBRangeKey] //Sort key
public DateTime ReplyDateTime
{
get; set;
}
}
[DynamoDBTable("Thread")]
public class Thread
{
// PK mapping.
[DynamoDBHashKey] //Partition key
public string ForumName
{
get; set;
}
[DynamoDBRangeKey] //Sort key
public string Subject
{
get; set;
}
}
[DynamoDBTable("Forum")]
public class Forum
{
[DynamoDBHashKey]
public string Name
{
get; set;
}
// All the following properties are explicitly mapped,
// only to show how to provide mapping.
[DynamoDBProperty]
public int Threads
{
get; set;
}
[DynamoDBProperty]
public int Views
{
get; set;
}
[DynamoDBProperty]
public string LastPostBy
{
get; set;
}
[DynamoDBProperty]
public DateTime LastPostDateTime
{
get; set;
}
[DynamoDBProperty]
public int Messages
{
get; set;
}
}
[DynamoDBTable("ProductCatalog")]
public class Book
{
[DynamoDBHashKey] //Partition key
public int Id
{
get; set;
}
public string Title
{
get; set;
}
public string ISBN
{
get; set;
}
public int Price
{
get; set;
}
public string PageCount
{
get; set;
}
public string ProductCategory
{
get; set;
}
public bool InPublication
{
get; set;
}
}
}
The code examples in this Developer Guide provide more in-depth coverage of DynamoDB operations,
using the following programming languages:
• Java
• .NET
Creating Tables and Loading Sample Data
Before you begin this exercise, you need to create an AWS account, get your access key and
secret key, and set up the AWS Command Line Interface (AWS CLI) on your computer. If you haven't done
this already, see Setting Up DynamoDB (Web Service) (p. 48).
Note
If you are using the downloadable version of DynamoDB, you need to use the AWS CLI to create
the tables and sample data. You also need to specify the --endpoint-url parameter with
each AWS CLI command. For more information, see Setting the Local Endpoint (p. 47).
These tables and their data are used as examples throughout this Developer Guide.
Note
If you are an application developer, we recommend that you also read Getting Started with
DynamoDB, where you use the downloadable version of DynamoDB. This lets you learn about
the DynamoDB low-level API for free, without having to pay any fees for throughput, storage, or
data transfer. For more information, see Getting Started with DynamoDB SDK (p. 74).
Topics
• Step 1: Create Example Tables (p. 323)
• Step 2: Load Data into Tables (p. 325)
• Step 3: Query the Data (p. 326)
• Step 4: (Optional) Clean up (p. 328)
• Summary (p. 328)
You can create a ProductCatalog table, where each item is uniquely identified by a single, numeric
attribute: Id.
You can model this application by creating three tables: Forum, Thread, and Reply.
The Reply table has a global secondary index named PostedBy-Message-Index. This index facilitates
queries on two non-key attributes of the Reply table.
• In the Partition key box, type ForumName. Set the data type to String.
• Choose Add sort key.
• In the Sort key box, type Subject. Set the data type to String.
4. When the settings are as you want them, choose Create.
• In the Partition key box, type Id. Set the data type to String.
• Choose Add sort key.
• In the Sort key box, type ReplyDateTime. Set the data type to String.
c. In the Table settings section, clear Use default settings.
d. In the Secondary indexes section, choose Add index.
e. In the Add index window, do the following:
You will download a .zip archive that contains JSON files with sample data for each table. For each file,
you will use the AWS CLI to load the data into DynamoDB. Each successful data load will produce the
following output:
{
"UnprocessedItems": {}
}
1. Download the sample data archive: sampledata.zip
2. Extract the .json data files from the archive.
3. Copy the .json data files to your current directory.
Repeat this procedure for each of the other tables you created:
• Forum
• Thread
• Reply
Take some time to explore your other tables using the DynamoDB console:
• ProductCatalog
• Forum
• Thread
However, if you don't want to keep these tables, you should delete them to avoid being charged for
resources you don't need.
Repeat this procedure for each of the other tables you created:
• Forum
• Thread
• Reply
Summary
In this exercise, you used the DynamoDB console to create several tables in DynamoDB. You then used
the AWS CLI to load data into the tables, and performed some basic operations on the data using the
DynamoDB console.
The DynamoDB console and the AWS CLI are helpful for getting started quickly. However, you probably
want to learn more about how DynamoDB works, and how to write application programs with
DynamoDB. The rest of this Developer Guide addresses those topics.
This Developer Guide contains Java code snippets and ready-to-run programs. You can find these code
examples in the following sections:
You can get started quickly by using Eclipse with the AWS Toolkit for Eclipse. In addition to a full-
featured IDE, you also get the AWS SDK for Java with automatic updates, and preconfigured templates
for building AWS applications.
If this is your first time using the AWS Toolkit for Eclipse, choose Configure AWS Accounts to
set up your AWS credentials.
6. Choose Finish to create the project.
7. From the Eclipse menu, choose File, New, and then Class.
8. In Java Class, type a name for your class in Name (use the same name as the code sample that you
want to run), and then choose Finish to create the class.
9. Copy the code sample from the documentation page you are reading into the Eclipse editor.
10. To run the code, choose Run in the Eclipse menu.
The SDK for Java provides thread-safe clients for working with DynamoDB. As a best practice, your
applications should create one client and reuse the client between threads.
The following is an example of an AWS credentials file named ~/.aws/credentials, where the tilde
character (~) represents your home directory:
[default]
aws_access_key_id = AWS access key ID goes here
aws_secret_access_key = Secret key goes here
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.regions.Regions;
...
// This client will default to US West (Oregon)
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
.withRegion(Regions.US_WEST_2)
.build();
You can use the withRegion method to run your code against Amazon DynamoDB in any region where
it is available. For a complete list, see AWS Regions and Endpoints in the Amazon Web Services General
Reference.
If you want to run the code examples using DynamoDB locally on your computer, you need to set the
endpoint, like this:
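A sketch of the endpoint override, assuming DynamoDB local is listening on the default port 8000:

import com.amazonaws.client.builder.AwsClientBuilder;
...
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withEndpointConfiguration(
        new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
    .build();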
This Developer Guide contains .NET code snippets and ready-to-run programs. You can find these code
examples in the following sections:
You can get started quickly by using the AWS SDK for .NET with the Toolkit for Visual Studio.
If this is your first time using Toolkit for Visual Studio, choose Use a new profile to set up your AWS
credentials.
6. In your Visual Studio project, choose the tab for your program's source code (Program.cs). Copy
the code sample from the documentation page you are reading into the Visual Studio editor,
replacing any other code that you see in the editor.
7. If you see error messages of the form The type or namespace name...could not be found,
you need to install the AWS SDK assembly for DynamoDB as follows:
a. In Solution Explorer, open the context (right-click) menu for your project, and then choose
Manage NuGet Packages.
b. In NuGet Package Manager, choose Browse.
c. In the search box, type AWSSDK.DynamoDBv2 and wait for the search to complete.
d. Choose AWSSDK.DynamoDBv2, and then choose Install.
e. When the installation is complete, choose the Program.cs tab to return to your program.
8. To run the code, choose the Start button in the Visual Studio toolbar.
The AWS SDK for .NET provides thread-safe clients for working with DynamoDB. As a best practice, your
applications should create one client and reuse the client between threads.
The Toolkit for Visual Studio supports multiple sets of credentials from any number of accounts. Each
set is referred to as a profile. Visual Studio adds entries to the project's App.config file so that your
application can find the AWS credentials at runtime.
The following example shows the default App.config file that is generated when you create a new
project using Toolkit for Visual Studio:
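A representative App.config file might look like the following (the profile name and region values
depend on your setup):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="AWSProfileName" value="default" />
    <add key="AWSRegion" value="us-west-2" />
  </appSettings>
</configuration>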
At runtime, the program uses the default set of AWS credentials, as specified by the AWSProfileName
entry. The AWS credentials themselves are kept in the SDK Store, in encrypted form. The Toolkit for
Visual Studio provides a graphical user interface for managing your credentials, all from within Visual
Studio. For more information, see Specifying Credentials in the AWS Toolkit for Visual Studio User Guide.
Note
By default, the code samples access DynamoDB in the US West (Oregon) region. You can change
the region by modifying the AWSRegion entry in the App.config file. You can set AWSRegion
to any AWS region where Amazon DynamoDB is available. For a complete list, see AWS Regions
and Endpoints in the Amazon Web Services General Reference.
The following code snippet instantiates a new AmazonDynamoDBClient. The client is modified so that
the code runs against DynamoDB in a different region.
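AmazonDynamoDBConfig clientConfig = new AmazonDynamoDBConfig();
// Replace with the region that you want to use.
clientConfig.RegionEndpoint = RegionEndpoint.USEast1;
AmazonDynamoDBClient client = new AmazonDynamoDBClient(clientConfig);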
For a complete list of regions, see AWS Regions and Endpoints in the Amazon Web Services General
Reference.
If you want to run the code examples using DynamoDB locally on your computer, you need to set the
endpoint:
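AmazonDynamoDBConfig clientConfig = new AmazonDynamoDBConfig();
// Point the client at DynamoDB running locally (default port 8000).
clientConfig.ServiceURL = "http://localhost:8000";
AmazonDynamoDBClient client = new AmazonDynamoDBClient(clientConfig);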
Topics
• Working with Tables in DynamoDB (p. 333)
• Working with Items in DynamoDB (p. 372)
• Working with Queries (p. 455)
• Working with Scans (p. 473)
• Improving Data Access with Secondary Indexes (p. 493)
• Capturing Table Activity with DynamoDB Streams (p. 566)
This section also provides more information about throughput capacity, whether you use DynamoDB
auto scaling or manually set provisioned throughput.
Topics
• Basic Operations for Tables (p. 333)
• Considerations When Changing Read/Write Capacity Mode (p. 338)
• Managing Throughput Settings on Provisioned Tables (p. 339)
• DynamoDB Item Sizes (p. 343)
• Managing Throughput Capacity Automatically with DynamoDB Auto Scaling (p. 343)
• Tagging for DynamoDB (p. 357)
• Working with Tables: Java (p. 360)
• Working with Tables: .NET (p. 365)
Creating a Table
Use the CreateTable operation to create a table. You must provide the following information:
• Table name. The name must conform to the DynamoDB naming rules, and must be unique for the
current AWS account and Region. For example, you could create a People table in US East (N. Virginia)
and another People table in EU (Ireland). However, these two tables would be entirely different from
each other. For more information, see Naming Rules and Data Types (p. 11).
• Primary key. The primary key can consist of one attribute (partition key) or two attributes (partition
key and sort key). You need to provide the attribute names, data types, and the role of each attribute:
HASH (for a partition key) and RANGE (for a sort key). For more information, see Primary Key (p. 5).
• Throughput settings (for provisioned tables). If using provisioned mode, you must specify
the initial read and write throughput settings for the table. You can modify these settings later,
or enable DynamoDB auto scaling to manage the settings for you. For more information, see
Managing Throughput Settings on Provisioned Tables (p. 339) and Managing Throughput Capacity
Automatically with DynamoDB Auto Scaling (p. 343).
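For example, the following AWS CLI command creates a Music table with an Artist partition key and a
SongTitle sort key (a sketch consistent with the sample output shown following):

aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5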
The CreateTable operation returns metadata for the table, as shown following:
{
"TableDescription": {
"TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/Music",
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 5,
"ReadCapacityUnits": 10
},
"TableSizeBytes": 0,
"TableName": "Music",
"TableStatus": "CREATING",
"TableId": "12345678-0123-4567-a123-abcdefghijkl",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "Artist"
},
{
"KeyType": "RANGE",
"AttributeName": "SongTitle"
}
],
"ItemCount": 0,
"CreationDateTime": 1542397215.37
}
}
The TableStatus element indicates the current state of the table (CREATING). It might take
a while to create the table, depending on the values you specify for ReadCapacityUnits and
WriteCapacityUnits. Larger values for these require DynamoDB to allocate more resources for the
table.
If you create the table in on-demand capacity mode instead (--billing-mode PAY_PER_REQUEST), the
CreateTable operation returns metadata like the following:
{
"TableDescription": {
"TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/Music",
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 0,
"ReadCapacityUnits": 0
},
"TableSizeBytes": 0,
"TableName": "Music",
"BillingModeSummary": {
"BillingMode": "PAY_PER_REQUEST"
},
"TableStatus": "CREATING",
"TableId": "12345678-0123-4567-a123-abcdefghijkl",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "Artist"
},
{
"KeyType": "RANGE",
"AttributeName": "SongTitle"
}
],
"ItemCount": 0,
"CreationDateTime": 1542397468.348
}
}
Describing a Table
To view details about a table, use the DescribeTable operation. You must provide the table name.
The output from DescribeTable is in the same format as that from CreateTable; it includes the
timestamp when the table was created, its key schema, its provisioned throughput settings, its estimated
size, and any secondary indexes that are present.
Important
When calling DescribeTable on an on-demand table, read capacity units and write capacity
units are set to 0.
Example
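aws dynamodb describe-table --table-name Music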
The table is ready for use when the TableStatus has changed from CREATING to ACTIVE.
Note
If you issue a DescribeTable request immediately after a CreateTable request, DynamoDB
might return an error (ResourceNotFoundException). This is because DescribeTable uses
an eventually consistent query, and the metadata for your table might not be available at that
moment. Wait for a few seconds, and then try the DescribeTable request again.
For billing purposes, your DynamoDB storage costs include a per-item overhead of 100 bytes.
(For more information, go to DynamoDB Pricing.) This extra 100 bytes per item is not used in
capacity unit calculations or by the DescribeTable operation.
Updating a Table
The UpdateTable operation allows you to do one of the following:
• Modify a table's provisioned throughput settings (for tables in provisioned mode).
• Change the table's read/write capacity mode.
Example
This AWS CLI example shows how to modify a table's provisioned throughput settings:
aws dynamodb update-table --table-name Music \
    --provisioned-throughput ReadCapacityUnits=20,WriteCapacityUnits=10
Note
When you issue an UpdateTable request, the status of the table changes from ACTIVE to
UPDATING. The table remains fully available for use while it is UPDATING. When this process is
completed, the table status changes from UPDATING back to ACTIVE.
Example
This AWS CLI example shows how to modify a table's read/write capacity mode to on-demand mode:
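aws dynamodb update-table --table-name Music \
    --billing-mode PAY_PER_REQUEST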
Deleting a Table
You can remove an unused table with the DeleteTable operation. Deleting a table is an unrecoverable
operation.
Example
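aws dynamodb delete-table --table-name Music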
When you issue a DeleteTable request, the table's status changes from ACTIVE to DELETING. It might
take a while to delete the table, depending on the resources it uses (such as the data stored in the table,
and any streams or indexes on the table).
When the DeleteTable operation concludes, the table no longer exists in DynamoDB.
Example
This AWS CLI example shows how to list the DynamoDB table names:
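aws dynamodb list-tables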
Example
This AWS CLI example shows how to describe the current provisioned throughput limits:
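aws dynamodb describe-limits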
The output shows the upper limits of read and write capacity units for the current AWS account and
Region.
For more information about these limits, and how to request limit increases, see Throughput Default
Limits (p. 874).
Managing Capacity
When you update a table from provisioned to on-demand mode, you don't need to specify how much
read and write throughput you expect your application to perform.
Consider the following when you update a table from on-demand to provisioned mode:
• If you're using the AWS Management Console, the console estimates initial provisioned capacity values
based on the consumed read and write capacity of your table and global secondary indexes over the
past 30 minutes. To override these recommended values, choose Override recommended values.
• If you're using the AWS CLI or AWS SDK, choose the right provisioned capacity settings of your table
and global secondary indexes by using Amazon CloudWatch to look at your historical consumption
(ConsumedWriteCapacityUnits and ConsumedReadCapacityUnits metrics) to determine the
new throughput settings.
Note
If switching a global table to provisioned mode, look at the maximum consumption across
all your regional replicas for base tables and global secondary indexes when determining the
new throughput settings.
• If you're using the console, all of your auto scaling settings (if any) will be deleted.
• If you're using the AWS CLI or AWS SDK, all of your auto scaling settings will be preserved. These
settings can be applied when you update your table to provisioned billing mode again.
• If you're using the console, DynamoDB will recommend enabling auto scaling with the following
defaults:
• Target utilization: 70%
• Minimum provisioned capacity: 5 units
• Maximum provisioned capacity: The region maximum
• If you're using the AWS CLI or SDK, your previous auto scaling settings (if any) will be preserved.
API Version 2012-08-10
338
Amazon DynamoDB Developer Guide
Managing Throughput Settings on Provisioned Tables
When you create a new provisioned table in DynamoDB, you must specify its provisioned throughput
capacity—the amount of read and write activity that the table will be able to support. DynamoDB uses
this information to reserve sufficient system resources to meet your throughput requirements.
Note
You can create an on-demand mode table instead so that you don't have to manage any
capacity settings for servers, storage, or throughput. DynamoDB instantly accommodates your
workloads as they ramp up or down to any previously reached traffic level. If a workload’s
traffic level hits a new peak, DynamoDB adapts rapidly to accommodate the workload. For more
information, see On-Demand Mode (p. 16).
You can optionally allow DynamoDB auto scaling to manage your table's throughput capacity. However,
you still must provide initial settings for read and write capacity when you create the table. DynamoDB
auto scaling uses these initial settings as a starting point, and then adjusts them dynamically in
response to your application's requirements. For more information, see Managing Throughput Capacity
Automatically with DynamoDB Auto Scaling (p. 343).
As your application data and access requirements change, you might need to adjust your table's
throughput settings. If you're using DynamoDB auto scaling, the throughput settings are automatically
adjusted in response to actual workloads. You can also use the UpdateTable operation to manually
adjust your table's throughput capacity. You might decide to do this if you need to bulk-load data from
an existing data store into your new DynamoDB table. You could create the table with a large write
throughput setting and then reduce this setting after the bulk data load is complete.
You specify throughput requirements in terms of capacity units—the amount of data your application
needs to read or write per second. You can modify these settings later, if needed, or enable DynamoDB
auto scaling to modify them automatically.
For example, suppose that you create a table with 10 provisioned read capacity units. This allows you to
perform 10 strongly consistent reads per second, or 20 eventually consistent reads per second, for items
up to 4 KB.
Reading an item larger than 4 KB consumes more read capacity units. For example, a strongly consistent
read of an item that is 8 KB (4 KB × 2) consumes 2 read capacity units. An eventually consistent read on
that same item consumes only 1 read capacity unit.
Item sizes for reads are rounded up to the next 4 KB multiple. For example, reading a 3,500-byte item
consumes the same throughput as reading a 4 KB item.
• GetItem—Reads a single item from a table. To determine the number of capacity units GetItem will
consume, take the item size and round it up to the next 4 KB boundary. If you specified a strongly
consistent read, this is the number of capacity units required. For an eventually consistent read (the
default), divide this number by two.
For example, if you read an item that is 3.5 KB, DynamoDB rounds the item size to 4 KB. If you read an
item of 10 KB, DynamoDB rounds the item size to 12 KB.
• BatchGetItem—Reads up to 100 items, from one or more tables. DynamoDB processes each item
in the batch as an individual GetItem request, so DynamoDB first rounds up the size of each item to
the next 4 KB boundary, and then calculates the total size. The result is not necessarily the same as
the total size of all the items. For example, if BatchGetItem reads a 1.5 KB item and a 6.5 KB item,
DynamoDB calculates the size as 12 KB (4 KB + 8 KB), not 8 KB (1.5 KB + 6.5 KB).
• Query—Reads multiple items that have the same partition key value. All of the items returned are
treated as a single read operation, where DynamoDB computes the total size of all items and then
rounds up to the next 4 KB boundary. For example, suppose your query returns 10 items whose
combined size is 40.8 KB. DynamoDB rounds the item size for the operation to 44 KB. If a query
returns 1500 items of 64 bytes each, the cumulative size is 96 KB.
• Scan—Reads all of the items in a table. DynamoDB considers the size of the items that are evaluated,
not the size of the items returned by the scan.
If you perform a read operation on an item that does not exist, DynamoDB still consumes provisioned
read throughput: A strongly consistent read request consumes one read capacity unit, while an
eventually consistent read request consumes 0.5 of a read capacity unit.
For any operation that returns items, you can request a subset of attributes to retrieve; however, doing
so has no impact on the item size calculations. In addition, Query and Scan can return item counts
instead of attribute values. Getting the count of items uses the same quantity of read capacity units and
is subject to the same item size calculations. This is because DynamoDB has to read each item in order to
increment the count.
The preceding calculations assume strongly consistent read requests. For an eventually consistent read
request, the operation consumes only half the capacity units. For an eventually consistent read, if the
total item size is 80 KB, the operation consumes only 10 capacity units.
For example, suppose that you create a table with 10 write capacity units. This allows you to perform 10
writes per second, for items up to 1 KB in size per second.
Item sizes for writes are rounded up to the next 1 KB multiple. For example, writing a 500-byte item
consumes the same throughput as writing a 1 KB item.
• PutItem—Writes a single item to a table. If an item with the same primary key exists in the table, the
operation replaces the item. For calculating provisioned throughput consumption, the item size that
matters is the larger of the two.
• UpdateItem—Modifies a single item in the table. DynamoDB considers the size of the item as it
appears before and after the update. The provisioned throughput consumed reflects the larger of
these item sizes. Even if you update just a subset of the item's attributes, UpdateItem will still
consume the full amount of provisioned throughput (the larger of the "before" and "after" item sizes).
• DeleteItem—Removes a single item from a table. The provisioned throughput consumption is based
on the size of the deleted item.
• BatchWriteItem—Writes up to 25 items to one or more tables. DynamoDB processes each item
in the batch as an individual PutItem or DeleteItem request (updates are not supported). So
DynamoDB first rounds up the size of each item to the next 1 KB boundary, and then calculates the
total size. The result is not necessarily the same as the total size of all the items. For example, if
BatchWriteItem writes a 500 byte item and a 3.5 KB item, DynamoDB calculates the size as 5 KB (1
KB + 4 KB), not 4 KB (500 bytes + 3.5 KB).
For PutItem, UpdateItem, and DeleteItem operations, DynamoDB rounds the item size up to the
next 1 KB. For example, if you put or delete an item of 1.6 KB, DynamoDB rounds the item size up to 2
KB.
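The following C# sketch (illustrative helper names; not part of the AWS SDK) expresses the read and
write capacity unit arithmetic described above:

static double ReadCapacityUnits(double itemSizeKB, bool stronglyConsistent)
{
    // Round the item size up to the next 4 KB boundary.
    double units = Math.Ceiling(itemSizeKB / 4.0);
    // An eventually consistent read consumes half as much capacity.
    return stronglyConsistent ? units : units / 2.0;
}

static double WriteCapacityUnits(double itemSizeKB)
{
    // Round the item size up to the next 1 KB boundary.
    return Math.Ceiling(itemSizeKB);
}

// ReadCapacityUnits(8, true) == 2; ReadCapacityUnits(8, false) == 1;
// WriteCapacityUnits(1.6) == 2.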
PutItem, UpdateItem, and DeleteItem allow conditional writes, where you specify an expression
that must evaluate to true in order for the operation to succeed. If the expression evaluates to false,
DynamoDB still consumes write capacity units from the table:
• For an existing item, the number of write capacity units consumed depends on the size of the new
item. (For example, a failed conditional write of a 1 KB item would consume one write capacity unit.
If the new item were twice that size, the failed conditional write would consume two write capacity
units.)
• For a new item, DynamoDB consumes one write capacity unit.
The DynamoDB console displays Amazon CloudWatch metrics for your tables, so you can monitor
throttled read requests and write requests. If you encounter excessive throttling, you should consider
increasing your table's provisioned throughput settings.
In some cases, DynamoDB uses burst capacity to accommodate reads or writes in excess of your table's
throughput settings. With burst capacity, unexpected read or write requests can succeed where they
otherwise would be throttled. For more information, see Using Burst Capacity Effectively (p. 809).
• Item sizes. Some items are small enough that they can be read or written using a single capacity unit.
Larger items require multiple capacity units. By estimating the sizes of the items that will be in your
table, you can specify accurate settings for your table's provisioned throughput.
• Expected read and write request rates. In addition to item size, you should estimate the number of
reads and writes you need to perform per second.
• Read consistency requirements. Read capacity units are based on strongly consistent read operations,
which consume twice as many database resources as eventually consistent reads. You should
determine whether your application requires strongly consistent reads, or whether it can relax this
requirement and perform eventually consistent reads instead. (Read operations in DynamoDB are
eventually consistent, by default. You can request strongly consistent reads for these operations if
necessary.)
For example, suppose that you want to read 80 items per second from a table. The items are 3 KB in
size, and you want strongly consistent reads. For this scenario, each read requires one provisioned read
capacity unit. To determine this number, you divide the item size of the operation by 4 KB, and then
round up to the nearest whole number, as in this example:
• 3 KB / 4 KB = 0.75, which rounds up to 1 read capacity unit
For this scenario, you have to set the table's provisioned read throughput to 80 read capacity units:
• 1 read capacity unit per item × 80 reads per second = 80 read capacity units
Now suppose that you want to write 100 items per second to your table, and that the items are 512
bytes in size. For this scenario, each write requires one provisioned write capacity unit. To determine
this number, you divide the item size of the operation by 1 KB, and then round up to the nearest whole
number:
• 512 bytes / 1 KB = 0.5, which rounds up to 1 write capacity unit
For this scenario, you would want to set the table's provisioned write throughput to 100 write capacity
units:
• 1 write capacity unit per item × 100 writes per second = 100 write capacity units
Note
For recommendations on provisioned throughput and related topics, see Best Practices for
Designing and Using Partition Keys Effectively (p. 809).
You can modify your table's provisioned throughput settings using the AWS Management Console or the
UpdateTable operation. For more information about throughput increases and decreases per day, see
Limits in DynamoDB (p. 873).
The total size of an item is the sum of the lengths of its attribute names and values. You can use the
following guidelines to estimate attribute sizes:
• Strings are Unicode with UTF-8 binary encoding. The size of a string is (length of attribute name) +
(number of UTF-8-encoded bytes).
• Numbers are variable length, with up to 38 significant digits. Leading and trailing zeroes are trimmed.
The size of a number is approximately (length of attribute name) + (1 byte per two significant digits) +
(1 byte).
• A binary value must be encoded in base64 format before it can be sent to DynamoDB, but the value's
raw byte length is used for calculating size. The size of a binary attribute is (length of attribute name) +
(number of raw bytes).
• The size of a null attribute or a Boolean attribute is (length of attribute name) + (1 byte).
• An attribute of type List or Map requires 3 bytes of overhead, regardless of its contents. The size of
a List or Map is (length of attribute name) + sum (size of nested elements) + (3 bytes) . The size of an
empty List or Map is (length of attribute name) + (3 bytes).
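For example, consider an item with a string attribute named Title whose value is "Book 101 Title" (an
illustrative value): the attribute contributes 5 bytes for the name plus 14 bytes for the UTF-8-encoded
value, or 19 bytes. A number attribute named Price with a value of 25 contributes 5 bytes for the name
plus approximately 2 bytes for the value, or roughly 7 bytes.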
Note
We recommend that you choose shorter attribute names rather than long ones. This helps you
reduce the amount of storage required for your data.
Many database workloads are cyclical in nature or are difficult to predict in advance. For example,
consider a social networking app where most of the users are active during daytime hours. The database
must be able to handle the daytime activity, but there's no need for the same levels of throughput
at night. Another example might be a new mobile gaming app that is experiencing rapid adoption.
If the game becomes too popular, it could exceed the available database resources, resulting in slow
performance and unhappy customers. These kinds of workloads often require manual intervention to
scale database resources up or down in response to varying usage levels.
DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned
throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global
secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic,
without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so
that you don't pay for unused provisioned capacity.
Note
If you use the AWS Management Console to create a table or a global secondary index,
DynamoDB auto scaling is enabled by default. You can modify your auto scaling settings at any
time. For more information, see Using the AWS Management Console With DynamoDB Auto
Scaling (p. 346).
With Application Auto Scaling, you create a scaling policy for a table or a global secondary index. The
scaling policy specifies whether you want to scale read capacity or write capacity (or both), and the
minimum and maximum provisioned capacity unit settings for the table or index.
The scaling policy also contains a target utilization—the percentage of consumed provisioned
throughput at a point in time. Application Auto Scaling uses a target tracking algorithm to adjust the
provisioned throughput of the table (or index) upward or downward in response to actual workloads, so
that the actual capacity utilization remains at or near your target utilization.
You can set the auto scaling target utilization values between 20% and 90% for your read and write
capacity.
Note
In addition to tables, DynamoDB auto scaling also supports global secondary indexes. Every
global secondary index has its own provisioned throughput capacity, separate from that of its
base table. When you create a scaling policy for a global secondary index, Application Auto
Scaling adjusts the provisioned throughput settings for the index to ensure that its actual
utilization stays at or near your desired utilization ratio.
The following diagram provides a high-level overview of how DynamoDB auto scaling manages
throughput capacity for a table:
The following steps summarize the auto scaling process as shown in the previous diagram:
1. You create an Application Auto Scaling policy for your DynamoDB table.
2. DynamoDB publishes consumed capacity metrics to Amazon CloudWatch.
3. If the table's consumed capacity exceeds your target utilization (or falls below the target) for a
specific length of time, Amazon CloudWatch triggers an alarm. You can view the alarm on the AWS
Management Console and receive notifications using Amazon Simple Notification Service (Amazon
SNS).
4. The CloudWatch alarm invokes Application Auto Scaling to evaluate your scaling policy.
5. Application Auto Scaling issues an UpdateTable request to adjust your table's provisioned
throughput.
6. DynamoDB processes the UpdateTable request, dynamically increasing (or decreasing) the table's
provisioned throughput capacity so that it approaches your target utilization.
To understand how DynamoDB auto scaling works, suppose that you have a table named ProductCatalog.
The table is bulk-loaded with data infrequently, so it doesn't incur very much write activity. However,
it does experience a high degree of read activity, which varies over time. By monitoring the Amazon
CloudWatch metrics for ProductCatalog, you determine that the table requires 1,200 read capacity
units (to avoid DynamoDB throttling read requests when activity is at its peak). You also determine that
ProductCatalog requires 150 read capacity units at a minimum, when read traffic is at its lowest point.
Within the range of 150 to 1,200 read capacity units, you decide that a target utilization of 70 percent
would be appropriate for the ProductCatalog table. Target utilization is the ratio of consumed capacity
units to provisioned capacity units, expressed as a percentage. Application Auto Scaling uses its target
tracking algorithm to ensure that the provisioned read capacity of ProductCatalog is adjusted as required
so that utilization remains at or near 70 percent.
Note
DynamoDB auto scaling modifies provisioned throughput settings only when the actual
workload stays elevated (or depressed) for a sustained period of several minutes. The
Application Auto Scaling target tracking algorithm seeks to keep the target utilization at or near
your chosen value over the long term.
Sudden, short-duration spikes of activity are accommodated by the table's built-in burst
capacity. For more information, see Using Burst Capacity Effectively (p. 809).
To enable DynamoDB auto scaling for the ProductCatalog table, you create a scaling policy. This policy
specifies the table or global secondary index that you want to manage, which capacity type to manage
(read capacity or write capacity), the upper and lower boundaries for the provisioned throughput
settings, and your target utilization.
When you create a scaling policy, Application Auto Scaling creates a pair of Amazon CloudWatch alarms
on your behalf. Each pair represents your upper and lower boundaries for provisioned throughput
settings. These CloudWatch alarms are triggered when the table's actual utilization deviates from your
target utilization for a sustained period of time.
When one of the CloudWatch alarms is triggered, Amazon SNS sends you a notification (if you have
enabled it). The CloudWatch alarm then invokes Application Auto Scaling, which in turn notifies
DynamoDB to adjust the ProductCatalog table's provisioned capacity upward or downward, as
appropriate.
Usage Notes
Before you begin using DynamoDB auto scaling, you should be aware of the following:
• DynamoDB auto scaling can increase read capacity or write capacity as often as necessary, in
accordance with your auto scaling policy. All DynamoDB limits remain in effect, as described in Limits
in DynamoDB.
• DynamoDB auto scaling doesn't prevent you from manually modifying provisioned throughput
settings. These manual adjustments don't affect any existing CloudWatch alarms that are related to
DynamoDB auto scaling.
• If you enable DynamoDB auto scaling for a table that has one or more global secondary indexes, we
highly recommend that you also apply auto scaling uniformly to those indexes. You can do this by
choosing Apply same settings to global secondary indexes in the AWS Management Console. For
more information, see Enabling DynamoDB Auto Scaling on Existing Tables (p. 347).
When you use the AWS Management Console to create a new table, DynamoDB auto scaling is enabled
for that table by default. You can also use the console to enable auto scaling for existing tables, modify
auto scaling settings, or disable auto scaling.
Note
For more advanced features, like setting scale-in and scale-out cooldown times, use the AWS
Command Line Interface (AWS CLI) to manage DynamoDB auto scaling.
Before You Begin: Grant User Permissions for DynamoDB Auto Scaling
In AWS Identity and Access Management (IAM), the AWS-managed policy DynamoDBFullAccess
provides the required permissions for using the DynamoDB console. However, for DynamoDB auto
scaling, IAM users will require some additional privileges.
Important
application-autoscaling:* permissions are required to delete an auto scaling-enabled
table. The AWS-managed policy DynamoDBFullAccess includes these
permissions.
To set up an IAM user for DynamoDB console access and DynamoDB auto scaling, add the following
policy:
• Select Read capacity, Write capacity, or both. (For write capacity, note that you can choose
Same settings as read.) For each of these, do the following:
To view these auto scaling activities in the DynamoDB console, choose the table that you want to work
with. Choose Capacity, and then expand the Scaling activities section. When your table's throughput
settings are modified, you will see informational messages here.
To disable DynamoDB auto scaling, go to the Capacity tab for your table and clear Read capacity, Write
capacity, or both.
Instead of using the AWS Management Console, you can use the AWS Command Line Interface (AWS
CLI) to manage DynamoDB auto scaling. The tutorial in this section demonstrates how to install and
configure the AWS CLI for managing DynamoDB auto scaling. In this tutorial, you do the following:
• Create a DynamoDB table named TestTable. The initial throughput settings are 5 read capacity units
and 5 write capacity units.
• Create an Application Auto Scaling policy for TestTable. The policy seeks to maintain a 50 percent
target ratio between consumed write capacity and provisioned write capacity. The range for this
metric is between 5 and 10 write capacity units. (Application Auto Scaling is not allowed to adjust the
throughput beyond this range.)
• Run a Python program to drive write traffic to TestTable. When the target ratio exceeds 50 percent for
a sustained period of time, Application Auto Scaling notifies DynamoDB to adjust the throughput of
TestTable upward, so that the 50 percent target utilization can be maintained.
• Verify that DynamoDB has successfully adjusted the provisioned write capacity for TestTable.
If you haven't already done so, you must install and configure the AWS CLI. To do this, go to the AWS
Command Line Interface User Guide and follow these instructions:
Install Python
Part of this tutorial requires you to run a Python program (see Step 4: Drive Write Traffic to
TestTable (p. 351)). If you don't already have Python installed, you can download it using this link:
https://www.python.org/downloads
Note
You can also register a scalable target against a GSI. For example, for a GSI ("test-index"),
note that the resource id and scalable dimension arguments are updated appropriately:
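For example (a sketch, assuming the index belongs to TestTable):

aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb \
    --resource-id "table/TestTable/index/test-index" \
    --scalable-dimension "dynamodb:index:WriteCapacityUnits" \
    --min-capacity 5 \
    --max-capacity 10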
Note
To further understand how TargetValue works, suppose that you have a table with a
provisioned throughput setting of 200 write capacity units. You decide to create a scaling policy
for this table, with a TargetValue of 70 percent.
Now suppose that you begin driving write traffic to the table so that the actual write throughput
is 150 capacity units. The consumed-to-provisioned ratio is now (150 / 200), or 75 percent. This
ratio exceeds your target, so Application Auto Scaling increases the provisioned write capacity
to 215 so that the ratio is (150 / 215), or 69.77 percent—as close to your TargetValue as
possible, but not exceeding it.
For TestTable, you set TargetValue to 50 percent. Application Auto Scaling adjusts the table's
provisioned throughput within the range of 5 to 10 capacity units (see Step 2: Register a Scalable
Target (p. 349)) so that the consumed-to-provisioned ratio remains at or near 50 percent. You set the
values for ScaleOutCooldown and ScaleInCooldown to 60 seconds.
{
"PredefinedMetricSpecification": {
"PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
},
"ScaleOutCooldown": 60,
"ScaleInCooldown": 60,
"TargetValue": 50.0
}
3. In the output, note that Application Auto Scaling has created two CloudWatch alarms—one each for
the upper and lower boundary of the scaling target range.
4. Use the following AWS CLI command to view more details about the scaling policy:
5. In the output, verify that the policy settings match your specifications from Step 2: Register a
Scalable Target (p. 349) and Step 3: Create a Scaling Policy (p. 350).
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table("TestTable")

# Large attribute value, so that each write consumes noticeable capacity.
filler = "x" * 100000

i = 0
while (i < 10):
    j = 0
    while (j < 10):
        table.put_item(
            Item={
                'pk': i,
                'sk': j,
                'filler': filler
            }
        )
        j += 1
    i += 1
Save the program as bulk-load-test-table.py, and then run it:
python bulk-load-test-table.py
The provisioned write capacity for TestTable is very low (5 write capacity units), so the program stalls
occasionally due to write throttling. This is expected behavior.
Let the program continue running, while you move on to the next step.
1. Type the following command to view the Application Auto Scaling actions:
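aws application-autoscaling describe-scaling-activities \
    --service-namespace dynamodb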
Rerun this command occasionally, while the Python program is running. (It will take several minutes
before your scaling policy is invoked.) You should eventually see the following output:
...
{
"ScalableDimension": "dynamodb:table:WriteCapacityUnits",
"Description": "Setting write capacity units to 10.",
"ResourceId": "table/TestTable",
"ActivityId": "0cc6fb03-2a7c-4b51-b67f-217224c6b656",
"StartTime": 1489088210.175,
"ServiceNamespace": "dynamodb",
"EndTime": 1489088246.85,
"Cause": "monitor alarm AutoScaling-table/TestTable-AlarmHigh-1bb3c8db-1b97-4353-
baf1-4def76f4e1b9 in state ALARM triggered policy MyScalingPolicy",
"StatusMessage": "Successfully set write capacity units to 10. Change successfully
fulfilled by dynamodb.",
"StatusCode": "Successful"
},
...
This indicates that Application Auto Scaling has issued an UpdateTable request to DynamoDB.
2. Type the following command to verify that DynamoDB increased the table's write capacity:
aws dynamodb describe-table --table-name TestTable \
    --query "Table.[TableName,TableStatus,ProvisionedThroughput]"
The following Java programs demonstrate how to enable and disable DynamoDB auto scaling
programmatically:
• EnableDynamoDBAutoscaling.java
• DisableDynamoDBAutoscaling.java
• The program registers write capacity units as a scalable target for TestTable. The range for this metric
is between 5 and 10 write capacity units.
• After the scalable target is created, the program builds a target tracking configuration. The policy
seeks to maintain a 50 percent target ratio between consumed write capacity and provisioned write
capacity.
• The program then creates the scaling policy, based on the target tracking configuration.
The program requires that you supply an ARN for a valid Application Auto Scaling
service-linked role (for example:
"arn:aws:iam::122517410325:role/AWSServiceRoleForApplicationAutoScaling_DynamoDBTable").
In the following program, replace SERVICE_ROLE_ARN_GOES_HERE with the actual ARN.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.autoscaling;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClient;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScaling;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClientBuilder;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsResult;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesResult;
import com.amazonaws.services.applicationautoscaling.model.MetricType;
import com.amazonaws.services.applicationautoscaling.model.PolicyType;
import com.amazonaws.services.applicationautoscaling.model.PredefinedMetricSpecification;
import com.amazonaws.services.applicationautoscaling.model.PutScalingPolicyRequest;
import com.amazonaws.services.applicationautoscaling.model.RegisterScalableTargetRequest;
import com.amazonaws.services.applicationautoscaling.model.ScalableDimension;
import com.amazonaws.services.applicationautoscaling.model.ServiceNamespace;
import com.amazonaws.services.applicationautoscaling.model.TargetTrackingScalingPolicyConfiguration;
ServiceNamespace ns = ServiceNamespace.Dynamodb;
ScalableDimension tableWCUs = ScalableDimension.DynamodbTableWriteCapacityUnits;
String resourceID = "table/TestTable";

AWSApplicationAutoScaling aaClient = AWSApplicationAutoScalingClientBuilder.defaultClient();

// Register write capacity for TestTable as a scalable target (5 to 10 write capacity units).
RegisterScalableTargetRequest rstRequest = new RegisterScalableTargetRequest()
    .withServiceNamespace(ns).withScalableDimension(tableWCUs).withResourceId(resourceID)
    .withMinCapacity(5).withMaxCapacity(10).withRoleARN("SERVICE_ROLE_ARN_GOES_HERE");

try {
    aaClient.registerScalableTarget(rstRequest);
} catch (Exception e) {
    System.err.println("Unable to register scalable target: ");
    System.err.println(e.getMessage());
}

System.out.println();

// Create a target tracking policy: 50 percent target utilization,
// with 60-second scale-in and scale-out cooldowns.
PutScalingPolicyRequest pspRequest = new PutScalingPolicyRequest()
    .withServiceNamespace(ns).withScalableDimension(tableWCUs).withResourceId(resourceID)
    .withPolicyName("MyScalingPolicy").withPolicyType(PolicyType.TargetTrackingScaling)
    .withTargetTrackingScalingPolicyConfiguration(new TargetTrackingScalingPolicyConfiguration()
        .withPredefinedMetricSpecification(new PredefinedMetricSpecification()
            .withPredefinedMetricType(MetricType.DynamoDBWriteCapacityUtilization))
        .withTargetValue(50.0).withScaleInCooldown(60).withScaleOutCooldown(60));

try {
    aaClient.putScalingPolicy(pspRequest);
} catch (Exception e) {
    System.err.println("Unable to put scaling policy: ");
    System.err.println(e.getMessage());
}

// Verify the scaling policy settings.
DescribeScalingPoliciesRequest dspRequest = new DescribeScalingPoliciesRequest()
    .withServiceNamespace(ns).withResourceId(resourceID).withScalableDimension(tableWCUs);

try {
    DescribeScalingPoliciesResult dspResult = aaClient.describeScalingPolicies(dspRequest);
    System.out.println("DescribeScalingPolicies result: ");
    System.out.println(dspResult);
} catch (Exception e) {
    e.printStackTrace();
    System.err.println("Unable to describe scaling policy: ");
    System.err.println(e.getMessage());
}
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.autoscaling;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScaling;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClient;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClientBuilder;
import com.amazonaws.services.applicationautoscaling.model.DeleteScalingPolicyRequest;
import com.amazonaws.services.applicationautoscaling.model.DeregisterScalableTargetRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsResult;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesResult;
import com.amazonaws.services.applicationautoscaling.model.ScalableDimension;
import com.amazonaws.services.applicationautoscaling.model.ServiceNamespace;
ServiceNamespace ns = ServiceNamespace.Dynamodb;
ScalableDimension tableWCUs = ScalableDimension.DynamodbTableWriteCapacityUnits;
String resourceID = "table/TestTable";

AWSApplicationAutoScaling aaClient = AWSApplicationAutoScalingClientBuilder.defaultClient();

// Delete the scaling policy.
DeleteScalingPolicyRequest delSPRequest = new DeleteScalingPolicyRequest()
    .withServiceNamespace(ns).withScalableDimension(tableWCUs).withResourceId(resourceID)
    .withPolicyName("MyScalingPolicy");

try {
    aaClient.deleteScalingPolicy(delSPRequest);
} catch (Exception e) {
    System.err.println("Unable to delete scaling policy: ");
    System.err.println(e.getMessage());
}

// Verify that the scaling policy was deleted.
DescribeScalingPoliciesRequest descSPRequest = new DescribeScalingPoliciesRequest()
    .withServiceNamespace(ns).withResourceId(resourceID).withScalableDimension(tableWCUs);

try {
    DescribeScalingPoliciesResult dspResult = aaClient.describeScalingPolicies(descSPRequest);
    System.out.println("DescribeScalingPolicies result: ");
    System.out.println(dspResult);
} catch (Exception e) {
    e.printStackTrace();
    System.err.println("Unable to describe scaling policy: ");
    System.err.println(e.getMessage());
}

System.out.println();

// Remove the scalable target.
DeregisterScalableTargetRequest delSTRequest = new DeregisterScalableTargetRequest()
    .withServiceNamespace(ns).withScalableDimension(tableWCUs).withResourceId(resourceID);

try {
    aaClient.deregisterScalableTarget(delSTRequest);
} catch (Exception e) {
    System.err.println("Unable to deregister scalable target: ");
    System.err.println(e.getMessage());
}

// Verify that the scalable target no longer exists.
DescribeScalableTargetsRequest dscRequest = new DescribeScalableTargetsRequest()
    .withServiceNamespace(ns).withResourceIds(resourceID);

try {
    DescribeScalableTargetsResult dsaResult = aaClient.describeScalableTargets(dscRequest);
    System.out.println("DescribeScalableTargets result: ");
    System.out.println(dsaResult);
    System.out.println();
} catch (Exception e) {
    System.err.println("Unable to describe scalable target: ");
    System.err.println(e.getMessage());
}
Tagging is supported by AWS services like Amazon EC2, Amazon S3, DynamoDB, and more. Efficient
tagging can provide cost insights by allowing you to create reports across services that carry a specific
tag.
Finally, it is good practice to follow optimal tagging strategies. For information, see AWS Tagging
Strategies.
Tagging Restrictions
Each tag consists of a key and a value, both of which you define. The following restrictions apply:
• Each DynamoDB table can have only one tag with the same key. If you try to add an existing tag (same
key), the existing tag value will be updated to the new value.
• Tag keys and values are case sensitive.
• Maximum key length: 128 Unicode characters
• Maximum value length: 256 Unicode characters
• Allowed characters are letters, whitespace, and numbers, plus the following special characters: + - = . _ : /
• Maximum number of tags per resource: 50
• AWS-assigned tag names and values are automatically assigned the aws: prefix, which you cannot
assign. AWS-assigned tag names do not count toward the tag limit of 50. User-assigned tag names
have the prefix user: in the cost allocation report.
• You cannot backdate the application of a tag.
Tagging Operations
You can use the Amazon DynamoDB console or the AWS Command Line Interface (AWS CLI) to add, list,
edit, or delete tags. You can then activate these user-defined tags so that they appear on the AWS Billing
and Cost Management console for cost allocation tracking. For more information, see Cost Allocation
Reports (p. 360).
For bulk editing, you can also use Tag Editor on the AWS Management Console. For more information,
see Working with Tag Editor.
To use the DynamoDB API instead, see the following operations in the Amazon DynamoDB API
Reference:
• TagResource
• UntagResource
• ListTagsOfResource
Topics
• Adding Tags to New or Existing Tables (Console) (p. 358)
• Adding Tags to New or Existing Tables (AWS CLI) (p. 359)
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. In the navigation pane, choose Tables, and then choose Create table.
3. On the Create DynamoDB table page, provide a name and primary key. Choose Add tags and enter
the tags that you want to use.
For information about tag structure, see Tagging Restrictions (p. 358).
For more information about creating tables, see Basic Operations for Tables (p. 333).
• The following example creates a new Movies table and adds the Owner tag with a value of
blueTeam:
• The following example adds the Owner tag with a value of blueTeam for the Movies table:
• The following example lists all the tags that are associated with the Movies table:
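If you are working in Python rather than the CLI, a minimal boto3 sketch of the tagging operations above might look like the following. The Region and account ID in the table ARN are placeholders, not values from this guide.
import boto3

dynamodb = boto3.client('dynamodb')

# The table ARN; the Region and account ID here are placeholders.
movies_arn = 'arn:aws:dynamodb:us-east-1:123456789012:table/Movies'

# Add the Owner tag with a value of blueTeam.
dynamodb.tag_resource(
    ResourceArn=movies_arn,
    Tags=[{'Key': 'Owner', 'Value': 'blueTeam'}])

# List all tags associated with the table.
response = dynamodb.list_tags_of_resource(ResourceArn=movies_arn)
print(response['Tags'])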
• An AWS-generated tag. AWS defines, creates, and applies this tag for you.
• User-defined tags. You define, create, and apply these tags.
You must activate both types of tags separately before they can appear in Cost Explorer or on a cost
allocation report.
1. Sign in to the AWS Management Console and open the Billing and Cost Management console at
https://console.aws.amazon.com/billing/home#/.
2. In the navigation pane, choose Cost Allocation Tags.
3. Under AWS-Generated Cost Allocation Tags, choose Activate.
1. Sign in to the AWS Management Console and open the Billing and Cost Management console at
https://console.aws.amazon.com/billing/home#/.
2. In the navigation pane, choose Cost Allocation Tags.
3. Under User-Defined Cost Allocation Tags, choose Activate.
After you create and activate tags, AWS generates a cost allocation report with your usage and costs
grouped by your active tags. The cost allocation report includes all of your AWS costs for each billing
period. The report includes both tagged and untagged resources, so that you can clearly organize the
charges for resources.
Note
Currently, any data transferred out from DynamoDB won't be broken down by tags on cost
allocation reports.
You can use the AWS SDK for Java to create, update, and delete tables, list all the tables in your account,
or get information about a specific table.
The following are the common steps for table operations using the AWS SDK for Java Document API.
Creating a Table
To create a table, you must provide the table name, its primary key, and the provisioned throughput
values. The following code snippet creates an example table that uses a numeric type attribute Id as its
primary key.
1. Create an instance of the DynamoDB class.
2. Instantiate a CreateTableRequest class to provide the request information. You must provide the
table name, attribute definitions, key schema, and provisioned throughput values.
3. Execute the createTable method by providing the request object as a parameter.
table.waitForActive();
The table will not be ready for use until DynamoDB creates it and sets its status to ACTIVE. The
createTable request returns a Table object that you can use to obtain more information about the
table.
Example
TableDescription tableDescription =
dynamoDB.getTable(tableName).describe();
You can call the describe method of the client to get table information at any time.
Example
Updating a Table
You can update only the provisioned throughput values of an existing table. Depending on your
application requirements, you might need to update these values.
Note
For more information on throughput increases and decreases per day, see Limits in
DynamoDB (p. 873).
Example
table.updateTable(provisionedThroughput);
table.waitForActive();
Deleting a Table
To delete a table:
Example
table.delete();
table.waitForDelete();
Listing Tables
To list tables in your account, create an instance of DynamoDB and execute the listTables method.
The ListTables operation requires no parameters.
Example
while (iterator.hasNext()) {
Table table = iterator.next();
System.out.println(table.getTableName());
}
Example: Create, Update, Delete, and List Tables Using the AWS
SDK for Java Document API
The following code sample uses the AWS SDK for Java Document API to create, update, and delete a
table (ExampleTable). As part of the table update, it increases the provisioned throughput values. The
example also lists all the tables in your account and gets the description of a specific table. For step-by-
step instructions to run the following example, see Java Code Examples (p. 328).
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.util.ArrayList;
import java.util.Iterator;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.TableCollection;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ListTablesResult;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.TableDescription;
createExampleTable();
listMyTables();
getTableInformation();
updateExampleTable();
deleteExampleTable();
}
try {
// key
getTableInformation();
}
catch (Exception e) {
System.err.println("CreateTable request failed for " + tableName);
System.err.println(e.getMessage());
}
while (iterator.hasNext()) {
Table table = iterator.next();
System.out.println(table.getTableName());
}
}
try {
table.updateTable(new
ProvisionedThroughput().withReadCapacityUnits(6L).withWriteCapacityUnits(7L));
table.waitForActive();
}
catch (Exception e) {
System.err.println("UpdateTable request failed for " + tableName);
System.err.println(e.getMessage());
}
}
table.waitForDelete();
}
catch (Exception e) {
System.err.println("DeleteTable request failed for " + tableName);
System.err.println(e.getMessage());
}
}
You can use the AWS SDK for .NET to create, update, and delete tables, list all the tables in your
account, or get information about a specific table.
The following are the common steps for table operations using the AWS SDK for .NET.
Note
The examples in this section do not work with .NET Core because it does not support synchronous
methods. For more information, see AWS Asynchronous APIs for .NET.
Creating a Table
To create a table, you must provide the table name, its primary key, and the provisioned throughput
values.
The following are the steps to create a table using the .NET low-level API.
You must provide the table name, primary key, and the provisioned throughput values.
3. Execute the AmazonDynamoDBClient.CreateTable method by providing the request object as a
parameter.
The following C# code snippet demonstrates the preceding steps. The sample creates a table
(ProductCatalog) that uses Id as the primary key and a set of provisioned throughput values. Depending
on your application requirements, you can update the provisioned throughput values by using the
UpdateTable API.
You must wait until DynamoDB creates the table and sets the table status to ACTIVE. The CreateTable
response includes the TableDescription property that provides the necessary table information.
Example
You can also call the DescribeTable method of the client to get table information at any time.
Example
Updating a Table
You can update only the provisioned throughput values of an existing table. Depending on your
application requirements, you might need to update these values.
Note
You can increase throughput capacity as often as needed, and decrease it within certain
constraints. For more information on throughput increases and decreases per day, see Limits in
DynamoDB (p. 873).
The following are the steps to update a table using the .NET low-level API.
You must provide the table name and the new provisioned throughput values.
3. Execute the AmazonDynamoDBClient.UpdateTable method by providing the request object as a
parameter.
Example
var request = new UpdateTableRequest()
{
    TableName = tableName,
ProvisionedThroughput = new ProvisionedThroughput()
{
// Provide new values.
ReadCapacityUnits = 20,
WriteCapacityUnits = 10
}
};
var response = client.UpdateTable(request);
Deleting a Table
The following are the steps to delete a table using the .NET low-level API.
Example
Listing Tables
To list tables in your account using the AWS SDK for .NET low-level API, create an instance of the
AmazonDynamoDBClient and execute the ListTables method. The ListTables operation requires
no parameters. However, you can specify optional parameters. For example, you can set the Limit
parameter if you want to use paging to limit the number of table names per page. This requires you to
create a ListTablesRequest object and provide optional parameters as shown in the following C#
code snippet. Along with the page size, the request sets the ExclusiveStartTableName parameter.
Initially, ExclusiveStartTableName is null. After fetching the first page of results, to retrieve
the next page, you must set this parameter value to the LastEvaluatedTableName property
of the current result.
Example
lastEvaluatedTableName = result.LastEvaluatedTableName;
Example: Create, Update, Delete, and List Tables Using the AWS
SDK for .NET Low-Level API
The following C# example creates, updates, and deletes a table (ExampleTable). It also lists all the
tables in your account and gets the description of a specific table. The table update increases the
provisioned throughput values. For step-by-step instructions to test the following sample, see .NET Code
Examples (p. 330).
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class LowLevelTableExample
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
private static string tableName = "ExampleTable";
DeleteExampleTable();
{
Console.WriteLine("\n*** Creating table ***");
var request = new CreateTableRequest
{
AttributeDefinitions = new List<AttributeDefinition>()
{
new AttributeDefinition
{
AttributeName = "Id",
AttributeType = "N"
},
new AttributeDefinition
{
AttributeName = "ReplyDateTime",
AttributeType = "N"
}
},
KeySchema = new List<KeySchemaElement>
{
new KeySchemaElement
{
AttributeName = "Id",
KeyType = "HASH" //Partition key
},
new KeySchemaElement
{
AttributeName = "ReplyDateTime",
KeyType = "RANGE" //Sort key
}
},
ProvisionedThroughput = new ProvisionedThroughput
{
ReadCapacityUnits = 5,
WriteCapacityUnits = 6
},
TableName = tableName
};
WaitUntilTableReady(tableName);
}
lastTableNameEvaluated = response.LastEvaluatedTableName;
} while (lastTableNameEvaluated != null);
}
WaitUntilTableReady(tableName);
}
In DynamoDB, an item is a collection of attributes. Each attribute has a name and a value. An attribute
value can be a scalar, a set, or a document type. For more information, see Amazon DynamoDB: How It
Works (p. 2).
Each of these operations requires you to specify the primary key of the item you want to work with. For
example, to read an item using GetItem, you must specify the partition key and sort key (if applicable)
for that item.
In addition to the four basic CRUD operations, DynamoDB also provides the following:
These batch operations combine multiple CRUD operations into a single request. In addition, the batch
operations read and write items in parallel to minimize response latencies.
This section describes how to use these operations and includes related topics, such as conditional
updates and atomic counters. This section also includes example code that uses the AWS SDKs.
Reading an Item
To read an item from a DynamoDB table, use the GetItem operation. You must provide the name of the
table, along with the primary key of the item you want.
Example
The following AWS CLI example shows how to read an item from the ProductCatalog table.
Note
With GetItem, you must specify the entire primary key, not just part of it. For example, if a
table has a composite primary key (partition key and sort key), you must supply a value for the
partition key and a value for the sort key.
A GetItem request performs an eventually consistent read by default. You can use the ConsistentRead
parameter to request a strongly consistent read instead. (This consumes additional read capacity
units, but it returns the most up-to-date version of the item.)
GetItem returns all of the item's attributes. You can use a projection expression to return only some of
the attributes. (For more information, see Projection Expressions (p. 386).)
To return the number of read capacity units consumed by GetItem, set the ReturnConsumedCapacity
parameter to TOTAL.
Example
The following AWS CLI example shows some of the optional GetItem parameters.
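If you are working in Python rather than the CLI, a minimal boto3 sketch of a GetItem call with these optional parameters might look like the following. The table name and key are taken from the ProductCatalog examples in this guide; the particular combination of parameters is for illustration only.
import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.get_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '123'}},
    ConsistentRead=True,                   # strongly consistent read
    ProjectionExpression='Title, Price',   # return only these attributes
    ReturnConsumedCapacity='TOTAL')        # report the read capacity consumed
print(response['Item'])
print(response['ConsumedCapacity'])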
Writing an Item
To create, update, or delete an item in a DynamoDB table, use one of the following operations:
• PutItem
• UpdateItem
• DeleteItem
For each of these operations, you need to specify the entire primary key, not just part of it. For example,
if a table has a composite primary key (partition key and sort key), you must supply a value for the
partition key and a value for the sort key.
To return the number of write capacity units consumed by any of these operations, set the
ReturnConsumedCapacity parameter to one of the following:
• TOTAL — returns the total number of write capacity units consumed.
• INDEXES — returns the total number of write capacity units consumed, with subtotals for the table
and any secondary indexes that were affected by the operation.
• NONE — no write capacity details are returned. (This is the default.)
PutItem
PutItem creates a new item. If an item with the same key already exists in the table, it is replaced with
the new item.
Example
Write a new item to the Thread table. The primary key for Thread consists of ForumName (partition key)
and Subject (sort key).
{
"ForumName": {"S": "Amazon DynamoDB"},
"Subject": {"S": "New discussion thread"},
"Message": {"S": "First post in this thread"},
"LastPostedBy": {"S": "fred@example.com"},
"LastPostDateTime": {"S": "201603190422"}
}
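A boto3 sketch of the same PutItem call, using the item shown above, might look like this:
import boto3

dynamodb = boto3.client('dynamodb')

# Write a new item to the Thread table.
dynamodb.put_item(
    TableName='Thread',
    Item={
        'ForumName': {'S': 'Amazon DynamoDB'},
        'Subject': {'S': 'New discussion thread'},
        'Message': {'S': 'First post in this thread'},
        'LastPostedBy': {'S': 'fred@example.com'},
        'LastPostDateTime': {'S': '201603190422'}
    })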
UpdateItem
If an item with the specified key does not exist, UpdateItem creates a new item. Otherwise, it modifies
an existing item's attributes.
You use an update expression to specify the attributes you want to modify and their new values. (For
more information, see Update Expressions (p. 398).) Within the update expression, you use expression
attribute values as placeholders for the actual values. (For more information, see Expression Attribute
Values (p. 389).)
Example
Modify various attributes in the Thread item. The optional ReturnValues parameter shows the item as
it appears after the update. (For more information, see Return Values (p. 375).)
aws dynamodb update-item \
    --table-name Thread \
    --key file://key.json \
    --update-expression "SET Answered = :zero, Replies = :zero, LastPostedBy = :lastpostedby" \
    --expression-attribute-values file://expression-attribute-values.json \
    --return-values ALL_NEW
The arguments for --key are stored in the file key.json:
{
"ForumName": {"S": "Amazon DynamoDB"},
"Subject": {"S": "New discussion thread"}
}
The arguments for --expression-attribute-values are stored in the file expression-attribute-values.json:
{
":zero": {"N":"0"},
":lastpostedby": {"S":"barney@example.com"}
}
DeleteItem
DeleteItem deletes the item with the specified key.
Example
This AWS CLI example shows how to delete the Thread item.
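As a Python alternative, a minimal boto3 sketch of the same DeleteItem call (using the Thread key from the preceding examples) follows:
import boto3

dynamodb = boto3.client('dynamodb')

# Delete the Thread item by its full primary key.
dynamodb.delete_item(
    TableName='Thread',
    Key={
        'ForumName': {'S': 'Amazon DynamoDB'},
        'Subject': {'S': 'New discussion thread'}
    })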
Return Values
In some cases, you might want DynamoDB to return certain attribute values as they appeared
before or after you modified them. The PutItem, UpdateItem, and DeleteItem operations have
a ReturnValues parameter that you can use to return the attribute values before or after they are
modified.
The default value for ReturnValues is NONE, meaning that DynamoDB will not return any information
about attributes that were modified.
The following are the other valid settings for ReturnValues, organized by DynamoDB API operation:
PutItem
• ReturnValues: ALL_OLD
• If you overwrite an existing item, ALL_OLD returns the entire item as it appeared before the
overwrite.
UpdateItem
The most common usage for UpdateItem is to update an existing item. However, UpdateItem actually
performs an upsert, meaning that it will automatically create the item if it does not already exist.
• ReturnValues: ALL_OLD
• If you update an existing item, ALL_OLD returns the entire item as it appeared before the update.
• If you update a nonexistent item (upsert), ALL_OLD has no effect.
• ReturnValues: ALL_NEW
• If you update an existing item, ALL_NEW returns the entire item as it appeared after the update.
• If you update a nonexistent item (upsert), ALL_NEW returns the entire item.
• ReturnValues: UPDATED_OLD
• If you update an existing item, UPDATED_OLD returns only the updated attributes, as they appeared
before the update.
• If you update a nonexistent item (upsert), UPDATED_OLD has no effect.
• ReturnValues: UPDATED_NEW
• If you update an existing item, UPDATED_NEW returns only the affected attributes, as they appeared
after the update.
• If you update a nonexistent item (upsert), UPDATED_NEW returns only the updated attributes, as
they appear after the update.
DeleteItem
• ReturnValues: ALL_OLD
• If you delete an existing item, ALL_OLD returns the entire item as it appeared before you deleted it.
• If you delete a nonexistent item, ALL_OLD does not return any data.
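As a brief illustration of ReturnValues (a sketch, using the Thread key from earlier examples), the following boto3 call deletes an item and prints it as it appeared before the deletion:
import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.delete_item(
    TableName='Thread',
    Key={
        'ForumName': {'S': 'Amazon DynamoDB'},
        'Subject': {'S': 'New discussion thread'}
    },
    ReturnValues='ALL_OLD')            # return the item as it was before the delete
print(response.get('Attributes'))      # absent if the item did not exist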
Batch Operations
For applications that need to read or write multiple items, DynamoDB provides the BatchGetItem
and BatchWriteItem operations. Using these operations can reduce the number of network round
trips from your application to DynamoDB. In addition, DynamoDB performs the individual read or
write operations in parallel. Your applications benefit from this parallelism without having to manage
concurrency or threading.
The batch operations are essentially wrappers around multiple read or write requests. For example, if
a BatchGetItem request contains five items, DynamoDB performs five GetItem operations on your
behalf. Similarly, if a BatchWriteItem request contains two put requests and four delete requests,
DynamoDB performs two PutItem and four DeleteItem requests.
In general, a batch operation does not fail unless all of the requests in the batch fail. For example,
suppose you perform a BatchGetItem operation, but one of the individual GetItem requests in the
batch fails. In this case, BatchGetItem returns the keys and data from the GetItem request that failed.
The other GetItem requests in the batch are not affected.
BatchGetItem
A single BatchGetItem operation can contain up to 100 individual GetItem requests and can retrieve
up to 16 MB of data. In addition, a BatchGetItem operation can retrieve items from multiple tables.
Example
Retrieve two items from the Thread table, using a projection expression to return only some of the
attributes.
{
"Thread": {
"Keys": [
{
"ForumName":{"S": "Amazon DynamoDB"},
"Subject":{"S": "DynamoDB Thread 1"}
},
{
"ForumName":{"S": "Amazon S3"},
"Subject":{"S": "S3 Thread 1"}
}
],
"ProjectionExpression":"ForumName, Subject, LastPostedDateTime, Replies"
}
}
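A boto3 sketch that submits the same request map follows:
import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.batch_get_item(
    RequestItems={
        'Thread': {
            'Keys': [
                {'ForumName': {'S': 'Amazon DynamoDB'},
                 'Subject': {'S': 'DynamoDB Thread 1'}},
                {'ForumName': {'S': 'Amazon S3'},
                 'Subject': {'S': 'S3 Thread 1'}}
            ],
            'ProjectionExpression': 'ForumName, Subject, LastPostedDateTime, Replies'
        }
    })
print(response['Responses']['Thread'])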
BatchWriteItem
The BatchWriteItem operation can contain up to 25 individual PutItem and DeleteItem requests
and can write up to 16 MB of data. (The maximum size of an individual item is 400 KB.) In addition, a
BatchWriteItem operation can put or delete items in multiple tables.
Note
BatchWriteItem does not support UpdateItem requests.
Example
{
"ProductCatalog": [
{
"PutRequest": {
"Item": {
"Id": { "N": "601" },
"Description": { "S": "Snowboard" },
"QuantityOnHand": { "N": "5" },
"Price": { "N": "100" }
}
}
},
{
"PutRequest": {
"Item": {
"Id": { "N": "602" },
"Description": { "S": "Snow shovel" }
}
}
}
]
}
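Similarly, a boto3 sketch of this BatchWriteItem request:
import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.batch_write_item(
    RequestItems={
        'ProductCatalog': [
            {'PutRequest': {'Item': {
                'Id': {'N': '601'},
                'Description': {'S': 'Snowboard'},
                'QuantityOnHand': {'N': '5'},
                'Price': {'N': '100'}}}},
            {'PutRequest': {'Item': {
                'Id': {'N': '602'},
                'Description': {'S': 'Snow shovel'}}}}
        ]
    })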
Atomic Counters
You can use the UpdateItem operation to implement an atomic counter—a numeric attribute that
is incremented, unconditionally, without interfering with other write requests. (All write requests
are applied in the order in which they were received.) With an atomic counter, the updates are not
idempotent. In other words, the numeric value will increment each time you call UpdateItem.
You might use an atomic counter to keep track of the number of visitors to a website. In this case,
your application would increment a numeric value, regardless of its current value. If an UpdateItem
operation should fail, the application could simply retry the operation. This would risk updating the
counter twice, but you could probably tolerate a slight overcounting or undercounting of website visitors.
An atomic counter would not be appropriate where overcounting or undercounting cannot be tolerated
(For example, in a banking application). In this case, it is safer to use a conditional update instead of an
atomic counter.
For more information, see Incrementing and Decrementing Numeric Attributes (p. 402).
Example
The following AWS CLI example increments the Price of a product by 5. (Because UpdateItem is not
idempotent, the Price will increase every time you run this example.)
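A boto3 sketch of an equivalent atomic-counter update follows; the Id value 601 is borrowed from the BatchWriteItem example above and is an assumption, not part of the original example.
import boto3

dynamodb = boto3.client('dynamodb')

# Unconditionally add 5 to Price; repeating this call increments it again.
dynamodb.update_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '601'}},   # assumed item key
    UpdateExpression='SET Price = Price + :incr',
    ExpressionAttributeValues={':incr': {'N': '5'}})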
Conditional Writes
By default, the DynamoDB write operations (PutItem, UpdateItem, DeleteItem) are unconditional:
each of these operations will overwrite an existing item that has the specified primary key.
DynamoDB optionally supports conditional writes for these operations. A conditional write will succeed
only if the item attributes meet one or more expected conditions. Otherwise, it returns an error.
Conditional writes are helpful in many situations. For example, you might want a PutItem operation
to succeed only if there is not already an item with the same primary key. Or you could prevent an
UpdateItem operation from modifying an item if one of its attributes has a certain value.
Conditional writes are helpful in cases where multiple users attempt to modify the same item. Consider
the following diagram, in which two users (Alice and Bob) are working with the same item from a
DynamoDB table:
Suppose that Alice uses the AWS CLI to update the Price attribute to 8:
{
":newval":{"N":"8"}
}
Now suppose that Bob issues a similar UpdateItem request later, but changes the Price to 12. For Bob,
the --expression-attribute-values parameter looks like this:
{
":newval":{"N":"12"}
}
Now consider the following diagram, showing how conditional writes would prevent Alice's update from
being overwritten:
Alice first attempts to update Price to 8, but only if the current Price is 10:
{
":newval":{"N":"8"},
":currval":{"N":"10"}
Next, Bob attempts to update the Price to 12, but only if the current Price is 10. For Bob, the --
expression-attribute-values parameter looks like this:
{
":newval":{"N":"12"},
":currval":{"N":"10"}
}
Because Alice has previously changed the Price to 8, the condition expression evaluates to false and
Bob's update fails.
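For illustration, here is a boto3 sketch of Bob's conditional update, including how the failed condition surfaces as an exception. The item key (Id 456) is assumed for this sketch; the expression values come from the example above.
import boto3

dynamodb = boto3.client('dynamodb')

try:
    dynamodb.update_item(
        TableName='ProductCatalog',
        Key={'Id': {'N': '456'}},   # assumed key for the item Alice and Bob share
        UpdateExpression='SET Price = :newval',
        ConditionExpression='Price = :currval',
        ExpressionAttributeValues={
            ':newval': {'N': '12'},
            ':currval': {'N': '10'}
        })
except dynamodb.exceptions.ConditionalCheckFailedException:
    # Alice already changed Price to 8, so Bob's condition fails.
    print('The conditional request failed')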
Note
Conditional writes can be idempotent if the conditional check is on the same attribute that is
being updated.
For example, suppose you issue an UpdateItem request to increase the Price of an item by 3, but only
if the Price is currently 20. After you send the request, but before you get the results back, a network
error occurs and you don't know whether the request was successful. Because this conditional write is
idempotent, you can retry the same UpdateItem request, and DynamoDB will update the item only if
the Price is currently 20.
If a conditional write fails because the condition expression evaluates to false, DynamoDB still consumes
write capacity from the table:
• If the item does not currently exist in the table, DynamoDB will consume one write capacity unit.
• If the item does exist, then the number of write capacity units consumed depends on the size of the
item. For example, a failed conditional write of a 1 KB item would consume one write capacity unit. If
the item were twice that size, the failed conditional write would consume two write capacity units.
Note
Write operations consume write capacity units only. They never consume read capacity units.
A failed conditional write will return a ConditionalCheckFailedException. When this occurs, you will not
receive any information in the response about the write capacity that was consumed. However, you
can view the ConsumedWriteCapacityUnits metric for the table in Amazon CloudWatch. (For more
information, see DynamoDB Metrics (p. 758) in Logging and Monitoring in DynamoDB (p. 755).)
To return the number of write capacity units consumed during a conditional write, you use the
ReturnConsumedCapacity parameter, as described earlier in Writing an Item.
Note
Unlike a global secondary index, a local secondary index shares its provisioned throughput
capacity with its table. Read and write activity on a local secondary index consumes provisioned
throughput capacity from the table.
Topics
• Specifying Item Attributes (p. 383)
• Projection Expressions (p. 386)
• Expression Attribute Names (p. 387)
• Expression Attribute Values (p. 389)
• Condition Expressions (p. 390)
• Update Expressions (p. 398)
Topics
• Top-Level Attributes (p. 384)
• Nested Attributes (p. 385)
• Document Paths (p. 385)
In this section, we will consider an item in the ProductCatalog table. (This table is described in Example
Tables and Data (p. 886).) Here is a representation of the item:
{
"Id": 123,
"Title": "Bicycle 123",
"Description": "123 description",
"BicycleType": "Hybrid",
"Brand": "Brand-Company C",
"Price": 500,
"Color": {"Red", "Black"},
"ProductCategory": "Bicycle",
"InStock": true,
"QuantityOnHand": null,
"RelatedItems": {
341,
472,
649
},
"Pictures": {
"FrontView": "http://example.com/products/123_front.jpg",
"RearView": "http://example.com/products/123_rear.jpg",
"SideView": "http://example.com/products/123_left_side.jpg"
},
"ProductReviews": {
"FiveStar": {
"Excellent! Can't recommend it highly enough! Buy it!",
"Do yourself a favor and buy this."
},
"OneStar": {
"Terrible product! Do not buy this."
}
},
"Comment": "This product sells out quickly during the summer",
"Safety.Warning": "Always wear a helmet"
}
Top-Level Attributes
An attribute is said to be top-level if it is not embedded within another attribute. For the ProductCatalog
item, the top-level attributes are:
• Id
• Title
• Description
• BicycleType
• Brand
• Price
• Color
• ProductCategory
• InStock
• QuantityOnHand
• RelatedItems
• Pictures
• ProductReviews
• Comment
• Safety.Warning
All of these top-level attributes are scalars, except for Color (list), RelatedItems (list), Pictures
(map) and ProductReviews (map).
Nested Attributes
An attribute is said to be nested if it is embedded within another attribute. To access a nested attribute,
you use dereference operators:
The dereference operator for a list element is [n], where n is the element number. List elements are zero-
based, so [0] represents the first element in the list, [1] represents the second, and so on. Here are some
examples:
• MyList[0]
• AnotherList[12]
• ThisList[5][11]
The element ThisList[5] is itself a nested list. Therefore, ThisList[5][11] refers to the twelfth
element in that list.
The number within the square brackets must be a non-negative integer. Therefore, the following
expressions are invalid:
• MyList[-1]
• MyList[0.4]
The dereference operator for a map element is . (a dot). Use a dot as a separator between elements in a
map:
• MyMap.nestedField
• MyMap.nestedField.deeplyNestedField
Document Paths
In an expression, you use a document path to tell DynamoDB where to find an attribute. For a top-level
attribute, the document path is simply the attribute name. For a nested attribute, you construct the
document path using dereference operators.
The following are some examples of document paths. (Refer to the item shown in Specifying Item
Attributes (p. 383).)
• A top-level scalar attribute: ProductDescription
• A top-level list attribute (this returns the entire list, not just some of the elements): RelatedItems
• The third element from the RelatedItems list (remember that list elements are zero-based): RelatedItems[2]
• The front-view picture of the product: Pictures.FrontView
• All of the five-star reviews: ProductReviews.FiveStar
• The first of the five-star reviews: ProductReviews.FiveStar[0]
Note
The maximum depth for a document path is 32. Therefore, the number of dereference
operators in a path cannot exceed this limit.
You can use any attribute name in a document path, provided that the first character is a-z or A-
Z and the second character (if present) is a-z, A-Z, or 0-9. If an attribute name does not meet
this requirement, you will need to define an expression attribute name as a placeholder. For more
information, see Expression Attribute Names (p. 387).
Projection Expressions
To read data from a table, you use operations such as GetItem, Query, or Scan. DynamoDB returns
all of the item attributes by default. To get just some, rather than all of the attributes, use a projection
expression.
A projection expression is a string that identifies the attributes you want. To retrieve a single attribute,
specify its name. For multiple attributes, the names must be comma-separated.
The following are some examples of projection expressions, based on the ProductCatalog item from
Specifying Item Attributes (p. 383):
• A single top-level attribute: Title
• Three top-level attributes (DynamoDB will retrieve the entire Color set): Title, Price, Color
You can use any attribute name in a projection expression, provided that the first character is a-z
or A-Z and the second character (if present) is a-z, A-Z, or 0-9. If an attribute name does not meet
this requirement, you will need to define an expression attribute name as a placeholder. For more
information, see Expression Attribute Names (p. 387).
The following AWS CLI example shows how to use a projection expression with a GetItem operation.
This projection expression retrieves a top-level scalar attribute (Description), the first element in a list
(RelatedItems[0]), and a list nested within a map (ProductReviews.FiveStar).
{
"Id": { "N": "123" }
}
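A boto3 equivalent of this GetItem call (a sketch) would be:
import boto3

dynamodb = boto3.client('dynamodb')

# Retrieve a scalar attribute, the first element of a list,
# and a list nested within a map.
response = dynamodb.get_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '123'}},
    ProjectionExpression='Description, RelatedItems[0], ProductReviews.FiveStar')
print(response['Item'])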
For programming language-specific code examples, see Getting Started with DynamoDB SDK (p. 74).
This section describes several situations in which you will need to use expression attribute names.
Note
The examples in this section use the AWS CLI. For programming language-specific code samples,
see Getting Started with DynamoDB SDK (p. 74).
Topics
• Reserved Words (p. 387)
• Attribute Names Containing Dots (p. 388)
• Nested Attributes (p. 388)
• Repeating Attribute Names (p. 389)
Reserved Words
On some occasions, you might need to write an expression containing an attribute name that conflicts
with a DynamoDB reserved word. (For a complete list of reserved words, see Reserved Words in
DynamoDB (p. 937).)
For example, the following AWS CLI example would fail because COMMENT is a reserved word:
To work around this, you can replace Comment with an expression attribute name such as #c. The #
(pound sign) is required and indicates that this is a placeholder for an attribute name. The AWS CLI
example would now look like this:
Note
If an attribute name begins with a number or contains a space, a special character, or a reserved
word, then you must use an expression attribute name to replace that attribute's name in the
expression.
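As an illustration of the workaround (a sketch; the table and key are taken from the ProductCatalog examples in this guide), the following boto3 call reads the Comment attribute by using #c as a placeholder:
import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.get_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '123'}},
    ProjectionExpression='#c',
    ExpressionAttributeNames={'#c': 'Comment'})  # #c stands in for the reserved word
print(response['Item'])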
Attribute Names Containing Dots
Suppose that you wanted to access the Safety.Warning attribute using a projection expression.
DynamoDB would return an empty result, rather than the expected string ("Always wear a helmet").
This is because DynamoDB interprets a dot in an expression as a document path separator. In this
case, you would need to define an expression attribute name (such as #sw) as a substitute for
Safety.Warning. You could then use the projection expression #sw instead.
Nested Attributes
Suppose that you wanted to access the nested attribute ProductReviews.OneStar, using the
following projection expression:
The result would contain all of the one-star product reviews, which is expected.
But what if you decided to use an expression attribute name instead? For example, what would happen if
you were to define #pr1star as a substitute for ProductReviews.OneStar?
DynamoDB would return an empty result, instead of the expected map of one-star reviews. This is
because DynamoDB interprets a dot in an expression attribute name as a character within an attribute's
name. When DynamoDB evaluates the expression attribute name #pr1star, it determines that
ProductReviews.OneStar refers to a scalar attribute—which is not what was intended.
The correct approach would be to define an expression attribute name for each element in the document
path:
• #pr — ProductReviews
• #1star — OneStar
To make this more concise, you can replace ProductReviews with an expression attribute name such as
#pr. The revised expression would now look like this:
If you define an expression attribute name, you must use it consistently throughout the entire
expression. Also, you cannot omit the # symbol.
Expression Attribute Values
Expression attribute values are substitutes for the actual values that you want to compare in an
expression: values that you might not know until runtime. An expression attribute value must begin with
a :, and be followed by one or more alphanumeric characters.
For example, suppose you wanted to return all of the ProductCatalog items that are available in Black
and cost 500 or less. You could use a Scan operation with a filter expression, as in this AWS CLI example:
{
":c": { "S": "Black" },
":p": { "N": "500" }
}
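A boto3 sketch using these expression attribute values might look like the following. The filter expression shown is one plausible way to express "available in Black and cost 500 or less"; the original command is not reproduced in this excerpt.
import boto3

dynamodb = boto3.client('dynamodb')

# Scan ProductCatalog for items whose Color set contains Black
# and whose Price is at most 500.
response = dynamodb.scan(
    TableName='ProductCatalog',
    FilterExpression='contains(Color, :c) and Price <= :p',
    ExpressionAttributeValues={
        ':c': {'S': 'Black'},
        ':p': {'N': '500'}
    })
print(response['Items'])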
Note
A Scan operation reads every item in a table; therefore, you should avoid using Scan with large
tables.
The filter expression is applied to the Scan results, and items that do not match the filter
expression are discarded.
If you define an expression attribute value, you must use it consistently throughout the entire expression.
Also, you cannot omit the : symbol.
Expression attribute values are used with condition expressions, update expressions, and filter
expressions.
Note
For programming language-specific code examples, see Getting Started with DynamoDB
SDK (p. 74).
Condition Expressions
To manipulate data in a DynamoDB table, you use the PutItem, UpdateItem and DeleteItem
operations. (You can also use BatchWriteItem to perform multiple PutItem or DeleteItem
operations in a single call.)
For these data manipulation operations, you can specify a condition expression to determine which items
should be modified. If the condition expression evaluates to true, the operation succeeds; otherwise, the
operation fails.
The following are some AWS CLI examples of using condition expressions. These examples are based
on the ProductCatalog table, which was introduced in Specifying Item Attributes (p. 383). The
partition key for this table is Id; there is no sort key. The following PutItem operation creates a sample
ProductCatalog item that we will refer to in the examples:
aws dynamodb put-item \
    --table-name ProductCatalog \
    --item file://item.json
The arguments for --item are stored in the file item.json. (For simplicity, only a few item attributes
are used.)
{
"Id": {"N": "456" },
"ProductCategory": {"S": "Sporting Goods" },
"Price": {"N": "650" }
}
Topics
• Preventing Overwrites of an Existing Item (p. 391)
• Checking for Attributes in an Item (p. 391)
• Conditional Deletes (p. 392)
• Conditional Updates (p. 393)
• Comparison Operator and Function Reference (p. 393)
If the condition expression evaluates to false, DynamoDB returns the following error message: The
conditional request failed
Note
For more information about attribute_not_exists and other functions, see Comparison
Operator and Function Reference (p. 393).
The following example uses attribute_not_exists to delete a product only if it does not have a
Price attribute:
DynamoDB also provides an attribute_exists function. The following example will delete a product
only if it has received poor reviews:
--condition-expression "attribute_exists(ProductReviews.OneStar)"
Note
For more information about attribute_not_exists, attribute_exists, and other
functions, see Comparison Operator and Function Reference (p. 393).
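A boto3 sketch of the attribute_not_exists delete described above (the key Id 456 comes from the sample item created earlier in this section):
import boto3

dynamodb = boto3.client('dynamodb')

# Delete the product only if it does not have a Price attribute.
dynamodb.delete_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '456'}},
    ConditionExpression='attribute_not_exists(Price)')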
Conditional Deletes
To perform a conditional delete, you use a DeleteItem operation with a condition expression. The
condition expression must evaluate to true in order for the operation to succeed; otherwise, the
operation fails.
{
"Id": {
"N": "456"
},
"Price": {
"N": "650"
},
"ProductCategory": {
"S": "Sporting Goods"
}
}
Now suppose that you wanted to delete the item, but only under the following conditions:
• The ProductCategory is either "Sporting Goods" or "Gardening Supplies".
• The Price is between 500 and 600.
{
":cat1": {"S": "Sporting Goods"},
":cat2": {"S": "Gardening Supplies"},
":lo": {"N": "500"},
":hi": {"N": "600"}
}
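A boto3 sketch using these values follows; the condition expression is a reasonable reconstruction of the conditions listed above, not a verbatim copy of the original command.
import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.delete_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '456'}},
    ConditionExpression=
        '(ProductCategory IN (:cat1, :cat2)) and (Price between :lo and :hi)',
    ExpressionAttributeValues={
        ':cat1': {'S': 'Sporting Goods'},
        ':cat2': {'S': 'Gardening Supplies'},
        ':lo': {'N': '500'},
        ':hi': {'N': '600'}
    })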
Note
In the condition expression, the : (colon character) indicates an expression attribute value—
placeholder for an actual value. For more information, see Expression Attribute Values (p. 389).
For more information about IN, AND, and other keywords, see Comparison Operator and
Function Reference (p. 393).
In this example, the ProductCategory comparison evaluates to true, but the Price comparison
evaluates to false. This causes the condition expression to evaluate to false, and the DeleteItem
operation to fail.
Conditional Updates
To perform a conditional update, you use an UpdateItem operation with a condition expression.
The condition expression must evaluate to true in order for the operation to succeed; otherwise, the
operation fails.
Note
UpdateItem also supports update expressions, where you specify the modifications you
want to make to an item. For more information, see Update
Expressions (p. 398).
Suppose that you started with the item shown in Condition Expressions (p. 390):
{
"Id": { "N": "456"},
"Price": {"N": "650"},
"ProductCategory": {"S": "Sporting Goods"}
}
The following example performs an UpdateItem operation. It attempts to reduce the Price of a
product by 75—but the condition expression prevents the update if the current Price is below 500:
{
":discount": { "N": "75"},
":limit": {"N": "500"}
}
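A boto3 sketch using these values:
import boto3

dynamodb = boto3.client('dynamodb')

# Reduce Price by 75, but only while the current Price exceeds 500.
dynamodb.update_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '456'}},
    UpdateExpression='SET Price = Price - :discount',
    ConditionExpression='Price > :limit',
    ExpressionAttributeValues={
        ':discount': {'N': '75'},
        ':limit': {'N': '500'}
    })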
If the starting Price is 650, then the UpdateItem operation reduces the Price to 575. If you run the
UpdateItem operation again, the Price is reduced to 500. If you run it a third time, the condition
expression evaluates to false, and the update fails.
Note
In the condition expression, the : (colon character) indicates an expression attribute value—
placeholder for an actual value. For more information, see Expression Attribute Values (p. 389).
For more information about ">" and other operators, see Comparison Operator and Function
Reference (p. 393).
Topics
• Syntax for Condition Expressions (p. 394)
• Making Comparisons (p. 394)
condition-expression ::=
operand comparator operand
| operand BETWEEN operand AND operand
| operand IN ( operand (',' operand (, ...) ))
| function
| condition AND condition
| condition OR condition
| NOT condition
| ( condition )
comparator ::=
=
| <>
| <
| <=
| >
| >=
function ::=
attribute_exists (path)
| attribute_not_exists (path)
| attribute_type (path, type)
| begins_with (path, substr)
| contains (path, operand)
| size (path)
Making Comparisons
Use these comparators to compare an operand against another operand:
• a = b — true if a is equal to b
• a <> b — true if a is not equal to b
• a < b — true if a is less than b
• a <= b — true if a is less than or equal to b
• a > b — true if a is greater than b
• a >= b — true if a is greater than or equal to b
Use the BETWEEN and IN keywords to compare an operand against a range of values, or an enumerated
list of values:
• a BETWEEN b AND c - true if a is greater than or equal to b, and less than or equal to c.
• a IN (b, c, d) — true if a is equal to any value in the list — for example, any of b, c or d. The list
can contain up to 100 values, separated by commas.
Functions
Use the following functions to determine whether an attribute exists in an item, or to evaluate the value
of an attribute. These function names are case-sensitive. For a nested attribute, you must provide its full
document path.

attribute_exists (path)
True if the item contains the attribute specified by path.
Example: Check whether an item in the ProductCatalog table has a side view picture.
• attribute_exists (Pictures.SideView)

attribute_not_exists (path)
True if the attribute specified by path does not exist in the item.
Example: Check whether an item has a Manufacturer attribute.
• attribute_not_exists (Manufacturer)

attribute_type (path, type)
True if the attribute at the specified path is of a particular data type. The type parameter must be an
expression attribute value that is one of the following:
• S — String
• SS — String Set
• N — Number
• NS — Number Set
• B — Binary
• BS — Binary Set
• BOOL — Boolean
• NULL — Null
• L — List
• M — Map
Example: Check whether the ProductReviews.FiveStar attribute is of a particular type, with :v_sub
set to the type string.
• attribute_type (ProductReviews.FiveStar, :v_sub)

begins_with (path, substr)
True if the attribute specified by path begins with a particular substring.
Example: Check whether the first few characters of the front view picture URL are http://.
• begins_with (Pictures.FrontView, :v_sub)

contains (path, operand)
True if the attribute specified by path is a string that contains the operand substring, or a set that
contains the operand element.

size (path)
Returns a number representing an attribute's size: the length of a string or binary value, or the number
of elements in a set, list, or map.
Example: Check whether the number of one-star reviews exceeds a threshold.
• size(ProductReviews.OneStar) > :v_sub
Logical Evaluations
Use the AND, OR and NOT keywords to perform logical evaluations. In the list following, a and b represent
conditions to be evaluated:
• a AND b — true if a and b are both true.
• a OR b — true if either a or b (or both) are true.
• NOT a — true if a is false; false if a is true.
Parentheses
Use parentheses to change the precedence of a logical evaluation. For example, suppose that conditions
a and b are true, and that condition c is false. The following expression evaluates to true:
• a OR b AND c
However, if you enclose a condition in parentheses, it is evaluated first. For example, the following
evaluates to false:
• (a OR b) AND c
Note
You can nest parentheses in an expression. The innermost ones are evaluated first.
Precedence in Conditions
DynamoDB evaluates conditions from left to right using the following precedence rules:
Update Expressions
To update an existing item in a table, you use the UpdateItem operation. You must provide the key of
the item you want to update. You must also provide an update expression, indicating the attributes you
want to modify and the values you want to assign to them.
An update expression specifies how UpdateItem will modify the attributes of an item—for example,
setting a scalar value, or removing elements from a list or a map.
update-expression ::=
[ SET action [, action] ... ]
[ REMOVE action [, action] ...]
[ ADD action [, action] ... ]
[ DELETE action [, action] ...]
An update expression consists of one or more clauses. Each clause begins with a SET, REMOVE, ADD or
DELETE keyword. You can include any of these clauses in an update expression, in any order. However,
each action keyword can appear only once.
Within each clause are one or more actions, separated by commas. Each action represents a data
modification.
The examples in this section are based on the ProductCatalog item shown in Projection
Expressions (p. 386).
Topics
• SET—Modifying or Adding Item Attributes (p. 399)
• REMOVE—Deleting Attributes From An Item (p. 404)
• ADD—Updating Numbers and Sets (p. 405)
• DELETE—Removing Elements From A Set (p. 406)
SET—Modifying or Adding Item Attributes
Use the SET action in an update expression to add one or more attributes to an item. If any of these
attributes already exists, it is overwritten by the new value. You can also use SET to add or subtract
from an attribute that is of type Number. To perform multiple SET actions, separate them by commas.
set-action ::=
path = value
value ::=
operand
| operand '+' operand
| operand '-' operand
operand ::=
path | function
The following PutItem operation creates a sample item that we will refer to in the examples:
aws dynamodb put-item \
    --table-name ProductCatalog \
    --item file://item.json
The arguments for --item are stored in the file item.json. (For simplicity, only a few item attributes
are used.)
{
"Id": {"N": "789"},
"ProductCategory": {"S": "Home Improvement"},
"Price": {"N": "52"},
"InStock": {"BOOL": true},
"Brand": {"S": "Acme"}
}
Topics
• Modifying Attributes (p. 400)
Modifying Attributes
Example
{
":c": { "S": "Hardware" },
":p": { "N": "60" }
}
Note
In the UpdateItem operation, --return-values ALL_NEW causes DynamoDB to return the
item as it appears after the update.
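A boto3 sketch of this update, using the values above (the key Id 789 comes from the sample item created earlier in this section):
import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.update_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '789'}},
    UpdateExpression='SET ProductCategory = :c, Price = :p',
    ExpressionAttributeValues={
        ':c': {'S': 'Hardware'},
        ':p': {'N': '60'}
    },
    ReturnValues='ALL_NEW')       # return the item as it appears after the update
print(response['Attributes'])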
Example
{
":ri": {
"L": [
{ "S": "Hammer" }
]
},
":pr": {
"M": {
"FiveStar": {
"L": [
{ "S": "Best product ever!" }
]
}
}
}
}
Example
Add a new element to the RelatedItems list. (Remember that list elements are zero-based, so [0]
represents the first element in the list, [1] represents the second, and so on.)
{
":ri": { "S": "Nails" }
}
Note
When you use SET to update a list element, the contents of that element are replaced with
the new data that you specify. If the element does not already exist, SET will append the new
element at the end of the list.
If you add multiple elements in a single SET operation, the elements are sorted in order by
element number.
Example
{
"#pr": "ProductReviews",
"#5star": "FiveStar",
"#3star": "ThreeStar"
}
{
":r5": { "S": "Very happy with my purchase" },
":r3": {
"L": [
{ "S": "Just OK - not that great" }
]
}
}
You can add to or subtract from an existing numeric attribute. To do this, use the + (plus) and - (minus)
operators.
Example
To increase the Price, you would use the + operator in the update expression.
You can add elements to the end of a list. To do this, use SET with the list_append function. (The
function name is case-sensitive.) The list_append function is specific to the SET action, and can only
be used in an update expression. The syntax is:
list_append (list1, list2)
The function takes two lists as input, and appends list2 to list1.
Example
In Adding Elements To a List (p. 401), we created the RelatedItems list and populated it with two
elements: Hammer and Nails. Now we will append two more elements to the end of RelatedItems:
{
":vals": {
"L": [
{ "S": "Screwdriver" },
{"S": "Hacksaw" }
]
}
}
Finally, we will append one more element to the beginning of RelatedItems. To do this, we will swap
the order of the list_append elements. (Remember that list_append takes two lists as input, and
appends the second list to the first.)
The resulting RelatedItems attribute now contains five elements, in the following order: Chisel,
Hammer, Nails, Screwdriver, Hacksaw.
If you want to avoid overwriting an existing attribute, you can use SET with the if_not_exists
function. (The function name is case-sensitive.) The if_not_exists function is specific to the SET
action, and can only be used in an update expression. The syntax is:
if_not_exists (path, value)
If the item does not contain an attribute at the specified path, then if_not_exists evaluates to
value; otherwise, it evaluates to path.
Example
Set the Price of an item, but only if the item does not already have a Price attribute. (If Price already
exists, nothing happens.)
The following is a syntax summary for REMOVE in an update expression. The only operand is the
document path for the attribute you want to remove:
remove-action ::=
path
Example
Remove some attributes from an item. (If the attributes do not exist, nothing happens.)
Example
In Appending Elements To a List (p. 402), we modified a list attribute (RelatedItems) so that it
contained five elements:
• [0]—Chisel
• [1]—Hammer
• [2]—Nails
• [3]—Screwdriver
• [4]—Hacksaw
The following AWS CLI example deletes Hammer and Nails from the list.
After removing Hammer and Nails, the remaining elements are shifted. The list now contains the
following:
• [0]—Chisel
• [1]—Screwdriver
• [2]—Hacksaw
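A boto3 sketch of the REMOVE operation described above (the key Id 789 comes from the sample item created earlier in this section):
import boto3

dynamodb = boto3.client('dynamodb')

# Remove Hammer ([1]) and Nails ([2]); the remaining elements shift down.
dynamodb.update_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '789'}},
    UpdateExpression='REMOVE RelatedItems[1], RelatedItems[2]')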
Use the ADD action in an update expression to add a new attribute and its value(s) to an item.
If the attribute already exists, then the behavior of ADD depends on the attribute's data type:
• If the attribute is a number, and the value you are adding is also a number, then the value is
mathematically added to the existing attribute. (If the value is a negative number, then it is subtracted
from the existing attribute.)
• If the attribute is a set, and the value you are adding is also a set, then the value is appended to the
existing set.
Note
The ADD action only supports number and set data types.
• The path element is the document path to an attribute. The attribute must be either a Number or a
set data type.
• The value element is a number that you want to add to the attribute (for Number data types), or a set
to append to the attribute (for set types).
add-action ::=
path value
Adding a Number
Assume that the QuantityOnHand attribute does not exist. The following AWS CLI example sets
QuantityOnHand to 5:
Now that QuantityOnHand exists, you can re-run the example to increment QuantityOnHand by 5
each time.
Adding Elements To a Set
Assume that the Color attribute does not exist. The following AWS CLI example sets Color to a string
set with two elements:
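A boto3 sketch of both ADD operations follows. The two Color elements shown are assumed for illustration; the original example does not name them in this excerpt.
import boto3

dynamodb = boto3.client('dynamodb')

# Set QuantityOnHand to 5; re-running this call adds 5 each time.
dynamodb.update_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '789'}},
    UpdateExpression='ADD QuantityOnHand :q',
    ExpressionAttributeValues={':q': {'N': '5'}})

# Create the Color string set with two elements (example values).
dynamodb.update_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '789'}},
    UpdateExpression='ADD Color :c',
    ExpressionAttributeValues={':c': {'SS': ['Orange', 'Purple']}})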
Use the DELETE action in an update expression to remove one or more elements from a set. To perform
multiple DELETE actions, separate them by commas.
• The path element is the document path to an attribute. The attribute must be a set data type.
• The subset is one or more elements that you want to delete from path. You
must specify subset as a set type.
delete-action ::=
path value
Example
In Adding Elements To a Set (p. 405), we created the Color string set. This example removes some of
the elements from that set:
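A boto3 sketch of the DELETE operation; the element removed here is one of the assumed values from the ADD example above.
import boto3

dynamodb = boto3.client('dynamodb')

# Remove one element from the Color string set.
dynamodb.update_item(
    TableName='ProductCatalog',
    Key={'Id': {'N': '789'}},
    UpdateExpression='DELETE Color :p',
    ExpressionAttributeValues={':p': {'SS': ['Orange']}})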
Time To Live
Time To Live (TTL) for DynamoDB allows you to define when items in a table expire so that they can be
automatically deleted from the database.
TTL is provided at no extra cost as a way to reduce storage usage and reduce the cost of storing
irrelevant data without using provisioned throughput. With TTL enabled on a table, you can set a
timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records
that are relevant.
TTL is useful if you have continuously accumulating data that loses relevance after a specific time period.
For example: session data, event logs, usage patterns, and other temporary data. If you have sensitive
data that must be retained only for a certain amount of time according to contractual or regulatory
obligations, TTL helps you ensure that it is removed promptly and as scheduled.
TTL compares the current time in epoch time format to the time stored in the Time To Live attribute of
an item. If the epoch time value stored in the attribute is less than the current time, the item is marked
as expired and subsequently deleted. This processing takes place automatically in the background and
does not affect read or write traffic to the table.
Note
The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1st, 1970
UTC.
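For example, the date used later in this section, Friday, April 29 2:00 PM UTC 2016, corresponds to the epoch time value 1461938400. On Linux you can verify such a conversion with the date utility (a sketch; GNU date is assumed):
date -u -d @1461938400
# Fri Apr 29 14:00:00 UTC 2016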
Important
DynamoDB typically deletes expired items within 48 hours of expiration. The exact duration
within which an item truly gets deleted after expiration is specific to the nature of the workload
and the size of the table. Items that have expired and not been deleted will still show up in
reads, queries, and scans. These items can still be updated, and successful updates to change or
remove the expiration attribute will be honored.
As items are deleted, they are removed from any Local Secondary Index and Global Secondary Index
in the same eventually consistent way as a standard delete operation.
For example, consider a table named SessionData that tracks the session history of users. Each
item in SessionData is identified by a partition key (UserName) and a sort key (SessionId). Additional
attributes like UserName, SessionId, CreationTime, and ExpirationTime track the session information. The
ExpirationTime attribute is set as the Time To Live (TTL) attribute (not all of the attributes are shown):
SessionData
In this example each item has an ExpirationTime attribute value set when it is created. Consider the first
record:
SessionData
In this example, the item CreationTime is set to Friday, April 29 12:00 PM UTC 2016 and the
ExpirationTime is set 2 hours later at Friday, April 29 2:00 PM UTC 2016. The item will expire when the
current time, in epoch format, is greater than the time in the ExpirationTime attribute. In this case, the
item with the key { UserName: user1, SessionId: 74686572652773 } will expire after 2:00 PM
(1461938400).
Note
Due to the potential delay between expiration and deletion time, you might get expired items
when you query for items. If you don’t want to view expired items when you issue a read
request, you should filter them out.
To do this, use a filter expression that returns only items where the Time To Live expiration value
is greater than the current time in epoch format. For more information, see Filter Expressions for
Query (p. 457) and Filter Expressions for Scan (p. 473).
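For example, a scan of the SessionData table described above could exclude expired sessions with a filter like the following sketch (the table and attribute names are taken from that example):
NOW=$(date +%s)
aws dynamodb scan \
    --table-name SessionData \
    --filter-expression "ExpirationTime > :now" \
    --expression-attribute-values '{":now": {"N": "'$NOW'"}}'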
Before you enable TTL on a table, consider the following:
• Ensure that any existing timestamp values in the specified Time To Live attribute are correct and in the
right format.
• Items with an expiration time greater than 5 years in the past are not deleted.
• If data recovery is a concern, we recommend that you back up your table.
• For a 24-hour recovery window, you can use Amazon DynamoDB Streams. For more information, see
DynamoDB Streams and Time To Live (p. 571).
• For a full backup, you can use on-demand backups or point in time recovery. For more information,
see On-Demand Backup and Restore for DynamoDB (p. 596) and Point-in-Time Recovery for
DynamoDB (p. 608).
• You can set Time To Live when creating a DynamoDB table using CloudFormation. For more
information, see AWS CloudFormation User Guide.
• You can use IAM policies to prevent unauthorized updates to the TTL attribute or configuration of
the Time To Live feature. If you only allow access to specified actions in your existing IAM policies,
ensure that your policies are updated to allow dynamodb:UpdateTimeToLive for roles that need to
enable or disable Time To Live on tables. For more information, see Using Identity-Based Policies (IAM
Policies) for Amazon DynamoDB (p. 719).
• Consider whether you need to do any post-processing of deleted items. The Streams records of TTL
deletes are marked and you can monitor them using an AWS Lambda function. For more information
on the additions to the Streams record, see DynamoDB Streams and Time To Live (p. 571).
Topics
• Enable Time To Live (console) (p. 409)
• Enable Time To Live (CLI) (p. 411)
4. In the Manage TTL dialog box, choose Enable TTL and then type the TTL attribute name.
• Enable TTL – Choose this to either enable or disable TTL on the table. It may take up to one hour
for the change to fully process.
• TTL Attribute – The name of the DynamoDB attribute to store the TTL timestamp for items.
• 24-hour backup streams – Choose this to enable Amazon DynamoDB Streams on the table. For
more information about how you can use DynamoDB Streams for backup, see DynamoDB Streams
and Time To Live (p. 571).
5. (Optional) To preview some of the items that will be deleted when TTL is enabled, choose Run
preview.
Warning
This provides you with a sample list of items. It does not provide you with a complete list of
items that will be deleted by TTL.
6. Choose Continue to save the settings and enable TTL.
Now that TTL is enabled, the TTL attribute is marked TTL when you view items in the DynamoDB
console.
You can view the date and time that an item will expire by hovering your mouse over the attribute.
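To enable TTL from the CLI, use the UpdateTimeToLive operation. A sketch, using the "TTLExample" table and a TTL attribute named ttl to match the example that follows:
aws dynamodb update-time-to-live --table-name "TTLExample" \
    --time-to-live-specification "Enabled=true, AttributeName=ttl"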
To add an item to the "TTLExample" table with the Time To Live attribute set using the BASH shell and
CLI:
# Compute an expiration time five days from now, in epoch time format (GNU date).
EXP=$(date -d '+5 days' +%s)
aws dynamodb put-item --table-name "TTLExample" --item '{"id": {"N": "1"}, "ttl": {"N":
"'$EXP'"}}'
This example starts with the current date and adds five days to it to create the expiration time. It then
converts the expiration time to epoch time format and adds the item to the "TTLExample" table.
Note
One way to set expiration values for Time To Live is to calculate the number of seconds to add
to the expiration time. For example, five days is 432000 seconds. However, it is often preferable
to start with a date and work from there.
It is fairly simple to get the current time in epoch time format. For example:
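On Linux, a minimal sketch using the date utility:
date +%s
This prints the current time as the number of seconds elapsed since the epoch, which is the format the Time To Live attribute expects.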
You can use the AWS SDK for Java Document API to perform typical create, read, update, and delete
(CRUD) operations on items in a table.
Note
The SDK for Java also provides an object persistence model, allowing you to map your client-
side classes to DynamoDB tables. This approach can reduce the amount of code you have to
write. For more information, see Java: DynamoDBMapper (p. 226).
The following sections describe Java snippets to perform several Java Document API item actions. To run
complete working examples instead, see:
• Example: CRUD Operations Using the AWS SDK for Java Document API (p. 421)
• Example: Batch Operations Using AWS SDK for Java Document API (p. 425)
• Example: Handling Binary Type Attributes Using the AWS SDK for Java Document API (p. 429)
Putting an Item
The putItem method stores an item in a table. If an item with the same primary key already exists, it
replaces the entire item. If you want to update only specific attributes instead of replacing the entire
item, you can use the updateItem method. For more information, see Updating an Item (p. 419).
The following Java code snippet demonstrates the preceding tasks. The snippet writes a new item to the
ProductCatalog table.
Example
In the preceding example, the item has attributes that are scalars (String, Number, Boolean, Null), sets
(String Set), and document types (List, Map).
• A ConditionExpression that defines the conditions for the request. The snippet defines the
condition that the existing item that has the same primary key is replaced only if it has an ISBN
attribute that equals a specific value.
• A map for ExpressionAttributeValues that will be used in the condition. In this case, there is
only one substitution required: The placeholder :val in the condition expression will be replaced at
runtime with the actual ISBN value to be checked.
The following example adds a new book item using these optional parameters.
Example
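For reference, the same kind of conditional write expressed with the AWS CLI might look like the following sketch (the item file and the ISBN value are assumptions for illustration):
aws dynamodb put-item \
    --table-name ProductCatalog \
    --item file://item.json \
    --condition-expression "ISBN = :val" \
    --expression-attribute-values '{":val": {"S": "444-4444444444"}}'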
Suppose that you wanted to store the following JSON document, containing vendors that can fulfill
orders for a particular product:
Example
{
    "V01": {
        "Name": "Acme Books",
        "Offices": [ "Seattle" ]
    },
    "V02": {
        "Name": "New Publishers, Inc.",
        "Offices": [ "London", "New York" ]
    },
    "V03": {
        "Name": "Better Buy Books",
        "Offices": [ "Tokyo", "Los Angeles", "Sydney" ]
    }
}
You can use the withJSON method to store this in the ProductCatalog table, in a Map attribute
named VendorInfo. The following Java code snippet demonstrates how to do this.
Getting an Item
To retrieve a single item, use the getItem method of a Table object. Follow these steps:
1. Create an instance of the DynamoDB class.
2. Create an instance of the Table class to represent the table you want to work with.
3. Call the getItem method of the Table instance. You must specify the primary key of the item that
you want to retrieve.
The following Java code snippet demonstrates the preceding steps. The code snippet gets the item that
has the specified partition key.
You can use a ProjectionExpression to retrieve only specific attributes or elements, rather than
an entire item. A ProjectionExpression can specify top-level or nested attributes, using document
paths. For more information, see Projection Expressions (p. 386).
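For reference, a retrieval with a projection expression looks like the following AWS CLI sketch; the key value (Id = 123) and the attribute paths match the sample output shown later in this section:
aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "123"}}' \
    --projection-expression "Id, Title, RelatedItems[0], Reviews.FiveStar"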
The parameters of the getItem method do not let you specify read consistency; however, you can create
a GetItemSpec, which provides full access to all of the inputs to the low-level GetItem operation. The
code example below creates a GetItemSpec, and uses that spec as input to the getItem method.
Example
To print an item in a human-readable format, use the toJSONPretty method:
System.out.println(item.toJSONPretty());
The output from the preceding example looks like this:
{
"RelatedItems" : [ 341 ],
"Reviews" : {
"FiveStar" : [ "Excellent! Can't recommend it highly enough! Buy it!", "Do yourself a
favor and buy this" ]
},
"Id" : 123,
"Title" : "20-Bicycle 123"
}
Note
You can use the toJSON method to convert any item (or its attributes) to a JSON-formatted
string. The following code snippet retrieves several top-level and nested attributes, and prints
the results as JSON:
{"VendorInfo":{"V01":{"Name":"Acme Books","Offices":
["Seattle"]}},"Price":30,"Title":"Book 210 Title"}
The following Java code snippet demonstrates the preceding steps. The example performs a
batchWriteItem operation on two tables - Forum and Thread. The corresponding TableWriteItems
objects define the following actions:
For a working example, see Example: Batch Write Operation Using the AWS SDK for Java Document
API (p. 425).
The following Java code snippet demonstrates the preceding steps. The example retrieves two items
from the Forum table and three items from the Thread table.
The following code snippet retrieves two items from the Forum table. The
withProjectionExpression parameter specifies that only the Threads attribute is to be retrieved.
Example
forumTableKeysAndAttributes.addHashOnlyPrimaryKeys("Name",
"Amazon S3",
"Amazon DynamoDB");
Updating an Item
The updateItem method of a Table object can update existing attribute values, add new attributes, or
delete attributes from an existing item.
• If an item does not exist (no item in the table with the specified primary key), updateItem adds a new
item to the table.
• If an item exists, updateItem performs the update as specified by the UpdateExpression
parameter.
Note
It is also possible to "update" an item using putItem. For example, if you call putItem to add
an item to the table, but there is already an item with the specified primary key, putItem will
replace the entire item. If there are attributes in the existing item that are not specified in the
input, putItem will remove those attributes from the item.
In general, we recommend that you use updateItem whenever you want to modify any item
attributes. The updateItem method will only modify the item attributes that you specify in the
input, and the other attributes in the item will remain unchanged.
1. Create an instance of the Table class to represent the table you want to work with.
2. Call the updateItem method of the Table instance. You must specify the primary key of the item
that you want to update, along with an UpdateExpression that describes the attributes to modify
and how to modify them.
The following Java code snippet demonstrates the preceding tasks. The snippet updates a book item
in the ProductCatalog table. It adds a new author to the set of Authors and deletes the existing ISBN
attribute. It also reduces the price by one.
Example
Atomic Counter
You can use updateItem to implement an atomic counter, where you increment or decrement the value
of an existing attribute without interfering with other write requests. To increment an atomic counter,
use an UpdateExpression with a set action to add a numeric value to an existing attribute of type
Number.
The following code snippet demonstrates this, incrementing the Quantity attribute by one. It also
demonstrates the use of the ExpressionAttributeNames parameter in an UpdateExpression.
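For reference, the same atomic increment expressed with the AWS CLI might look like the following sketch (the item key, Id = 121, is an assumption for illustration):
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "121"}}' \
    --update-expression "SET #q = #q + :incr" \
    --expression-attribute-names '{"#q": "Quantity"}' \
    --expression-attribute-values '{":incr": {"N": "1"}}' \
    --return-values UPDATED_NEW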
Deleting an Item
The deleteItem method deletes an item from a table. You must provide the primary key of the item
you want to delete.
Example
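For reference, a minimal delete expressed with the AWS CLI (the key value, Id = 101, is an assumption for illustration):
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "101"}}'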
Note
The SDK for Java also provides an object persistence model, allowing you to map your client-
side classes to DynamoDB tables. This approach can reduce the amount of code you have to
write. For more information, see Java: DynamoDBMapper (p. 226).
Note
This code sample assumes that you have already loaded data into DynamoDB for your account
by following the instructions in the Creating Tables and Loading Sample Data (p. 323) section.
For step-by-step instructions to run the following example, see Java Code Examples (p. 328).
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DeleteItemOutcome;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.UpdateItemOutcome;
import com.amazonaws.services.dynamodbv2.document.spec.DeleteItemSpec;
import com.amazonaws.services.dynamodbv2.document.spec.UpdateItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.NameMap;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.ReturnValue;
createItems();
retrieveItem();
}
catch (Exception e) {
System.err.println("Create items failed.");
System.err.println(e.getMessage());
}
}
try {
}
catch (Exception e) {
System.err.println("GetItem failed.");
System.err.println(e.getMessage());
}
try {
}
catch (Exception e) {
System.err.println("Failed to add new attribute in " + tableName);
System.err.println(e.getMessage());
}
}
try {
}
catch (Exception e) {
System.err.println("Failed to update multiple attributes in " + tableName);
System.err.println(e.getMessage());
}
}
try {
// Specify the desired price (25.00) and also the condition (price =
// 20.00)
}
catch (Exception e) {
System.err.println("Error updating item in " + tableName);
System.err.println(e.getMessage());
}
}
try {
}
catch (Exception e) {
System.err.println("Error deleting item in " + tableName);
System.err.println(e.getMessage());
}
}
}
This section provides examples of batch write and batch get operations using the AWS SDK for Java
Document API.
Note
The SDK for Java also provides an object persistence model, allowing you to map your client-
side classes to DynamoDB tables. This approach can reduce the amount of code you have to
write. For more information, see Java: DynamoDBMapper (p. 226).
Example: Batch Write Operation Using the AWS SDK for Java Document API
The following Java code example uses the batchWriteItem method to perform the following put and
delete operations:
You can specify any number of put and delete requests against one or more tables when creating your
batch write request. However, batchWriteItem limits the size of a batch write request and the number
of put and delete operations in a single batch write operation (no more than 25 put or delete requests,
with a total request size of no more than 16 MB). If your request exceeds these limits, your
request is rejected. If your table does not have sufficient provisioned throughput to serve this request,
the unprocessed request items are returned in the response.
The following example checks the response to see if it has any unprocessed request items. If it does,
it loops back and resends the batchWriteItem request with unprocessed items in the request.
If you followed the Creating Tables and Loading Sample Data (p. 323) section, you should already
have created the Forum and Thread tables. You can also create these tables and upload sample data
programmatically. For more information, see Creating Example Tables and Uploading Data Using the
AWS SDK for Java (p. 895).
For step-by-step instructions to test the following sample, see Java Code Examples (p. 328).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.BatchWriteItemOutcome;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.TableWriteItems;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;
writeMultipleItemsBatchWrite();
try {
    // The TableWriteItems variable names below are assumptions; their
    // construction was part of the complete example cited above.
    BatchWriteItemOutcome outcome = dynamoDB.batchWriteItem(forumTableWriteItems,
        threadTableWriteItems);
    do {
        // Any unprocessed items are returned in the outcome.
        Map<String, List<WriteRequest>> unprocessedItems = outcome.getUnprocessedItems();
if (outcome.getUnprocessedItems().size() == 0) {
System.out.println("No unprocessed items found");
}
else {
System.out.println("Retrieving the unprocessed items");
outcome = dynamoDB.batchWriteItemUnprocessed(unprocessedItems);
}
    } while (outcome.getUnprocessedItems().size() > 0);
}
catch (Exception e) {
System.err.println("Failed to retrieve items: ");
e.printStackTrace(System.err);
}
Example: Batch Get Operation Using the AWS SDK for Java Document API
The following Java code example uses the batchGetItem method to retrieve multiple items from the
Forum and the Thread tables. The BatchGetItemRequest specifies the table names and a list of keys
for each item to get. The example processes the response by printing the items retrieved.
Note
This code sample assumes that you have already loaded data into DynamoDB for your account
by following the instructions in the Creating Tables and Loading Sample Data (p. 323) section.
For step-by-step instructions to run the following example, see Java Code Examples (p. 328).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.BatchGetItemOutcome;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.TableKeysAndAttributes;
import com.amazonaws.services.dynamodbv2.model.KeysAndAttributes;
try {
BatchGetItemOutcome outcome =
dynamoDB.batchGetItem(forumTableKeysAndAttributes,
threadTableKeysAndAttributes);
Map<String, KeysAndAttributes> unprocessed = null;
do {
for (String tableName : outcome.getTableItems().keySet()) {
System.out.println("Items in table " + tableName);
List<Item> items = outcome.getTableItems().get(tableName);
for (Item item : items) {
System.out.println(item.toJSONPretty());
}
}
// Collect any unprocessed keys before deciding whether to retry.
unprocessed = outcome.getUnprocessedKeys();
if (unprocessed.isEmpty()) {
System.out.println("No unprocessed keys found");
}
else {
System.out.println("Retrieving the unprocessed keys");
outcome = dynamoDB.batchGetItemUnprocessed(unprocessed);
}
} while (!unprocessed.isEmpty());
}
catch (Exception e) {
System.err.println("Failed to retrieve items.");
System.err.println(e.getMessage());
}
If you followed the Creating Tables and Loading Sample Data (p. 323) section, you should already have
created the Reply table. You can also create this table programmatically. For more information, see
Creating Example Tables and Uploading Data Using the AWS SDK for Java (p. 895).
For step-by-step instructions to test the following sample, see Java Code Examples (p. 328).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.GetItemSpec;
dateFormatter.setTimeZone(TimeZone.getTimeZone("UTC"));
String replyDateTime = dateFormatter.format(new Date());
e.printStackTrace(System.err);
}
}
is.close();
baos.close();
bais.close();
return result;
}
}
You can use the AWS SDK for .NET low-level API to perform typical create, read, update, and delete
(CRUD) operations on an item in a table.
The following are the common steps that you follow to perform data CRUD operations using the .NET
low-level API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Provide the operation-specific parameters by creating the corresponding request object.
For example, use the PutItemRequest request object when uploading an item and use the
GetItemRequest request object when retrieving an existing item.
You can use the request object to provide both the required and optional parameters.
3. Execute the appropriate method provided by the client by passing in the request object that you
created in the preceding step.
Putting an Item
The PutItem method uploads an item to a table. If the item exists, it replaces the entire item.
Note
Instead of replacing the entire item, if you want to update only specific attributes, you can use
the UpdateItem method. For more information, see Updating an Item (p. 435).
The following are the steps to upload an item using the low-level .NET SDK API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the PutItemRequest class. To put an item, you must provide the table name
and the item.
3. Execute the PutItem method by providing the PutItemRequest object that you created in the
preceding step.
The following C# code snippet demonstrates the preceding steps. The example uploads an item to the
ProductCatalog table.
Example
In the preceding example, you upload a book item that has the Id, Title, ISBN, and Authors attributes.
Note that Id is a numeric type attribute, Authors is a string set, and all other attributes are of the string
type.
Example
};
var response = client.PutItem(request);
Getting an Item
The GetItem method retrieves an item.
Note
To retrieve multiple items you can use the BatchGetItem method. For more information, see
Batch Get: Getting Multiple Items (p. 440).
The following are the steps to retrieve an existing item using the low-level .NET SDK API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the GetItemRequest class. To get an item, you must provide the table name
and primary key of the item.
3. Execute the GetItem method by providing the GetItemRequest object that you created in the
preceding step.
The following C# code snippet demonstrates the preceding steps. The example retrieves an item from
the ProductCatalog table.
Example
Updating an Item
The UpdateItem method updates an existing item if it is present. You can use the UpdateItem
operation to update existing attribute values, add new attributes, or delete attributes from the existing
collection. If the item that has the specified primary key is not found, it adds a new item.
• If the item does not exist, UpdateItem adds a new item using the primary key that is specified in the
input.
• If the item exists, UpdateItem applies the updates as follows:
Note
The PutItem operation also can perform an update. For more information, see Putting an
Item (p. 433). For example, if you call PutItem to upload an item and the primary key exists,
the PutItem operation replaces the entire item. Note that, if there are attributes in the existing
item and those attributes are not specified in the input, the PutItem operation deletes those
attributes. However, UpdateItem only updates the specified input attributes; any other existing
attributes of that item remain unchanged.
The following are the steps to update an existing item using the low-level .NET SDK API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the UpdateItemRequest class. This is the request object in which you
describe all the updates, such as add attributes, update existing attributes, or delete attributes. To
delete an existing attribute, specify the attribute name with a null value.
3. Execute the UpdateItem method by providing the UpdateItemRequest object that you created in
the preceding step.
The following C# code snippet demonstrates the preceding steps. The example updates a book item in
the ProductCatalog table. It adds a new author to the Authors collection, and deletes the existing ISBN
attribute. It also reduces the price by one.
};
var response = client.UpdateItem(request);
Example
Atomic Counter
You can use UpdateItem to implement an atomic counter, where you increment or decrement the value
of an existing attribute without interfering with other write requests. To update an atomic counter, use
UpdateItem with an attribute of type Number in the UpdateExpression parameter.
The following code snippet demonstrates this, incrementing the Quantity attribute by one.
var request = new UpdateItemRequest
{
    TableName = tableName,
    // The item key was lost from this excerpt; Id = 121 is an assumption for illustration.
    Key = new Dictionary<string, AttributeValue>()
    {
        { "Id", new AttributeValue { N = "121" } }
    },
    ExpressionAttributeNames = new Dictionary<string, string>()
    {
        {"#Q", "Quantity"}
    },
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
    {
        {":incr", new AttributeValue { N = "1" }}
    },
    UpdateExpression = "SET #Q = #Q + :incr"
};
var response = client.UpdateItem(request);
Deleting an Item
The DeleteItem method deletes an item from a table.
The following are the steps to delete an item using the low-level .NET SDK API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the DeleteItemRequest class. To delete an item, the table name and the
item's primary key are required.
3. Execute the DeleteItem method by providing the DeleteItemRequest object that you created in
the preceding step.
Example
// Optional parameters.
ReturnValues = "ALL_OLD",
ExpressionAttributeNames = new Dictionary<string, string>()
{
{"#IP", "InPublication"}
},
ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
{
{":inpub",new AttributeValue {BOOL = false}}
},
ConditionExpression = "#IP = :inpub"
};
The following C# code snippet demonstrates the preceding steps. The example creates a
BatchWriteItemRequest to perform the following write operations:
For a working example, see Example: Batch Operations Using AWS SDK for .NET Low-Level API (p. 447).
The following are the steps to retrieve multiple items using the low-level .NET SDK API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the BatchGetItemRequest class. To retrieve multiple items, the table name
and a list of primary key values are required.
3. Execute the BatchGetItem method by providing the BatchGetItemRequest object that you
created in the preceding step.
4. Process the response. You should check whether there were any unprocessed keys, which could
happen if you reach the provisioned throughput limit or encounter some other transient error.
The following C# code snippet demonstrates the preceding steps. The example retrieves items from two
tables, Forum and Thread. The request specifies two items from the Forum table and three items from
the Thread table. The response includes items from both of the tables. The code shows how you can
process the response.
Example
Example: CRUD Operations Using the AWS SDK for .NET Low-
Level API
The following C# code example illustrates CRUD operations on an item. The example adds an item to the
ProductCatalog table, retrieves it, performs various updates, and finally deletes the item. If you followed
the steps in Creating Tables and Loading Sample Data (p. 323), you already have the ProductCatalog
table created. You can also create these sample tables programmatically. For more information, see
Creating Example Tables and Uploading Data Using the AWS SDK for .NET (p. 902).
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
using Amazon.SecurityToken;
namespace com.amazonaws.codesamples
{
class LowLevelItemCRUDExample
{
private static string tableName = "ProductCatalog";
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
// Delete item.
DeleteItem();
Console.WriteLine("To continue, press Enter");
Console.ReadLine();
}
catch (Exception e)
{
Console.WriteLine(e.Message);
Console.WriteLine("To continue, press Enter");
Console.ReadLine();
}
}
} }
},
// Perform the following updates:
// 1) Add two new authors to the list
// 2) Set a new attribute
// 3) Remove the ISBN attribute
ExpressionAttributeNames = new Dictionary<string, string>()
{
{"#A","Authors"},
{"#NA","NewAttribute"},
{"#I","ISBN"}
},
ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
{
{":auth",new AttributeValue {
SS = {"Author YY", "Author ZZ"}
}},
{":new",new AttributeValue {
S = "New Value"
}}
},
TableName = tableName,
ReturnValues = "ALL_NEW" // Give me all attributes of the updated item.
};
var response = client.UpdateItem(request);
TableName = tableName,
ReturnValues = "ALL_NEW" // Give me all attributes of the updated item.
};
var response = client.UpdateItem(request);
Console.WriteLine(
attributeName + " " +
(value.S == null ? "" : "S=[" + value.S + "]") +
(value.N == null ? "" : "N=[" + value.N + "]") +
(value.SS == null ? "" : "SS=[" + string.Join(",", value.SS.ToArray())
+ "]") +
(value.NS == null ? "" : "NS=[" + string.Join(",", value.NS.ToArray())
+ "]")
);
}
Console.WriteLine("************************************************");
}
}
This section provides examples of the batch write and batch get operations that DynamoDB supports.
Example: Batch Write Operation Using the AWS SDK for .NET Low-Level API
The following C# code example uses the BatchWriteItem method to perform the following put and
delete operations:
You can specify any number of put and delete requests against one or more tables when creating your
batch write request. However, DynamoDB BatchWriteItem limits the size of a batch write request and
the number of put and delete operations in a single batch write operation (no more than 25 put or
delete requests, with a total request size of no more than 16 MB). For more information, see
BatchWriteItem. If your request exceeds these limits, your request is rejected. If your table does not have
sufficient provisioned throughput to serve this request, the unprocessed request items are returned in
the response.
The following example checks the response to see if it has any unprocessed request items. If it does,
it loops back and resends the BatchWriteItem request with unprocessed items in the request. If
you followed the steps in Creating Tables and Loading Sample Data (p. 323), you already have the
Forum and Thread tables created. You can also create these sample tables and upload sample data
programmatically. For more information, see Creating Example Tables and Uploading Data Using the
AWS SDK for .NET (p. 902).
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class LowLevelBatchWrite
{
private static string table1Name = "Forum";
private static string table2Name = "Thread";
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
CallBatchWriteTillCompletion(request);
}
int callCount = 0;
BatchWriteItemResponse response;
do
{
Console.WriteLine("Making request");
response = client.BatchWriteItem(request);
callCount++;
// Collect any unprocessed items for the next iteration.
var unprocessed = response.UnprocessedItems;
Console.WriteLine("Unprocessed");
foreach (var unp in unprocessed)
{
Console.WriteLine("{0} - {1}", unp.Key, unp.Value.Count);
}
Console.WriteLine();
// For the next iteration, the request will have unprocessed items.
request.RequestItems = unprocessed;
} while (response.UnprocessedItems.Count > 0);
Example: Batch Get Operation Using the AWS SDK for .NET Low-Level API
The following C# code example uses the BatchGetItem method to retrieve multiple items from the
Forum and the Thread tables. The BatchGetItemRequest specifies the table names and a list of
primary keys for each table. The example processes the response by printing the items retrieved.
If you followed the steps in Creating Tables and Loading Sample Data (p. 323), you already have these
tables created with sample data. You can also create these sample tables and upload sample data
programmatically. For more information, see Creating Example Tables and Uploading Data Using the
AWS SDK for .NET (p. 902).
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class LowLevelBatchGet
{
private static string table1Name = "Forum";
private static string table2Name = "Thread";
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
BatchGetItemResponse response;
do
{
Console.WriteLine("Making request");
response = client.BatchGetItem(request);
// Carry any unprocessed keys over into the next request.
var unprocessedKeys = response.UnprocessedKeys;
request.RequestItems = unprocessedKeys;
} while (response.UnprocessedKeys.Count > 0);
}
Console.WriteLine(
attributeName + " " +
(value.S == null ? "" : "S=[" + value.S + "]") +
(value.N == null ? "" : "N=[" + value.N + "]") +
(value.SS == null ? "" : "SS=[" + string.Join(",", value.SS.ToArray())
+ "]") +
(value.NS == null ? "" : "NS=[" + string.Join(",", value.NS.ToArray())
+ "]")
);
}
Console.WriteLine("************************************************");
}
}
For illustration, the example uses the GZipStream class to compress a sample stream, assigns the result
to the ExtendedMessage attribute, and decompresses it when printing the attribute value.
If you followed the steps in Creating Tables and Loading Sample Data (p. 323), you already have the
Reply table created. You can also create these sample tables programmatically. For more information,
see Creating Example Tables and Uploading Data Using the AWS SDK for .NET (p. 902).
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class LowLevelItemBinaryExample
{
private static string tableName = "Reply";
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
try
{
CreateItem(replyIdPartitionKey, replyDateTimeSortKey);
RetrieveItem(replyIdPartitionKey, replyDateTimeSortKey);
// Delete item.
DeleteItem(replyIdPartitionKey, replyDateTimeSortKey);
Console.WriteLine("To continue, press Enter");
Console.ReadLine();
}
catch (AmazonDynamoDBException e) { Console.WriteLine(e.Message); }
catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
catch (Exception e) { Console.WriteLine(e.Message); }
}
{
TableName = tableName,
Item = new Dictionary<string, AttributeValue>()
{
{ "Id", new AttributeValue {
S = partitionKey
}},
{ "ReplyDateTime", new AttributeValue {
S = sortKey
}},
{ "Subject", new AttributeValue {
S = "Binary type "
}},
{ "Message", new AttributeValue {
S = "Some message about the binary type"
}},
{ "ExtendedMessage", new AttributeValue {
B = compressedMessage
}}
}
};
client.PutItem(request);
}
PrintItem(attributeList);
}
Console.WriteLine(
attributeName + " " +
(value.S == null ? "" : "S=[" + value.S + "]") +
(value.N == null ? "" : "N=[" + value.N + "]") +
(value.SS == null ? "" : "SS=[" + string.Join(",", value.SS.ToArray())
+ "]") +
(value.NS == null ? "" : "NS=[" + string.Join(",", value.NS.ToArray())
+ "]") +
(value.B == null ? "" : "B=[" + FromGzipMemoryStream(value.B) + "]")
);
}
Console.WriteLine("************************************************");
}
The Query operation finds items based on primary key values. You can query any table or secondary
index that has a composite primary key (a partition key and a sort key).
You must provide the name of the partition key attribute, and a single value for that attribute. Query
will return all of the items with that partition key value. You can optionally provide a sort key attribute,
and use a comparison operator to refine the search results.
You must specify the partition key name and value as an equality condition.
You can optionally provide a second condition for the sort key (if present). The sort key condition must
use one of the following comparison operators:
• a = b — true if the attribute a is equal to the value b
• a < b — true if a is less than b
• a <= b — true if a is less than or equal to b
• a > b — true if a is greater than b
• a >= b — true if a is greater than or equal to b
• a BETWEEN b AND c — true if a is greater than or equal to b, and less than or equal to c
• begins_with (a, substr) — true if the value of attribute a begins with a particular substring
The following AWS CLI examples demonstrate the use of key condition expressions. Note that these
expressions use placeholders (such as :name and :sub) instead of actual values. For more information,
see Expression Attribute Names (p. 387) and Expression Attribute Values (p. 389).
Example
Query the Thread table for a particular ForumName (partition key). All of the items with that
ForumName value will be read by the query, because the sort key (Subject) is not included in
KeyConditionExpression.
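A sketch of the command (the ForumName value is taken from the next example's value file):
aws dynamodb query \
    --table-name Thread \
    --key-condition-expression "ForumName = :name" \
    --expression-attribute-values '{":name": {"S": "Amazon DynamoDB"}}'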
Example
Query the Thread table for a particular ForumName (partition key), but this time return only the items
with a given Subject (sort key).
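A sketch of the command, with the placeholder values supplied from the file values.json shown below:
aws dynamodb query \
    --table-name Thread \
    --key-condition-expression "ForumName = :name and Subject = :sub" \
    --expression-attribute-values file://values.json
where values.json contains the following: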
{
":name":{"S":"Amazon DynamoDB"},
":sub":{"S":"DynamoDB Thread 1"}
}
Example
Query the Reply table for a particular Id (partition key), but return only those items whose ReplyDateTime
(sort key) begins with certain characters.
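A sketch of the command, with the placeholder values supplied from the file values.json shown below:
aws dynamodb query \
    --table-name Reply \
    --key-condition-expression "Id = :id and begins_with(ReplyDateTime, :dt)" \
    --expression-attribute-values file://values.json
where values.json contains the following: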
{
":id":{"S":"Amazon DynamoDB#DynamoDB Thread 1"},
":dt":{"S":"2015-09"}
}
You can use any attribute name in a key condition expression, provided that the first character
is a-z or A-Z and the second character (if present) is a-z, A-Z, or 0-9. In addition, the attribute
name must not be a DynamoDB reserved word. (For a complete list of these, see Reserved Words
in DynamoDB (p. 937).) If an attribute name does not meet these requirements, you will need to
define an expression attribute name as a placeholder. For more information, see Expression Attribute
Names (p. 387).
For items with a given partition key value, DynamoDB stores these items close together, in sorted
order by sort key value. In a Query operation, DynamoDB retrieves the items in sorted order, and then
processes the items using KeyConditionExpression and any FilterExpression that might be
present. Only then are the Query results sent back to the client.
A Query operation always returns a result set. If no matching items are found, the result set will be
empty.
Query results are always sorted by the sort key value. If the data type of the sort key is Number, the
results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By
default, the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to
false.
A single Query operation can retrieve a maximum of 1 MB of data. This limit applies before any
FilterExpression is applied to the results. If LastEvaluatedKey is present in the response and is
non-null, you will need to paginate the result set (see Paginating the Results (p. 459)).
A filter expression is applied after a Query finishes, but before the results are returned. Therefore, a
Query will consume the same amount of read capacity, regardless of whether a filter expression is
present.
A Query operation can retrieve a maximum of 1 MB of data. This limit applies before the filter
expression is evaluated.
A filter expression cannot contain partition key or sort key attributes. You need to specify those
attributes in the key condition expression, not the filter expression.
The syntax for a filter expression is identical to that of a condition expression. Filter expressions can use
the same comparators, functions, and logical operators as a condition expression. For more information,
see Condition Expressions (p. 390).
Example
The following AWS CLI example queries the Thread table for a particular ForumName (partition key) and
Subject (sort key). Of the items that are found, only the most popular discussion threads are returned—in
other words, only those threads with more than a certain number of Views.
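A sketch of the command (the exact key condition is an assumption; the placeholders match the value file shown below):
aws dynamodb query \
    --table-name Thread \
    --key-condition-expression "ForumName = :fn" \
    --filter-expression "#v >= :num" \
    --expression-attribute-names '{"#v": "Views"}' \
    --expression-attribute-values file://values.json
where values.json contains the following: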
{
":fn":{"S":"Amazon DynamoDB#DynamoDB Thread 1"},
":num":{"N":"3"}
}
Note that Views is a reserved word in DynamoDB (see Reserved Words in DynamoDB (p. 937)), so this
example uses #v as a placeholder. For more information, see Expression Attribute Names (p. 387).
Note
A filter expression removes items from the Query result set. If possible, avoid using Query
where you expect to retrieve a large number of items, but also need to discard most of those
items.
For example, suppose you Query a table, with a Limit value of 6, and without a filter expression. The
Query result will contain the first six items from the table that match the key condition expression from
the request.
Now suppose you add a filter expression to the Query. In this case, DynamoDB will apply the filter
expression to the six items that were returned, discarding those that do not match. The final Query
result will contain 6 items or fewer, depending on the number of items that were filtered.
A single Query will only return a result set that fits within the 1 MB size limit. To determine whether
there are more results, and to retrieve them one page at a time, applications should do the following:
1. Examine the low-level Query result. If the result contains a LastEvaluatedKey element, proceed
to step 2.
2. Construct a new Query request, with the same parameters as the previous one, but use the
LastEvaluatedKey value from step 1 as the ExclusiveStartKey parameter in the new request.
3. Run the new Query request, and repeat from step 1 until LastEvaluatedKey is no longer present
in the response.
In other words, the LastEvaluatedKey from a Query response should be used as the
ExclusiveStartKey for the next Query request. If there is not a LastEvaluatedKey element in a
Query response, then you have retrieved the final page of results. (The absence of LastEvaluatedKey
is the only way to know that you have reached the end of the result set.)
You can use the AWS CLI to view this behavior. The CLI sends low-level Query requests to DynamoDB,
repeatedly, until LastEvaluatedKey is no longer present in the results. Consider the following AWS CLI
example that retrieves movie titles from a particular year:
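A sketch of such a command, assuming the Movies table from the Amazon DynamoDB Getting Started Guide and the year 1993:
aws dynamodb query --table-name Movies \
    --projection-expression "title" \
    --key-condition-expression "#y = :yyyy" \
    --expression-attribute-names '{"#y": "year"}' \
    --expression-attribute-values '{":yyyy": {"N": "1993"}}' \
    --page-size 5 \
    --debug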
Ordinarily, the AWS CLI handles pagination automatically; however, in this example, the CLI's --
page-size parameter limits the number of items per page. The --debug parameter prints low-level
information about requests and responses.
If you run the example, the first response from DynamoDB looks similar to this:
The LastEvaluatedKey in the response indicates that not all of the items have been retrieved. The
AWS CLI will then issue another Query request to DynamoDB. This request and response pattern
continues, until the final response:
The absence of LastEvaluatedKey indicates that there are no more items to retrieve.
Note
The AWS SDKs handle the low-level DynamoDB responses (including the presence or absence
of LastEvaluatedKey), and provide various abstractions for paginating Query results. For
example, the SDK for Java document interface provides java.util.Iterator support, so that
you can walk through the results one at a time.
For code examples in various programming languages, see the Amazon DynamoDB Getting
Started Guide and the AWS SDK documentation for your language.
A Query response includes the following item counts:
• ScannedCount — the number of items that matched the key condition expression, before a filter
expression (if present) was applied.
• Count — the number of items that remain, after a filter expression (if present) was applied.
Note
If you do not use a filter expression, then ScannedCount and Count will have the same value.
If the size of the Query result set is larger than 1 MB, then ScannedCount and Count will represent
only a partial count of the total items. You will need to perform multiple Query operations in order to
retrieve all of the results (see Paginating the Results (p. 459)).
Each Query response will contain the ScannedCount and Count for the items that were processed by
that particular Query request. To obtain grand totals for all of the Query requests, you could keep a
running tally of both ScannedCount and Count.
By default, a Query operation does not return any data on how much read capacity it consumes.
However, you can specify the ReturnConsumedCapacity parameter in a Query request to obtain this
information. The following are the valid settings for ReturnConsumedCapacity:
• NONE — no consumed capacity data is returned. (This is the default.)
• TOTAL — the response includes the aggregate number of read capacity units consumed.
• INDEXES — the response shows the aggregate number of read capacity units consumed, together
with the consumed capacity for each table and index that was accessed.
DynamoDB calculates the number of read capacity units consumed based on item size, not on the
amount of data that is returned to an application. For this reason, the number of capacity units
consumed will be the same whether you request all of the attributes (the default behavior) or just some
of them (using a projection expression). The number will also be the same whether or not you use a filter
expression.
If you require strongly consistent reads, set the ConsistentRead parameter to true in the Query
request.
The following are the steps to query a table using the AWS SDK for Java Document API.
1. Create an instance of the DynamoDB class.
2. Create an instance of the Table class to represent the table you want to work with.
3. Call the query method of the Table instance, providing the query criteria.
The response includes an ItemCollection object that provides all items returned by the query.
The following Java code snippet demonstrates the preceding tasks. The snippet assumes you have a
Reply table that stores replies for forum threads. For more information, see Creating Tables and Loading
Sample Data (p. 323).
Each forum thread has a unique ID and can have zero or more replies. Therefore, the Id attribute
of the Reply table is composed of both the forum name and forum subject. Id (partition key) and
ReplyDateTime (sort key) make up the composite primary key for the table.
The following query retrieves all replies for a specific thread subject. The query requires both the table
name and the Subject value.
Example
while (iterator.hasNext()) {
item = iterator.next();
System.out.println(item.toJSONPretty());
}
The following Java code snippet retrieves forum thread replies posted in the past 15 days. The snippet
specifies optional parameters using:
• A KeyConditionExpression to retrieve the replies from a specific discussion forum (partition key)
and, within that set of items, replies that were posted within the last 15 days (sort key).
• A FilterExpression to return only the replies from a specific user. The filter is applied after the
query is processed, but before the results are returned to the user.
• A ValueMap to define the actual values for the KeyConditionExpression placeholders.
• A ConsistentRead setting of true, to request a strongly consistent read.
This snippet uses a QuerySpec object which gives access to all of the low-level Query input parameters.
Example
You can also optionally limit the number of items per page by using the withMaxPageSize method.
Each time you call the query method, you get an ItemCollection that contains the resulting items.
You can then step through the results, processing one page at a time, until there are no more pages.
The following Java code snippet modifies the query specification shown above. This time, the query spec
uses the withMaxPageSize method. The Page class provides an Iterator that allows the code to process
the items on each page.
Example
spec.withMaxPageSize(10);
Example
In this Java code example, you execute variations of finding replies for a thread 'DynamoDB Thread 1' in
forum 'DynamoDB'.
Both of the preceding queries show how you can specify sort key conditions to narrow the query
results and use other optional query parameters.
Note
This code sample assumes that you have already loaded data into DynamoDB for your account
by following the instructions in the Creating Tables and Loading Sample Data (p. 323) section.
For step-by-step instructions to run the following example, see Java Code Examples (p. 328).
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Iterator;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.Page;
import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
findRepliesForAThread(forumName, threadSubject);
findRepliesForAThreadSpecifyOptionalLimit(forumName, threadSubject);
findRepliesInLast15DaysWithConfig(forumName, threadSubject);
findRepliesPostedWithinTimePeriod(forumName, threadSubject);
findRepliesUsingAFilterExpression(forumName, threadSubject);
}
System.out.println("\nfindRepliesForAThread results:");
System.out.println("\nfindRepliesForAThreadSpecifyOptionalLimit results:");
System.out.println("\nfindRepliesInLast15DaysWithConfig results:");
Iterator<Item> iterator = items.iterator();
while (iterator.hasNext()) {
System.out.println(iterator.next().toJSONPretty());
}
System.out.println("\nfindRepliesPostedWithinTimePeriod results:");
Iterator<Item> iterator = items.iterator();
while (iterator.hasNext()) {
System.out.println(iterator.next().toJSONPretty());
}
}
System.out.println("\nfindRepliesUsingAFilterExpression results:");
Iterator<Item> iterator = items.iterator();
while (iterator.hasNext()) {
System.out.println(iterator.next().toJSONPretty());
}
}
The following are the steps to query a table using the low-level .NET SDK API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the QueryRequest class and provide the query operation parameters.
3. Execute the Query method and provide the QueryRequest object that you created in the preceding
step.
The response includes the QueryResult object that provides all items returned by the query.
The following C# code snippet demonstrates the preceding tasks. The snippet assumes you have a Reply
table that stores replies for forum threads. For more information, see Creating Tables and Loading
Sample Data (p. 323).
Example
Each forum thread has a unique ID and can have zero or more replies. Therefore, the primary key is
composed of both the Id (partition key) and ReplyDateTime (sort key).
The following query retrieves all replies for a specific thread subject. The query requires both the table
name and the Subject value.
Example
The following C# code snippet retrieves forum thread replies posted in the past 15 days. The snippet
specifies the following optional parameters:
Example
You can also optionally limit the page size, or the number of items per page, by adding the optional
Limit parameter. Each time you execute the Query method, you get one page of results that has the
specified number of items. To fetch the next page, you execute the Query method again by providing
the primary key value of the last item in the previous page so that the method can return the next set
of items. You provide this information in the request by setting the ExclusiveStartKey property.
Initially, this property can be null. To retrieve subsequent pages, you must update this property value to
the primary key of the last item in the preceding page.
The following C# code snippet queries the Reply table. In the request, it specifies the Limit and
ExclusiveStartKey optional parameters. The do/while loop continues to retrieve one page at a time
until LastEvaluatedKey returns a null value.
Example
do
{
var request = new QueryRequest
{
TableName = "Reply",
KeyConditionExpression = "Id = :v_Id",
ExpressionAttributeValues = new Dictionary<string, AttributeValue> {
{":v_Id", new AttributeValue { S = "Amazon DynamoDB#DynamoDB Thread 2" }}
},
// Optional parameters.
Limit = 1,
ExclusiveStartKey = lastKeyEvaluated
};
lastKeyEvaluated = response.LastEvaluatedKey;
Example
In this C# code example, you execute variations of finding replies for a thread "DynamoDB Thread 1" in
forum "DynamoDB".
This function illustrates the use of pagination to process a multipage result. Amazon DynamoDB has
a page size limit, and if your result exceeds the page size, you get only the first page of results. This
coding pattern ensures that your code processes all the pages in the query result.
• Find replies in the last 15 days.
• Find replies in a specific date range.
Both of the preceding queries show how you can specify sort key conditions to narrow the query
results and use other optional query parameters.
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
using Amazon.Util;
namespace com.amazonaws.codesamples
{
    class LowLevelQuery
    {
        private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        static void Main(string[] args)
        {
            // Illustrative values from the guide's sample Reply table.
            string forumName = "Amazon DynamoDB";
            string threadSubject = "DynamoDB Thread 1";

            FindRepliesForAThread(forumName, threadSubject);
            FindRepliesForAThreadSpecifyOptionalLimit(forumName, threadSubject);
            FindRepliesInLast15DaysWithConfig(forumName, threadSubject);
            FindRepliesPostedWithinTimePeriod(forumName, threadSubject);

            Console.ReadLine();
        }
            // Optional parameters.
            ProjectionExpression = "Id, ReplyDateTime, PostedBy",
            ConsistentRead = true
        };

        },
        // The Reply table has only a few sample items, so the page size is smaller.
        Limit = 2,
        ExclusiveStartKey = lastKeyEvaluated
    };
Console.ReadLine();
}
        Console.WriteLine(
            attributeName + " " +
            (value.S == null ? "" : "S=[" + value.S + "]") +
            (value.N == null ? "" : "N=[" + value.N + "]") +
            (value.SS == null ? "" : "SS=[" + string.Join(",", value.SS.ToArray()) + "]") +
            (value.NS == null ? "" : "NS=[" + string.Join(",", value.NS.ToArray()) + "]")
        );
A Scan operation reads every item in a table or a secondary index. By default, a Scan operation returns
all of the data attributes for every item in the table or index. You can use the ProjectionExpression
parameter so that Scan only returns some of the attributes, rather than all of them.
Scan always returns a result set. If no matching items are found, the result set will be empty.
A single Scan request can retrieve a maximum of 1 MB of data; DynamoDB can optionally apply a filter
expression to this data, narrowing the results before they are returned to the user.
A filter expression is applied after a Scan finishes, but before the results are returned. Therefore, a Scan
will consume the same amount of read capacity, regardless of whether a filter expression is present.
This 1 MB limit applies before any filter expression is evaluated.
With Scan, you can specify any attributes in a filter expression—including partition key and sort key
attributes.
The syntax for a filter expression is identical to that of a condition expression. Filter expressions can use
the same comparators, functions, and logical operators as a condition expression. For more information,
see Condition Expressions (p. 390).
Example
The following AWS CLI example scans the Thread table and returns only the items that were last posted
to by a particular user.
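A representative command follows (a sketch; the LastPostedBy attribute name and the user value are
taken from the sample Thread table and may differ from your data):

aws dynamodb scan \
    --table-name Thread \
    --filter-expression "LastPostedBy = :name" \
    --expression-attribute-values '{":name":{"S":"User A"}}'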
For example, suppose that you Scan a table with a Limit value of 6 and without a filter expression. The
Scan result will contain the first six items from the table.
Now suppose you add a filter expression to the Scan. In this case, DynamoDB will apply the filter
expression to the six items that were returned, discarding those that do not match. The final Scan result
will contain 6 items or fewer, depending on the number of items that were filtered.
A single Scan will only return a result set that fits within the 1 MB size limit. To determine whether there
are more results, and to retrieve them one page at a time, an application should examine each Scan
response for a LastEvaluatedKey element and use it in the next request.
In other words, the LastEvaluatedKey from a Scan response should be used as the
ExclusiveStartKey for the next Scan request. If there is not a LastEvaluatedKey element in a
Scan response, then you have retrieved the final page of results. (The absence of LastEvaluatedKey is
the only way to know that you have reached the end of the result set.)
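The pattern looks like the following minimal sketch with the low-level AWS SDK for Java API (this is
illustrative, not one of the guide's samples; the Movies table name matches the AWS CLI example that
follows, and the page size is arbitrary):

import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

public class ScanPaginationSketch {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        Map<String, AttributeValue> lastKeyEvaluated = null;
        do {
            ScanRequest request = new ScanRequest()
                .withTableName("Movies")
                .withLimit(100)                           // page size
                .withExclusiveStartKey(lastKeyEvaluated); // null on the first request

            ScanResult result = client.scan(request);
            for (Map<String, AttributeValue> item : result.getItems()) {
                System.out.println(item);
            }

            // getLastEvaluatedKey() returns null once the final page is retrieved.
            lastKeyEvaluated = result.getLastEvaluatedKey();
        } while (lastKeyEvaluated != null);
    }
}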
You can use the AWS CLI to view this behavior. The CLI sends low-level Scan requests to DynamoDB,
repeatedly, until LastEvaluatedKey is no longer present in the results. Consider the following AWS CLI
example that scans the entire Movies table but returns only the movies from a particular genre:
aws dynamodb scan \
    --table-name Movies \
    --projection-expression "title" \
    --filter-expression 'contains(info.genres,:gen)' \
    --expression-attribute-values '{":gen":{"S":"Sci-Fi"}}' \
    --page-size 100 \
    --debug
Ordinarily, the AWS CLI handles pagination automatically; however, in this example, the CLI's --
page-size parameter limits the number of items per page. The --debug parameter prints low-level
information about requests and responses.
If you run the example, the first response from DynamoDB includes a LastEvaluatedKey element,
indicating that not all of the items have been retrieved. The AWS CLI then issues another Scan request
to DynamoDB. This request and response pattern continues until the final response, in which
LastEvaluatedKey is absent, indicating that there are no more items to retrieve.
Note
The AWS SDKs handle the low-level DynamoDB responses (including the presence or absence
of LastEvaluatedKey), and provide various abstractions for paginating Scan results. For
example, the SDK for Java document interface provides java.util.Iterator support, so that
you can walk through the results one at a time.
For code examples in various programming languages, see the Amazon DynamoDB Getting
Started Guide and the AWS SDK documentation for your language.
In addition to the items that match your criteria, the Scan response contains the following elements:
• ScannedCount — the number of items evaluated, before any ScanFilter is applied. A high
ScannedCount value with few, or no, Count results indicates an inefficient Scan operation.
• Count — the number of items that remain, after a filter expression (if present) was applied.
Note
If you do not use a filter expression, then ScannedCount and Count will have the same value.
If the size of the Scan result set is larger than 1 MB, then ScannedCount and Count will represent only
a partial count of the total items. You will need to perform multiple Scan operations in order to retrieve
all of the results (see Paginating the Results (p. 474)).
Each Scan response will contain the ScannedCount and Count for the items that were processed
by that particular Scan request. To obtain grand totals for all of the Scan requests, you could keep a
running tally of both ScannedCount and Count.
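For example, a running tally can be kept alongside the pagination loop, as in this sketch with the
low-level Java API (the Thread table and LastPostedBy filter are placeholders from the sample data,
not the guide's own code):

import java.util.Collections;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

public class ScanCountTally {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        int totalCount = 0;        // items remaining after the filter
        int totalScannedCount = 0; // items evaluated by the Scan
        Map<String, AttributeValue> lastKeyEvaluated = null;
        do {
            ScanResult result = client.scan(new ScanRequest()
                .withTableName("Thread")
                .withFilterExpression("LastPostedBy = :name")
                .withExpressionAttributeValues(Collections.singletonMap(
                    ":name", new AttributeValue().withS("User A")))
                .withExclusiveStartKey(lastKeyEvaluated));

            totalCount += result.getCount();
            totalScannedCount += result.getScannedCount();
            lastKeyEvaluated = result.getLastEvaluatedKey();
        } while (lastKeyEvaluated != null);

        System.out.println("Count: " + totalCount + ", ScannedCount: " + totalScannedCount);
    }
}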
By default, a Scan operation does not return any data on how much read capacity it consumes. However,
you can specify the ReturnConsumedCapacity parameter in a Scan request to obtain this information.
The following are the valid settings for ReturnConsumedCapacity:
• NONE — No consumed capacity data is returned. (This is the default.)
• TOTAL — The response includes the aggregate number of read capacity units consumed.
• INDEXES — The response shows the aggregate number of read capacity units consumed, together
with the consumed capacity for each table and index that was accessed.
DynamoDB calculates the number of read capacity units consumed based on item size, not on the
amount of data that is returned to an application. For this reason, the number of capacity units
consumed will be the same whether you request all of the attributes (the default behavior) or just some
of them (using a projection expression). The number will also be the same whether or not you use a filter
expression.
If you require strongly consistent reads, as of the time that the Scan begins, set the ConsistentRead
parameter to true in the Scan request. This will ensure that all of the write operations that completed
before the Scan began will be included in the Scan response.
Setting ConsistentRead to true can be useful in table backup or replication scenarios, in conjunction
with DynamoDB Streams: You first use Scan with ConsistentRead set to true, in order to obtain a
consistent copy of the data in the table. During the Scan, DynamoDB Streams records any additional
write activity that occurs on the table. After the Scan completes, you can apply the write activity from
the stream to the table.
Note
A Scan operation with ConsistentRead set to true consumes twice as many read capacity
units as one with ConsistentRead at its default value (false).
Parallel Scan
By default, the Scan operation processes data sequentially. DynamoDB returns data to the application in
1 MB increments, and an application performs additional Scan operations to retrieve the next 1 MB of
data.
The larger the table or index being scanned, the more time the Scan will take to complete. In addition, a
sequential Scan might not always be able to fully utilize the provisioned read throughput capacity: Even
though DynamoDB distributes a large table's data across multiple physical partitions, a Scan operation
can only read one partition at a time. For this reason, the throughput of a Scan is constrained by the
maximum throughput of a single partition.
To address these issues, the Scan operation can logically divide a table or secondary index into multiple
segments, with multiple application workers scanning the segments in parallel. Each worker can be a
thread (in programming languages that support multithreading) or an operating system process. To
perform a parallel scan, each worker issues its own Scan request with the following parameters:
• Segment — A segment to be scanned by a particular worker. Each worker should use a different value
for Segment.
• TotalSegments — The total number of segments for the parallel scan. This value must be the same
as the number of workers that your application will use.
The following diagram shows how a multithreaded application performs a parallel Scan with three
degrees of parallelism:
In this diagram, the application spawns three threads and assigns each thread a number. (Segments
are zero-based, so the first number is always 0.) Each thread issues a Scan request, setting Segment
to its designated number and setting TotalSegments to 3. Each thread scans its designated segment,
retrieving data 1 MB at a time, and returns the data to the application's main thread.
The values for Segment and TotalSegments apply to individual Scan requests, and you can use
different values at any time. You might need to experiment with these values, and the number of
workers you use, until your application achieves its best performance.
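As a rough sketch (not the guide's complete sample, which appears later in this section), each worker's
request might look like the following, using the low-level Java API; the table name is illustrative:

import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

public class ParallelScanWorker implements Runnable {
    private static final AmazonDynamoDB client =
        AmazonDynamoDBClientBuilder.standard().build();

    private final int segment;       // zero-based segment assigned to this worker
    private final int totalSegments; // identical for all workers

    public ParallelScanWorker(int segment, int totalSegments) {
        this.segment = segment;
        this.totalSegments = totalSegments;
    }

    @Override
    public void run() {
        Map<String, AttributeValue> lastKeyEvaluated = null;
        do {
            ScanRequest request = new ScanRequest()
                .withTableName("ProductCatalog") // illustrative table name
                .withSegment(segment)
                .withTotalSegments(totalSegments)
                .withExclusiveStartKey(lastKeyEvaluated);

            ScanResult result = client.scan(request);
            // Process result.getItems() here.
            lastKeyEvaluated = result.getLastEvaluatedKey();
        } while (lastKeyEvaluated != null);
    }
}

The application would start one worker per segment, for example new Thread(new
ParallelScanWorker(0, 3)), and wait for all of them to finish.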
Note
A parallel scan with a large number of workers can easily consume all of the provisioned
throughput for the table or index being scanned. It is best to avoid such scans if the table or
index is also incurring heavy read or write activity from other applications.
To control the amount of data returned per request, use the Limit parameter. This can help
prevent situations where one worker consumes all of the provisioned throughput, at the
expense of all other workers.
The following are the steps to scan a table using the AWS SDK for Java Document API.
Example
The table maintains all the replies for various forum threads. Therefore, the primary key is composed
of both the Id (partition key) and ReplyDateTime (sort key). The following Java code snippet scans the
entire table. The ScanRequest instance specifies the name of the table to scan.
Example
The following Java snippet scans the ProductCatalog table to find items that are priced less than 100.
The snippet specifies the following optional parameters:
• A filter expression to retrieve only the items priced less than 100.
Example
You can also optionally limit the page size, or the number of items per page, by using the withLimit
method of the scan request. Each time you execute the scan method, you get one page of results
that has the specified number of items. To fetch the next page, you execute the scan method
again by providing the primary key value of the last item in the previous page so that the scan
method can return the next set of items. You provide this information in the request by using the
withExclusiveStartKey method. Initially, the parameter of this method can be null. To retrieve
subsequent pages, you must update this property value to the primary key of the last item in the
preceding page.
The following Java code snippet scans the ProductCatalog table. In the request, the withLimit and
withExclusiveStartKey methods are used. The do/while loop continues to scan one page at a time
until the getLastEvaluatedKey method of the result returns a value of null.
Example
Note
This code sample assumes that you have already loaded data into DynamoDB for your account
by following the instructions in the Creating Tables and Loading Sample Data (p. 323) section.
For step-by-step instructions to run the following example, see Java Code Examples (p. 328).
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.ScanOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
        findProductsForPriceLessThanOneHundred();
    }

        System.out.println("Scan of " + tableName + " for items with a price less than 100.");
        Iterator<Item> iterator = items.iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next().toJSONPretty());
        }
    }
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.ScanOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.ScanSpec;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
        shutDownExecutorService(executor);
    }

    @Override
    public void run() {
        System.out.println("Scanning " + tableName + " segment " + segment + " out of "
            + totalSegments + " segments " + itemLimit + " items at a time...");
        int totalScannedItemCount = 0;

        try {
            ScanSpec spec = new ScanSpec()
                .withMaxResultSize(itemLimit)
                .withTotalSegments(totalSegments)
                .withSegment(segment);
        }
        catch (Exception e) {
            System.err.println(e.getMessage());
        }
        finally {
            System.out.println("Scanned " + totalScannedItemCount + " items from segment "
                + segment + " out of " + totalSegments + " of " + tableName);
        }
    }
}
        try {
            System.out.println("Processing record #" + productIndex);
        }
        catch (Exception e) {
            System.err.println("Failed to create item " + productIndex + " in " + tableName);
            System.err.println(e.getMessage());
        }
    }
}
catch (Exception e) {
System.err.println("Failed to delete table " + tableName);
e.printStackTrace(System.err);
}
}
        try {
            System.out.println("Creating table " + tableName);

            if (sortKeyName != null) {
                keySchema.add(new KeySchemaElement()
                    .withAttributeName(sortKeyName).withKeyType(KeyType.RANGE)); // Sort key
                attributeDefinitions.add(new AttributeDefinition()
                    .withAttributeName(sortKeyName).withAttributeType(sortKeyType));
            }

            Table table = dynamoDB.createTable(tableName, keySchema, attributeDefinitions,
                new ProvisionedThroughput()
                    .withReadCapacityUnits(readCapacityUnits)
                    .withWriteCapacityUnits(writeCapacityUnits));

            System.out.println("Waiting for " + tableName + " to be created...this may take a while...");
            table.waitForActive();
        }
        catch (Exception e) {
            System.err.println("Failed to create table " + tableName);
            e.printStackTrace(System.err);
        }
    }
The following are the steps to scan a table using the low-level AWS SDK for .NET API:
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the ScanRequest class and provide scan operation parameters.
3. Execute the Scan method and provide the ScanRequest object that you created in the preceding step.
Example
The table maintains all the replies for various forum threads. Therefore, the primary key is composed of
both the Id (partition key) and ReplyDateTime (sort key). The following C# code snippet scans the entire
table. The ScanRequest instance specifies the name of the table to scan.
Example
The following C# code scans the ProductCatalog table to find items that are priced less than 0. The
sample specifies the following optional parameters:
• A FilterExpression parameter to retrieve only the items priced less than 0 (error condition).
• A ProjectionExpression parameter to specify the attributes to retrieve for items in the query
results.
The following C# code snippet scans the ProductCatalog table to find all items priced less than 0.
Example
You can also optionally limit the page size, or the number of items per page, by adding the optional
Limit parameter. Each time you execute the Scan method, you get one page of results that has the
specified number of items. To fetch the next page, you execute the Scan method again by providing
the primary key value of the last item in the previous page so that the Scan method can return the next
set of items. You provide this information in the request by setting the ExclusiveStartKey property.
Initially, this property can be null. To retrieve subsequent pages, you must update this property value to
the primary key of the last item in the preceding page.
The following C# code snippet scans the ProductCatalog table. In the request, it specifies the Limit and
ExclusiveStartKey optional parameters. The do/while loop continues to scan one page at a time
until LastEvaluatedKey returns a null value.
Example
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class LowLevelScan
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
try
{
FindProductsForPriceLessThanZero();
            Console.WriteLine(
                attributeName + " " +
                (value.S == null ? "" : "S=[" + value.S + "]") +
                (value.N == null ? "" : "N=[" + value.N + "]") +
                (value.SS == null ? "" : "SS=[" + string.Join(",", value.SS.ToArray()) + "]") +
                (value.NS == null ? "" : "NS=[" + string.Join(",", value.NS.ToArray()) + "]")
            );
}
Console.WriteLine("************************************************");
}
}
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
namespace com.amazonaws.codesamples
{
class LowLevelParallelScan
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
private static string tableName = "ProductCatalog";
private static int exampleItemCount = 100;
private static int scanItemLimit = 10;
private static int totalSegments = 5;
tasks[segment] = task;
}
CreateItem(itemIndex.ToString());
}
Console.WriteLine();
}
WaitUntilTableReady(tableName);
}
Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However,
many applications might benefit from having one or more secondary (or alternate) keys available, to
allow efficient access to data with attributes other than the primary key. To address this, you can create
one or more secondary indexes on a table, and issue Query or Scan requests against these indexes.
A secondary index is a data structure that contains a subset of attributes from a table, along with an
alternate key to support Query operations. You can retrieve data from the index using a Query, in much
the same way as you use Query with a table. A table can have multiple secondary indexes, which gives
your applications access to many different query patterns.
Note
You can also Scan an index, in much the same way as you would Scan a table.
Every secondary index is associated with exactly one table, from which it obtains its data. This is called
the base table for the index. When you create an index, you define an alternate key for the index
(partition key and sort key). You also define the attributes that you want to be projected, or copied, from
the base table into the index. DynamoDB copies these attributes into the index, along with the primary
key attributes from the base table. You can then query or scan the index just as you would query or scan
a table.
Every secondary index is automatically maintained by DynamoDB. When you add, modify, or delete
items in the base table, any indexes on that table are also updated to reflect these changes.
DynamoDB supports two types of secondary indexes:
• Global secondary index — an index with a partition key and a sort key that can be different from
those on the base table. A global secondary index is considered "global" because queries on the index
can span all of the data in the base table, across all partitions.
• Local secondary index — an index that has the same partition key as the base table, but a different
sort key. A local secondary index is "local" in the sense that every partition of a local secondary index is
scoped to a base table partition that has the same partition key value.
You should consider your application's requirements when you determine which type of index to use.
The following table shows the main differences between a global secondary index and a local secondary
index:
Key Schema
• Global secondary index: The primary key can be either simple (partition key) or composite (partition
key and sort key).
• Local secondary index: The primary key must be composite (partition key and sort key).

Key Attributes
• Global secondary index: The index partition key and sort key (if present) can be any base table
attributes of type string, number, or binary.
• Local secondary index: The partition key of the index is the same attribute as the partition key of the
base table. The sort key can be any base table attribute of type string, number, or binary.

Size Restrictions Per Partition Key Value
• Global secondary index: There are no size restrictions.
• Local secondary index: For each partition key value, the total size of all indexed items must be 10 GB
or less.

Online Index Operations
• Global secondary index: Global secondary indexes can be created at the same time that you create a
table. You can also add a new global secondary index to an existing table, or delete an existing global
secondary index. For more information, see Managing Global Secondary Indexes (p. 503).
• Local secondary index: Local secondary indexes are created at the same time that you create a table.
You cannot add a local secondary index to an existing table, nor can you delete any local secondary
indexes that currently exist.

Queries and Partitions
• Global secondary index: Lets you query over the entire table, across all partitions.
• Local secondary index: Lets you query over a single partition, as specified by the partition key value
in the query.

Projected Attributes
• Global secondary index: With queries or scans on the index, you can only request the attributes that
are projected into the index. DynamoDB will not fetch any attributes from the table.
• Local secondary index: If you query or scan the index, you can request attributes that are not
projected into the index. DynamoDB will automatically fetch those attributes from the table.
If you want to create more than one table with secondary indexes, you must do so sequentially. For
example, you would create the first table and wait for it to become ACTIVE, create the next table and
wait for it to become ACTIVE, and so on. If you attempt to concurrently create more than one table with
a secondary index, DynamoDB will return a LimitExceededException.
When you create a secondary index, you must specify the following:
• The type of index to be created – either a global secondary index or a local secondary index.
• A name for the index. The naming rules for indexes are the same as those for tables, as listed in Limits
in DynamoDB (p. 873). The name must be unique for the base table it is associated with, but you can
use the same name for indexes that are associated with different base tables.
• The key schema for the index. Every attribute in the index key schema must be a top-level attribute of
type String, Number, or Binary. Other data types, including documents and sets, are not allowed. Other
requirements for the key schema depend on the type of index:
• For a global secondary index, the partition key can be any scalar attribute of the base table. A sort
key is optional, and it too can be any scalar attribute of the base table.
• For a local secondary index, the partition key must be the same as the base table's partition key, and
the sort key must be a non-key base table attribute.
• Additional attributes, if any, to project from the base table into the index. These attributes are in
addition to the table's key attributes, which are automatically projected into every index. You can
project attributes of any data type, including scalars, documents, and sets.
• The provisioned throughput settings for the index, if necessary:
• For a global secondary index, you must specify read and write capacity unit settings. These
provisioned throughput settings are independent of the base table's settings.
• For a local secondary index, you do not need to specify read and write capacity unit settings. Any
read and write operations on a local secondary index draw from the provisioned throughput settings
of its base table.
For maximum query flexibility, you can create up to 20 global secondary indexes (default limit) and up to
5 local secondary indexes per table.
The limit of global secondary indexes per table is 5 for the following regions:
To get a detailed listing of secondary indexes on a table, use the DescribeTable operation.
DescribeTable returns the name, storage size, and item count for every secondary index on the
table. These values are not updated in real time, but they are refreshed approximately every six hours.
You can access the data in a secondary index using either the Query or Scan operation. You must specify
the name of the base table and the name of the index that you want to use, the attributes to be returned
in the results, and any condition expressions or filters that you want to apply. DynamoDB can return the
results in ascending or descending order.
When you delete a table, all of the indexes associated with that table are also deleted.
For best practices, see Best Practices for Using Secondary Indexes in DynamoDB (p. 814).
Some applications might need to perform many kinds of queries, using a variety of different attributes
as query criteria. To support these requirements, you can create one or more global secondary indexes
and issue Query requests against these indexes. To illustrate, consider a table named GameScores that
keeps track of users and scores for a mobile gaming application. Each item in GameScores is identified
by a partition key (UserId) and a sort key (GameTitle). The following diagram shows how the items in the
table would be organized. (Not all of the attributes are shown.)
Now suppose that you wanted to write a leaderboard application to display top scores for each game.
A query that specified the key attributes (UserId and GameTitle) would be very efficient; however, if the
application needed to retrieve data from GameScores based on GameTitle only, it would need to use a
Scan operation. As more items are added to the table, scans of all the data would become slow and
inefficient, making it difficult to answer questions such as these:
• What is the top score ever recorded for the game Meteor Blasters?
• Which user had the highest score for Galaxy Invaders?
• What was the highest ratio of wins vs. losses?
To speed up queries on non-key attributes, you can create a global secondary index. A global secondary
index contains a selection of attributes from the base table, but they are organized by a primary key that
is different from that of the table. The index key does not need to have any of the key attributes from
the table; it doesn't even need to have the same key schema as a table.
For example, you could create a global secondary index named GameTitleIndex, with a partition key of
GameTitle and a sort key of TopScore. Since the base table's primary key attributes are always projected
into an index, the UserId attribute is also present. The following diagram shows what GameTitleIndex
index would look like:
Now you can query GameTitleIndex and easily obtain the scores for Meteor Blasters. The results are
ordered by the sort key values, TopScore. If you set the ScanIndexForward parameter to false, the
results are returned in descending order, so the highest score is returned first.
Every global secondary index must have a partition key, and can have an optional sort key. The index
key schema can be different from the base table schema; you could have a table with a simple primary
key (partition key), and create a global secondary index with a composite primary key (partition key and
sort key) — or vice-versa. The index key attributes can consist of any top-level String, Number, or Binary
attributes from the base table; other scalar types, document types, and set types are not allowed.
You can project other base table attributes into the index if you want. When you query the index,
DynamoDB can retrieve these projected attributes efficiently; however, global secondary index queries
cannot fetch attributes from the base table. For example, if you queried GameTitleIndex, as shown in
the diagram above, the query would not be able to access any non-key attributes other than TopScore
(though the key attributes GameTitle and UserId would automatically be projected).
In a DynamoDB table, each key value must be unique. However, the key values in a global secondary
index do not need to be unique. To illustrate, suppose that a game named Comet Quest is especially
difficult, with many new users trying but failing to get a score above zero. Here is some data that we
could use to represent this:
When this data is added to the GameScores table, DynamoDB will propagate it to GameTitleIndex. If
we then query the index using Comet Quest for GameTitle and 0 for TopScore, the following data is
returned:
Only the items with the specified key values appear in the response; within that set of data, the items are
in no particular order.
A global secondary index only keeps track of data items where its key attribute(s) actually exist. For
example, suppose that you added another new item to the GameScores table, but only provided the
required primary key attributes:
UserId GameTitle
Because you didn't specify the TopScore attribute, DynamoDB would not propagate this item to
GameTitleIndex. Thus, if you queried GameScores for all the Comet Quest items, you would get the
following four items:
A similar query on GameTitleIndex would still return three items, rather than four. This is because the
item with the nonexistent TopScore is not propagated to the index:
Attribute Projections
A projection is the set of attributes that is copied from a table into a secondary index. The partition key
and sort key of the table are always projected into the index; you can project other attributes to support
your application's query requirements. When you query an index, Amazon DynamoDB can access any
attribute in the projection as if those attributes were in a table of their own.
When you create a secondary index, you need to specify the attributes that will be projected into the
index. DynamoDB provides three different options for this:
• KEYS_ONLY – Each item in the index consists only of the table partition key and sort key values, plus
the index key values. The KEYS_ONLY option results in the smallest possible secondary index.
• INCLUDE – In addition to the attributes described in KEYS_ONLY, the secondary index will include
other non-key attributes that you specify.
• ALL – The secondary index includes all of the attributes from the source table. Because all of the table
data is duplicated in the index, an ALL projection results in the largest possible secondary index.
In the diagram above, GameTitleIndex has only one projected attribute: UserId. So while an application
can efficiently determine the UserId of the top scorers for each game using GameTitle and TopScore in
queries, it can't efficiently determine the highest ratio of wins vs. losses for the top scorers. To do so, it
would have to perform an additional query on the base table to fetch the wins and losses for each of
the top scorers. A more efficient way to support queries on this data would be to project these attributes
from the base table into the global secondary index, as shown in this diagram:
Because the non-key attributes Wins and Losses are projected into the index, an application can
determine the wins vs. losses ratio for any game, or for any combination of game and user ID.
When you choose the attributes to project into a global secondary index, you must consider the tradeoff
between provisioned throughput costs and storage costs:
• If you need to access just a few attributes with the lowest possible latency, consider projecting only
those attributes into a global secondary index. The smaller the index, the less that it will cost to store
it, and the less your write costs will be.
• If your application will frequently access some non-key attributes, you should consider projecting
those attributes into a global secondary index. The additional storage costs for the global secondary
index will offset the cost of performing frequent table scans.
• If you need to access most of the non-key attributes on a frequent basis, you can project these
attributes—or even the entire base table— into a global secondary index. This will give you maximum
flexibility; however, your storage cost would increase, or even double.
• If your application needs to query a table infrequently, but must perform many writes or updates
against the data in the table, consider projecting KEYS_ONLY. The global secondary index would be of
minimal size, but would still be available when needed for query activity.
Consider the following Query request that retrieves gaming data for a leaderboard application:
{
"TableName": "GameScores",
"IndexName": "GameTitleIndex",
"KeyConditionExpression": "GameTitle = :v_title",
"ExpressionAttributeValues": {
":v_title": {"S": "Meteor Blasters"}
},
"ProjectionExpression": "UserId, TopScore",
"ScanIndexForward": false
}
In this query:
• DynamoDB accesses GameTitleIndex, using the GameTitle partition key to locate the index items
for Meteor Blasters. All of the index items with this key are stored adjacent to each other for rapid
retrieval.
• Within this game, DynamoDB uses the index to access all of the user IDs and top scores for this game.
• The results are returned, sorted in descending order because the ScanIndexForward parameter is set
to false.
Global secondary indexes inherit the read/write capacity mode from the base table. For more
information, see Considerations When Changing Read/Write Capacity Mode (p. 338).
When you create a global secondary index, you specify one or more index key attributes and their data
types. This means that whenever you write an item to the base table, the data types for those attributes
must match the index key schema's data types. In the case of GameTitleIndex, the GameTitle partition key
in the index is defined as a String data type, and the TopScore sort key in the index is of type Number.
If you attempt to add an item to the GameScores table and specify a different data type for either
GameTitle or TopScore, DynamoDB will return a ValidationException because of the data type
mismatch.
When you put or delete items in a table, the global secondary indexes on that table are updated in an
eventually consistent fashion. Changes to the table data are propagated to the global secondary indexes
within a fraction of a second, under normal conditions. However, in some unlikely failure scenarios,
longer propagation delays might occur. Because of this, your applications need to anticipate and handle
situations where a query on a global secondary index returns results that are not up-to-date.
If you write an item to a table, you don't have to specify the attributes for any global secondary index
sort key. Using GameTitleIndex as an example, you would not need to specify a value for the TopScore
attribute in order to write a new item to the GameScores table. In this case, Amazon DynamoDB does not
write any data to the index for this particular item.
A table with many global secondary indexes will incur higher costs for write activity than tables with
fewer indexes. For more information, see Provisioned Throughput Considerations for Global Secondary
Indexes (p. 501).
For example, if you Query a global secondary index and exceed its provisioned read capacity, your
request will be throttled. If you perform heavy write activity on the table, but a global secondary index
on that table has insufficient write capacity, then the write activity on the table will be throttled.
Note
To avoid potential throttling, the provisioned write capacity for a global secondary index should
be equal to or greater than the write capacity of the base table, because new updates write to
both the base table and the global secondary index.
To view the provisioned throughput settings for a global secondary index, use the DescribeTable
operation; detailed information about all of the table's global secondary indexes will be returned.
For global secondary index queries, DynamoDB calculates the provisioned read activity in the same way
as it does for queries against tables. The only difference is that the calculation is based on the sizes of
the index entries, rather than the size of the item in the base table. The number of read capacity units
is the sum of all projected attribute sizes across all of the items returned; the result is then rounded up
to the next 4 KB boundary. For more information on how DynamoDB calculates provisioned throughput
usage, see Managing Throughput Settings on Provisioned Tables (p. 339).
The maximum size of the results returned by a Query operation is 1 MB; this includes the sizes of all the
attribute names and values across all of the items returned.
For example, consider a global secondary index where each item contains 2000 bytes of data. Now
suppose that you Query this index and that the query returns 8 items. The total size of the matching
items is 2000 bytes × 8 items = 16,000 bytes; this is then rounded up to the nearest 4 KB boundary.
Since global secondary index queries are eventually consistent, the total cost is 0.5 × (16 KB / 4 KB), or 2
read capacity units.
In order for a table write to succeed, the provisioned throughput settings for the table and all of its
global secondary indexes must have enough write capacity to accommodate the write; otherwise, the
write to the table will be throttled.
The cost of writing an item to a global secondary index depends on several factors:
• If you write a new item to the table that defines an indexed attribute, or you update an existing item
to define a previously undefined indexed attribute, one write operation is required to put the item into
the index.
• If an update to the table changes the value of an indexed key attribute (from A to B), two writes are
required, one to delete the previous item from the index and another write to put the new item into
the index.
• If an item was present in the index, but a write to the table caused the indexed attribute to be deleted,
one write is required to delete the old item projection from the index.
• If an item is not present in the index before or after the item is updated, there is no additional write
cost for the index.
• If an update to the table only changes the value of projected attributes in the index key schema, but
does not change the value of any indexed key attribute, then one write is required to update the values
of the projected attributes into the index.
All of these factors assume that the size of each item in the index is less than or equal to the 1 KB item
size for calculating write capacity units. Larger index entries will require additional write capacity units.
You can minimize your write costs by considering which attributes your queries will need to return and
projecting only those attributes into the index.
The amount of space used by an index item is the sum of the following:
• The size in bytes of the base table primary key (partition key and sort key)
• The size in bytes of the index key attribute
• The size in bytes of the projected attributes (if any)
• 100 bytes of overhead per index item
To estimate the storage requirements for a global secondary index, you can estimate the average size
of an item in the index and then multiply by the number of items in the base table that have the global
secondary index key attributes.
If a table contains an item where a particular attribute is not defined, but that attribute is defined as an
index partition key or sort key, then DynamoDB does not write any data for that item to the index.
Topics
• Creating a Table with Global Secondary Indexes (p. 503)
• Describing the Global Secondary Indexes on a Table (p. 503)
• Adding a Global Secondary Index to an Existing Table (p. 504)
• Deleting a Global Secondary Index (p. 506)
• Modifying a Global Secondary Index During Creation (p. 506)
• Detecting and Correcting Index Key Violations (p. 506)
You must specify one attribute to act as the index partition key. You can optionally specify another
attribute for the index sort key. It is not necessary for either of these key attributes to be the
same as a key attribute in the table. For example, in the GameScores table (see Global Secondary
Indexes (p. 496)), neither TopScore nor TopScoreDateTime are key attributes. You could create a global
secondary index with a partition key of TopScore and a sort key of TopScoreDateTime. You might use such
an index to determine whether there is a correlation between high scores and the time of day a game is
played.
Each index key attribute must be a scalar of type String, Number, or Binary. (It cannot be a document or
a set.) You can project attributes of any data type into a global secondary index. This includes scalars,
documents, and sets. For a complete list of data types, see Data Types (p. 12).
If using provisioned mode, you must provide ProvisionedThroughput settings for the index,
consisting of ReadCapacityUnits and WriteCapacityUnits. These provisioned throughput settings
are separate from those of the table, but behave in similar ways. For more information, see Provisioned
Throughput Considerations for Global Secondary Indexes (p. 501).
Global secondary indexes inherit the read/write capacity mode from the base table. For more
information, see Considerations When Changing Read/Write Capacity Mode (p. 338).
The IndexStatus for a global secondary index will be one of the following:
• CREATING—The index is currently being created, and is not yet available for use.
• UPDATING—The provisioned throughput settings of the index are being changed.
• DELETING—The index is currently being deleted, and is no longer available for use.
• ACTIVE—The index is ready for use, and applications can perform Query operations on the index.
When DynamoDB has finished building a global secondary index, the index status changes from
CREATING to ACTIVE.
• An index name. The name must be unique among all the indexes on the table.
• The key schema of the index. You must specify one attribute for the index partition key; you can
optionally specify another attribute for the index sort key. It is not necessary for either of these key
attributes to be the same as a key attribute in the table. The data types for each schema attribute
must be scalar: String, Number, or Binary.
• The attributes to be projected from the table into the index:
• KEYS_ONLY – Each item in the index consists only of the table partition key and sort key values, plus
the index key values.
• INCLUDE – In addition to the attributes described in KEYS_ONLY, the secondary index will include
other non-key attributes that you specify.
• ALL – The index includes all of the attributes from the source table.
• The provisioned throughput settings for the index, consisting of ReadCapacityUnits
and WriteCapacityUnits. These provisioned throughput settings are separate from those of the
table.
You can only create one global secondary index per UpdateTable operation.
When you add a new global secondary index to an existing table, the table continues to be available
while the index is being built. However, the new index is not available for Query operations until its
status changes from CREATING to ACTIVE.
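To make those inputs concrete, the following is a minimal sketch of an online index addition using
the low-level Java API (the GameScores and GameTitleIndex names echo this chapter's example; the
throughput values are arbitrary assumptions):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateGlobalSecondaryIndexAction;
import com.amazonaws.services.dynamodbv2.model.GlobalSecondaryIndexUpdate;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.Projection;
import com.amazonaws.services.dynamodbv2.model.ProjectionType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.UpdateTableRequest;

public class AddGsiSketch {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

        // Definition of the new index: name, key schema, projection, throughput.
        CreateGlobalSecondaryIndexAction createAction = new CreateGlobalSecondaryIndexAction()
            .withIndexName("GameTitleIndex")
            .withKeySchema(
                new KeySchemaElement("GameTitle", KeyType.HASH),
                new KeySchemaElement("TopScore", KeyType.RANGE))
            .withProjection(new Projection().withProjectionType(ProjectionType.KEYS_ONLY))
            .withProvisionedThroughput(new ProvisionedThroughput(10L, 1L));

        UpdateTableRequest request = new UpdateTableRequest()
            .withTableName("GameScores")
            // Any attribute used in the new index key schema must be declared.
            .withAttributeDefinitions(
                new AttributeDefinition("GameTitle", "S"),
                new AttributeDefinition("TopScore", "N"))
            .withGlobalSecondaryIndexUpdates(
                new GlobalSecondaryIndexUpdate().withCreate(createAction));

        client.updateTable(request);
        // The index status is CREATING until the backfill completes.
    }
}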
Resource Allocation
DynamoDB allocates the compute and storage resources that will be needed for building the index.
During the resource allocation phase, the IndexStatus attribute is CREATING and the
Backfilling attribute is false. Use the DescribeTable operation to retrieve the status of a table
and all of its secondary indexes.
While the index is in the resource allocation phase, you can't delete the index or delete its parent
table. You also can't modify the provisioned throughput of the index or the table. You cannot add
or delete other indexes on the table. However, you can modify the provisioned throughput of these
other indexes.
Backfilling
For each item in the table, DynamoDB determines which set of attributes to write to the index
based on its projection (KEYS_ONLY, INCLUDE, or ALL). It then writes these attributes to the index.
During the backfill phase, DynamoDB keeps track of items that are being added, deleted, or updated
in the table. The attributes from these items are also added, deleted, or updated in the index as
appropriate.
During the backfilling phase, the IndexStatus attribute is set to CREATING and the Backfilling
attribute is true. Use the DescribeTable operation to retrieve the status of a table and all of its
secondary indexes.
While the index is backfilling, you cannot delete its parent table. However, you can still delete the
index or modify the provisioned throughput of the table and any of its global secondary indexes.
Note
During the backfilling phase, some writes of violating items might succeed while others
are rejected. After backfilling, all writes to items that violate the new index's key schema
are rejected. We recommend that you run the Violation Detector tool after the backfill
phase finishes to detect and resolve any key violations that might have occurred. For more
information, see Detecting and Correcting Index Key Violations (p. 506).
While the resource allocation and backfilling phases are in progress, the index is in the CREATING state.
During this time, DynamoDB performs read operations on the table. You are not charged for this read
activity.
When the index build is complete, its status changes to ACTIVE. You can't Query or Scan the index until
it is ACTIVE.
Note
In some cases, DynamoDB can't write data from the table to the index due to index key
violations. This can occur if the data type of an attribute value does not match the data type of
an index key schema data type, or if the size of an attribute exceeds the maximum length for an
index key attribute. Index key violations do not interfere with global secondary index creation.
However, when the index becomes ACTIVE, the violating keys are not present in the index.
DynamoDB provides a standalone tool for finding and resolving these issues. For more
information, see Detecting and Correcting Index Key Violations (p. 506).
The time required for building a global secondary index depends on several factors, such as the
following:
• The size of the table
• The number of items in the table that qualify for inclusion in the index
• The number of attributes projected into the index
• The provisioned write capacity of the index
• Write activity on the main table during index creation
If you are adding a global secondary index to a very large table, it might take a long time for the creation
process to complete. To monitor progress and determine whether the index has sufficient write capacity,
consult the following Amazon CloudWatch metrics:
• OnlineIndexPercentageProgress
• OnlineIndexConsumedWriteCapacity
• OnlineIndexThrottleEvents
Note
For more information about CloudWatch metrics related to DynamoDB, see DynamoDB
Metrics (p. 758).
If the provisioned write throughput setting on the index is too low, the index build will take longer
to complete. To shorten the time it takes to build a new global secondary index, you can increase its
provisioned write capacity temporarily.
Note
As a general rule, we recommend setting the provisioned write capacity of the index to 1.5 times
the write capacity of the table. This is a good setting for many use cases; however, your actual
requirements may be higher or lower.
While an index is being backfilled, DynamoDB uses internal system capacity to read from the table. This
is to minimize the impact of the index creation and to assure that your table does not run out of read
capacity.
However, it is possible that the volume of incoming write activity might exceed the provisioned write
capacity of the index. This is a bottleneck scenario, in which the index creation takes more time because
the write activity to the index is throttled. During the index build, we recommend that you monitor
the Amazon CloudWatch metrics for the index to determine whether its consumed write capacity is
exceeding its provisioned capacity. In a bottleneck scenario, you should increase the provisioned write
capacity on the index to avoid write throttling during the backfill phase.
After the index has been created, you should set its provisioned write capacity to reflect the normal
usage of your application.
You can only delete one global secondary index per UpdateTable operation.
While a global secondary index is being deleted, there is no effect on any read or write activity in the
parent table. While the deletion is in progress, you can still modify the provisioned throughput on other
indexes.
Note
When you delete a table using the DeleteTable action, all of the global secondary indexes on
that table are also deleted.
While the backfill is proceeding, you can update the provisioned throughput parameters for the index.
You might decide to do this in order to speed up the index build: You can increase the write capacity of
the index while it is being built, and then decrease it afterward. To modify the provisioned throughput
settings of the index, use the UpdateTable operation. The index status changes to UPDATING, and
Backfilling is true until the index is ready for use.
During the backfilling phase, you can delete the index that is being created. During this phase, you can't
add or delete other indexes on the table.
Note
For indexes that were created as part of a CreateTable operation, the Backfilling attribute
does not appear in the DescribeTable output. For more information, see Phases of Index
Creation (p. 504).
An index key violation occurs in either of the following situations:
• There is a data type mismatch between an attribute value and the index key schema data type. For
example, suppose that one of the items in the GameScores table had a TopScore value of type String. If
you added a global secondary index with a partition key of TopScore, of type Number, the item from the
table would violate the index key.
• An attribute value from the table exceeds the maximum length for an index key attribute. The
maximum length of a partition key is 2048 bytes, and the maximum length of a sort key is 1024 bytes.
If any of the corresponding attribute values in the table exceed these limits, the item from the table
would violate the index key.
If an index key violation occurs, the backfill phase continues without interruption; however, any violating
items are not included in the index. After the backfill phase completes, all writes to items that violate the
new index's key schema will be rejected.
To identify and fix attribute values in a table that violate an index key, use the Violation Detector tool. To
run Violation Detector, you create a configuration file that specifies the name of a table to be scanned,
the names and data types of the global secondary index partition key and sort key, and what actions to
take if any index key violations are found. Violation Detector can run in one of two different modes:
• Detection mode—detect index key violations. Use detection mode to report the items in the table that
would cause key violations in a global secondary index. (You can optionally request that these violating
table items be deleted immediately when they are found.) The output from detection mode is written
to a file, which you can use for further analysis.
• Correction mode— correct index key violations. In correction mode, Violation Detector reads an input
file with the same format as the output file from detection mode. Correction mode reads the records
from the input file and, for each record, it either deletes or updates the corresponding items in the
table. (Note that if you choose to update the items, you must edit the input file and set appropriate
values for these updates.)
Violation Detector is available as an executable Java Archive (.jar file), and will run on Windows,
macOS, or Linux computers. Violation Detector requires Java 1.7 (or later) and Apache Maven.
• https://github.com/awslabs/dynamodb-online-index-violation-detector
Follow the instructions in the README.md file to download and install Violation Detector using Maven.
To start Violation Detector, go to the directory where you have built ViolationDetector.java and
enter the following command (see the tool's README for the full list of command line options):

java -jar ViolationDetector.jar [options]
The AWS credentials file contains the access key and secret key to use:

accessKey = access_key_id_goes_here
secretKey = secret_key_goes_here

In the configuration file, the index key type parameters accept a value of S, N, or B. The
detectionOutputPath and correctionOutputPath parameters accept either a local path or an
Amazon S3 path, for example:

detectionOutputPath = //local/path/filename.csv
detectionOutputPath = s3://bucket/filename.csv
correctionOutputPath = //local/path/filename.csv
correctionOutputPath = s3://bucket/filename.csv
Detection
To detect index key violations, use Violation Detector with the --detect command line option. To show
how this option works, consider the ProductCatalog table shown in Creating Tables and Loading
Sample Data (p. 323). The following is a list of items in the table; only the primary key (Id) and the
Price attribute are shown.
Id    Price
101   5
102   20
103   200
201   100
202   200
203   300
204   400
205   500
Note that all of the values for Price are of type Number. However, because DynamoDB is schemaless, it
is possible to add an item with a non-numeric Price. For example, suppose that we add another item to
the ProductCatalog table:
999 "Hello"
Now we will add a new global secondary index to the table: PriceIndex. The primary key for this index
is a partition key, Price, which is of type Number. After the index has been built, it will contain eight
items—but the ProductCatalog table has nine items. The reason for this discrepancy is that the value
"Hello" is of type String, but PriceIndex has a primary key of type Number. The String value violates
the global secondary index key, so it is not present in the index.
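To reproduce this scenario with the Java document API, you could write the violating item before
creating the index. The following is a minimal sketch; the client setup is an assumption, and the table
name and attribute values come from the example above.

// Assumes a configured client; region and credential setup are not shown.
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);
Table table = dynamoDB.getTable("ProductCatalog");

// DynamoDB validates only the table's own key schema (Id), so this put
// succeeds as long as PriceIndex does not exist yet. After PriceIndex is
// created, a write like this would be rejected.
table.putItem(new Item()
    .withPrimaryKey("Id", 999)
    .withString("Price", "Hello")); // String value, but PriceIndex expects Number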
To use Violation Detector in this scenario, you first create a configuration file such as the following:
awsCredentialsFile = /home/alice/credentials.txt
dynamoDBRegion = us-west-2
tableName = ProductCatalog
gsiHashKeyName = Price
gsiHashKeyType = N
recordDetails = true
recordGsiValueInViolationRecord = true
detectionOutputPath = ./gsi_violation_check.csv
correctionInputPath = ./gsi_violation_check.csv
numOfSegments = 1
readWriteIOPSPercent = 40
When you run Violation Detector in detection mode with this configuration file, the output is similar to
the following:

Violation detection started: sequential scan, Table name: ProductCatalog, GSI name: PriceIndex
Progress: Items scanned in total: 9, Items scanned by this thread: 9, Violations
found by this thread: 1, Violations deleted by this thread: 0
Violation detection finished: Records scanned: 9, Violations found: 1, Violations deleted:
0, see results at: ./gsi_violation_check.csv
If the recordDetails config parameter is set to true, then Violation Detector writes details of each
violation to the output file, as in the following example:
Table Hash Key,GSI Hash Key Value,GSI Hash Key Violation Type,GSI Hash Key Violation Description,GSI Hash Key Update Value(FOR USER),Delete Blank Attributes When Updating?(Y/N)
The output file is in comma-separated value format (CSV). The first line in the file is a header, followed
by one record per item that violates the index key. The fields of these violation records are as follows:
• Table Hash Key—the partition key value of the item in the table.
• Table Range Key—the sort key value of the item in the table.
• GSI Hash Key Value—the partition key value of the global secondary index.
• GSI Hash Key Violation Type—the type of violation (a data type mismatch or a size limit violation).
• GSI Hash Key Violation Description—the reason for the violation.
• GSI Hash Key Update Value(FOR USER)—in correction mode, a replacement value that you supply for
the violating attribute. If this field (or the corresponding range key update field, when present) is
non-blank, then Delete Blank Attributes When Updating(Y/N) has no effect.
• Delete Blank Attributes When Updating(Y/N)—in correction mode, indicates whether the violating
attribute should be deleted (Y) or kept (N) when the update value fields are left blank.
Note
The output format might vary, depending on the configuration file and command line options.
For example, if the table has a simple primary key (without a sort key), no sort key fields will be
present in the output.
The violation records in the file might not be in sorted order.
Correction
To correct index key violations, use Violation Detector with the --correct command line option.
In correction mode, Violation Detector reads the input file specified by the correctionInputPath
parameter. This file has the same format as the detectionOutputPath file, so that you can use the
output from detection as input for correction.
Violation Detector provides two different ways to correct index key violations:
• Delete violations—delete the table items that have violating attribute values.
• Update violations—update the table items, replacing the violating attributes with non-violating
values.
In either case, you can use the output file from detection mode as input for correction mode.
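For illustration only, a correction input row for the ProductCatalog example might look like the
following. The violation type and description strings here are invented for this sketch (use whatever
detection mode actually wrote); the last two columns are the ones you edit by hand, in this case
replacing the violating Price with the number 250:

Table Hash Key,GSI Hash Key Value,GSI Hash Key Violation Type,GSI Hash Key Violation Description,GSI Hash Key Update Value(FOR USER),Delete Blank Attributes When Updating?(Y/N)
999,"Hello",Type Violation,Expected: N Found: S,250,N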
Continuing with our ProductCatalog example, suppose that we want to delete the violating item from
the table. To do this, we run Violation Detector again, this time with the --correct option and the
delete action. At this point, you are asked to confirm whether you want to delete the violating items.
After the deletion, both ProductCatalog and PriceIndex have the same number of items.
The following are the steps to create a table with a global secondary index, using the DynamoDB
document API.
1. Create an instance of the DynamoDB class.
2. Create an instance of the CreateTableRequest class to provide the request information.
You must provide the table name, its primary key, and the provisioned throughput values. For the
global secondary index, you must provide the index name, its provisioned throughput settings, the
attribute definitions for the index sort key, the key schema for the index, and the attribute projection.
3. Call the createTable method by providing the request object as a parameter.
The following Java code snippet demonstrates the preceding steps. The snippet creates a table
(WeatherData) with a global secondary index (PrecipIndex). The index partition key is Date and its sort
key is Precipitation. All of the table attributes are projected into the index. Users can query this index to
obtain weather data for a particular date, optionally sorting the data by precipitation amount.
Note that since Precipitation is not a key attribute for the table, it is not required; however, WeatherData
items without Precipitation will not appear in PrecipIndex.
// Attribute definitions
ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition()
.withAttributeName("Location")
.withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition()
.withAttributeName("Date")
.withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition()
.withAttributeName("Precipitation")
.withAttributeType("N"));
// PrecipIndex
GlobalSecondaryIndex precipIndex = new GlobalSecondaryIndex()
    .withIndexName("PrecipIndex")
    .withProvisionedThroughput(new ProvisionedThroughput()
        .withReadCapacityUnits((long) 10)
        .withWriteCapacityUnits((long) 1))
    .withProjection(new Projection().withProjectionType(ProjectionType.ALL));

ArrayList<KeySchemaElement> indexKeySchema = new ArrayList<KeySchemaElement>();
indexKeySchema.add(new KeySchemaElement()
    .withAttributeName("Date")
    .withKeyType(KeyType.HASH)); // Partition key
indexKeySchema.add(new KeySchemaElement()
    .withAttributeName("Precipitation")
    .withKeyType(KeyType.RANGE)); // Sort key

precipIndex.setKeySchema(indexKeySchema);
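The snippet above defines the index but not the rest of the request. The following is a minimal sketch
of the remaining steps; it assumes a document client named dynamoDB, a table key of Location
(partition) and Date (sort) consistent with the attribute definitions above, and illustrative throughput
values:

ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
tableKeySchema.add(new KeySchemaElement()
    .withAttributeName("Location").withKeyType(KeyType.HASH)); // Partition key
tableKeySchema.add(new KeySchemaElement()
    .withAttributeName("Date").withKeyType(KeyType.RANGE));    // Sort key

CreateTableRequest createTableRequest = new CreateTableRequest()
    .withTableName("WeatherData")
    .withProvisionedThroughput(new ProvisionedThroughput()
        .withReadCapacityUnits((long) 5).withWriteCapacityUnits((long) 1)) // Illustrative values
    .withAttributeDefinitions(attributeDefinitions)
    .withKeySchema(tableKeySchema)
    .withGlobalSecondaryIndexes(precipIndex);

Table table = dynamoDB.createTable(createTableRequest);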
You must wait until DynamoDB creates the table and sets the table status to ACTIVE. After that, you can
begin putting data items into the table.
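With the document API, one way to wait is the waitForActive method on the Table handle returned by
createTable (a sketch; waitForActive throws InterruptedException):

// Blocks until the table status is ACTIVE.
table.waitForActive();
System.out.println("WeatherData table is ready for use.");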
The following are the steps to access global secondary index information about a table.
Example
Iterator<GlobalSecondaryIndexDescription> gsiIter =
    tableDesc.getGlobalSecondaryIndexes().iterator();
while (gsiIter.hasNext()) {
    GlobalSecondaryIndexDescription gsiDesc = gsiIter.next();
    System.out.println("Info for index "
        + gsiDesc.getIndexName() + ":");
}
The following are the steps to query a global secondary index using the AWS SDK for Java Document
API.
The attribute name Date is a DynamoDB reserved word. Therefore, we must use an expression attribute
name as a placeholder in the KeyConditionExpression.
Example
.with("#d", "Date"))
.withValueMap(new ValueMap()
.withString(":v_date","2013-08-10")
.withNumber(":v_precip",0));
Example: Global Secondary Indexes Using the AWS SDK for Java Document API
The following Java code example shows how to work with global secondary indexes. The example
creates a table named Issues, which might be used in a simple bug tracking system for software
development. The partition key is IssueId and the sort key is Title. There are three global secondary
indexes on this table:
• CreateDateIndex—the partition key is CreateDate and the sort key is IssueId. In addition to the
table keys, the attributes Description and Status are projected into the index.
• TitleIndex—the partition key is Title and the sort key is IssueId. No attributes other than the table
keys are projected into the index.
• DueDateIndex—the partition key is DueDate, and there is no sort key. All of the table attributes are
projected into the index.
After the Issues table is created, the program loads the table with data representing software bug
reports, and then queries the data using the global secondary indexes. Finally, the program deletes the
Issues table.
For step-by-step instructions to test the following sample, see Java Code Examples (p. 328).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.util.ArrayList;
import java.util.Iterator;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Index;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.GlobalSecondaryIndex;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.Projection;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
createTable();
loadData();
queryIndex("CreateDateIndex");
queryIndex("TitleIndex");
queryIndex("DueDateIndex");
deleteTable(tableName);
// Attribute definitions
ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition().withAttributeName("IssueId").withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition().withAttributeName("Title").withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition().withAttributeName("CreateDate").withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition().withAttributeName("DueDate").withAttributeType("S"));

// Key schema for the table
ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
tableKeySchema.add(new KeySchemaElement().withAttributeName("IssueId").withKeyType(KeyType.HASH)); // Partition key
tableKeySchema.add(new KeySchemaElement().withAttributeName("Title").withKeyType(KeyType.RANGE));  // Sort key

// CreateDateIndex
GlobalSecondaryIndex createDateIndex = new GlobalSecondaryIndex().withIndexName("CreateDateIndex")
    .withProvisionedThroughput(ptIndex)
    .withKeySchema(
        new KeySchemaElement().withAttributeName("CreateDate").withKeyType(KeyType.HASH), // Partition key
        new KeySchemaElement().withAttributeName("IssueId").withKeyType(KeyType.RANGE))   // Sort key
    .withProjection(
        new Projection().withProjectionType("INCLUDE").withNonKeyAttributes("Description", "Status"));

// TitleIndex
GlobalSecondaryIndex titleIndex = new GlobalSecondaryIndex().withIndexName("TitleIndex")
    .withProvisionedThroughput(ptIndex)
    .withKeySchema(
        new KeySchemaElement().withAttributeName("Title").withKeyType(KeyType.HASH),    // Partition key
        new KeySchemaElement().withAttributeName("IssueId").withKeyType(KeyType.RANGE)) // Sort key
    .withProjection(new Projection().withProjectionType("KEYS_ONLY"));

// DueDateIndex
GlobalSecondaryIndex dueDateIndex = new GlobalSecondaryIndex().withIndexName("DueDateIndex")
    .withProvisionedThroughput(ptIndex)
    .withKeySchema(
        new KeySchemaElement().withAttributeName("DueDate").withKeyType(KeyType.HASH)) // Partition key
    .withProjection(new Projection().withProjectionType("ALL"));
System.out.println("\n***********************************************************
\n");
System.out.print("Querying index " + indexName + "...");
if (indexName == "CreateDateIndex") {
System.out.println("Issues filed on 2013-11-01");
querySpec.withKeyConditionExpression("CreateDate = :v_date and
begins_with(IssueId, :v_issue)")
.withValueMap(new ValueMap().withString(":v_date",
"2013-11-01").withString(":v_issue", "A-"));
items = index.query(querySpec);
}
else if (indexName == "TitleIndex") {
System.out.println("Compilation errors");
querySpec.withKeyConditionExpression("Title = :v_title and
begins_with(IssueId, :v_issue)")
.withValueMap(new ValueMap().withString(":v_title", "Compilation
error").withString(":v_issue", "A-"));
items = index.query(querySpec);
}
else if (indexName == "DueDateIndex") {
System.out.println("Items that are due on 2013-11-30");
querySpec.withKeyConditionExpression("DueDate = :v_date")
.withValueMap(new ValueMap().withString(":v_date", "2013-11-30"));
items = index.query(querySpec);
}
else {
System.out.println("\nNo valid index name provided");
return;
}
Iterator<Item> iterator = items.iterator();
while (iterator.hasNext()) {
    System.out.println(iterator.next().toJSONPretty());
}
// IssueId, Title, Description, CreateDate, LastUpdateDate, DueDate, Priority, Status
putItem("A-102", "Can't read data file",
    "The main data file is missing, or the permissions are incorrect",
    "2013-11-01", "2013-11-04", "2013-11-30", 2, "In progress");
table.putItem(item);
}
You can use the AWS SDK for .NET low-level API to create a table with one or more global secondary
indexes, describe the indexes on the table, and perform queries using the indexes. These operations map
to the corresponding DynamoDB operations. For more information, see the Amazon DynamoDB API
Reference.
The following are the common steps for table operations using the .NET low-level API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Provide the required and optional parameters for the operation by creating the corresponding request
objects.
For example, create a CreateTableRequest object to create a table and a QueryRequest object to
query a table or an index.
3. Execute the appropriate method provided by the client that you created in the preceding step.
The following are the steps to create a table with a global secondary index, using the .NET low-level API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the CreateTableRequest class to provide the request information.
You must provide the table name, its primary key, and the provisioned throughput values. For the
global secondary index, you must provide the index name, its provisioned throughput settings, the
attribute definitions for the index sort key, the key schema for the index, and the attribute projection.
3. Execute the CreateTable method by providing the request object as a parameter.
The following C# code snippet demonstrates the preceding steps. The snippet creates a table
(WeatherData) with a global secondary index (PrecipIndex). The index partition key is Date and its sort
key is Precipitation. All of the table attributes are projected into the index. Users can query this index to
obtain weather data for a particular date, optionally sorting the data by precipitation amount.
Note that since Precipitation is not a key attribute for the table, it is not required; however, WeatherData
items without Precipitation will not appear in PrecipIndex.
// Attribute definitions
var attributeDefinitions = new List<AttributeDefinition>()
{
    new AttributeDefinition {
        AttributeName = "Location",
        AttributeType = "S"
    },
    new AttributeDefinition {
        AttributeName = "Date",
        AttributeType = "S"
    },
    new AttributeDefinition {
        AttributeName = "Precipitation",
        AttributeType = "N"
    }
};
// PrecipIndex
var precipIndex = new GlobalSecondaryIndex
{
    IndexName = "PrecipIndex",
    ProvisionedThroughput = new ProvisionedThroughput
    {
        ReadCapacityUnits = (long)10,
        WriteCapacityUnits = (long)1
    },
    Projection = new Projection { ProjectionType = "ALL" }
};

var indexKeySchema = new List<KeySchemaElement>()
{
    new KeySchemaElement { AttributeName = "Date", KeyType = "HASH" },          // Partition key
    new KeySchemaElement { AttributeName = "Precipitation", KeyType = "RANGE" } // Sort key
};

precipIndex.KeySchema = indexKeySchema;
You must wait until DynamoDB creates the table and sets the table status to ACTIVE. After that, you can
begin putting data items into the table.
The following are the steps to access global secondary index information about a table using the .NET
low-level API.
Example
List<GlobalSecondaryIndexDescription> globalSecondaryIndexes =
response.DescribeTableResult.Table.GlobalSecondaryIndexes;
// This code snippet will work for multiple indexes, even though
// there is only one index in this example.
if (projection.ProjectionType.ToString().Equals("INCLUDE")) {
Console.WriteLine("\t\tThe non-key projected attributes are: "
+ projection.NonKeyAttributes);
}
}
The following are the steps to query a global secondary index using the .NET low-level API.
The attribute name Date is a DynamoDB reserved word. Therefore, we must use an expression attribute
name as a placeholder in the KeyConditionExpression.
Example
}
Console.WriteLine();
}
Example: Global Secondary Indexes Using the AWS SDK for .NET Low-Level API
The following C# code example shows how to work with global secondary indexes. The example creates
a table named Issues, which might be used in a simple bug tracking system for software development.
The partition key is IssueId and the sort key is Title. There are three global secondary indexes on this
table:
• CreateDateIndex—the partition key is CreateDate and the sort key is IssueId. In addition to the table
keys, the attributes Description and Status are projected into the index.
• TitleIndex—the partition key is Title and the sort key is IssueId. No attributes other than the table
keys are projected into the index.
• DueDateIndex—the partition key is DueDate, and there is no sort key. All of the table attributes are
projected into the index.
After the Issues table is created, the program loads the table with data representing software bug
reports, and then queries the data using the global secondary indexes. Finally, the program deletes the
Issues table.
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
using Amazon.SecurityToken;
namespace com.amazonaws.codesamples
{
class LowLevelGlobalSecondaryIndexExample
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
public static String tableName = "Issues";
QueryIndex("CreateDateIndex");
QueryIndex("TitleIndex");
QueryIndex("DueDateIndex");
DeleteTable(tableName);
}
}
};
// CreateDateIndex
var createDateIndex = new GlobalSecondaryIndex()
{
IndexName = "CreateDateIndex",
ProvisionedThroughput = ptIndex,
KeySchema = {
new KeySchemaElement {
AttributeName = "CreateDate", KeyType = "HASH" //Partition key
},
new KeySchemaElement {
AttributeName = "IssueId", KeyType = "RANGE" //Sort key
}
},
Projection = new Projection
{
ProjectionType = "INCLUDE",
NonKeyAttributes = {
"Description", "Status"
}
}
};
// TitleIndex
var titleIndex = new GlobalSecondaryIndex()
{
IndexName = "TitleIndex",
ProvisionedThroughput = ptIndex,
KeySchema = {
new KeySchemaElement {
AttributeName = "Title", KeyType = "HASH" //Partition key
},
new KeySchemaElement {
AttributeName = "IssueId", KeyType = "RANGE" //Sort key
}
},
Projection = new Projection
{
ProjectionType = "KEYS_ONLY"
}
};
// DueDateIndex
var dueDateIndex = new GlobalSecondaryIndex()
{
IndexName = "DueDateIndex",
ProvisionedThroughput = ptIndex,
KeySchema = {
new KeySchemaElement {
AttributeName = "DueDate",
KeyType = "HASH" //Partition key
}
},
Projection = new Projection
{
ProjectionType = "ALL"
}
};
WaitUntilTableReady(tableName);
}
// IssueId, Title,
// Description,
// CreateDate, LastUpdateDate, DueDate,
// Priority, Status
{
Dictionary<String, AttributeValue> item = new Dictionary<string,
AttributeValue>();
try
{
client.PutItem(new PutItemRequest
{
TableName = tableName,
Item = item
});
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
}
String keyConditionExpression;
if (indexName == "CreateDateIndex")
{
Console.WriteLine("Issues filed on 2013-11-01\n");
// Select
queryRequest.Select = "ALL_PROJECTED_ATTRIBUTES";
}
else if (indexName == "DueDateIndex")
{
Console.WriteLine("Items that are due on 2013-11-30\n");
// Select
queryRequest.Select = "ALL_PROJECTED_ATTRIBUTES";
}
else
{
Console.WriteLine("\nNo valid index name provided");
return;
}
queryRequest.KeyConditionExpression = keyConditionExpression;
queryRequest.ExpressionAttributeValues = expressionAttributeValues;
while (tablePresent)
{
System.Threading.Thread.Sleep(5000); // Wait 5 seconds.
try
{
    var res = client.DescribeTable(new DescribeTableRequest
    {
        TableName = tableName
    });
}
catch (ResourceNotFoundException)
{
    tablePresent = false;
}
}
}
}
}
Some applications only need to query data using the base table's primary key; however, there may be
situations where an alternate sort key would be helpful. To give your application a choice of sort keys,
you can create one or more local secondary indexes on a table and issue Query or Scan requests against
these indexes.
For example, consider the Thread table that is defined in Creating Tables and Loading Sample
Data (p. 323). This table is useful for an application such as the AWS Discussion Forums. The following
diagram shows how the items in the table would be organized. (Not all of the attributes are shown.)
DynamoDB stores all of the items with the same partition key value contiguously. In this example, given
a particular ForumName, a Query operation could immediately locate all of the threads for that forum.
Within a group of items with the same partition key value, the items are sorted by sort key value. If the
sort key (Subject) is also provided in the query, DynamoDB can narrow down the results that are returned
—for example, returning all of the threads in the "S3" forum that have a Subject beginning with the
letter "a".
Some requests might require more complex data access patterns. For example:
• Which threads in a particular forum have the most replies?
• Which threads in a particular forum were posted within a given time period?
To answer these questions, the Query action would not be sufficient. Instead, you would have to Scan
the entire table. For a table with millions of items, this would consume a large amount of provisioned
read throughput and take a long time to complete.
However, you can specify one or more local secondary indexes on non-key attributes, such as Replies or
LastPostDateTime.
A local secondary index maintains an alternate sort key for a given partition key value. A local secondary
index also contains a copy of some or all of the attributes from its base table; you specify which
attributes are projected into the local secondary index when you create the table. The data in a local
secondary index is organized by the same partition key as the base table, but with a different sort key.
This lets you access data items efficiently across this different dimension. For greater query or scan
flexibility, you can create up to five local secondary indexes per table.
Suppose that an application needs to find all of the threads that have been posted within the last three
months. Without a local secondary index, the application would have to Scan the entire Thread table
and discard any posts that were not within the specified time frame. With a local secondary index, a
Query operation could use LastPostDateTime as a sort key and find the data quickly.
The following diagram shows a local secondary index named LastPostIndex. Note that the partition key is
the same as that of the Thread table, but the sort key is LastPostDateTime.
In this example, the partition key is ForumName and the sort key of the local secondary index is
LastPostDateTime. In addition, the sort key value from the base table (in this example, Subject) is
projected into the index, but it is not a part of the index key. If an application needs a list that is based on
ForumName and LastPostDateTime, it can issue a Query request against LastPostIndex. The query results
are sorted by LastPostDateTime, and can be returned in ascending or descending order. The query can
also apply key conditions, such as returning only items that have a LastPostDateTime within a particular
time span.
Every local secondary index automatically contains the partition and sort keys from its base table; you
can optionally project non-key attributes into the index. When you query the index, DynamoDB can
retrieve these projected attributes efficiently. When you query a local secondary index, the query can
also retrieve attributes that are not projected into the index. DynamoDB will automatically fetch these
attributes from the base table, but at a greater latency and with higher provisioned throughput costs.
For any local secondary index, you can store up to 10 GB of data per distinct partition key value. This
figure includes all of the items in the base table, plus all of the items in the indexes, that have the same
partition key value. For more information, see Item Collections (p. 539).
Attribute Projections
With LastPostIndex, an application could use ForumName and LastPostDateTime as query criteria;
however, to retrieve any additional attributes, DynamoDB would need to perform additional read
operations against the Thread table. These extra reads are known as fetches, and they can increase the
total amount of provisioned throughput required for a query.
Suppose that you wanted to populate a web page with a list of all the threads in "S3" and the number
of replies for each thread, sorted by the last reply date/time beginning with the most recent reply. To
populate this list, you would need the following attributes:
• Subject
• Replies
• LastPostDateTime
The most efficient way to query this data, and to avoid fetch operations, would be to project the Replies
attribute from the table into the local secondary index, as shown in this diagram:
A projection is the set of attributes that is copied from a table into a secondary index. The partition key
and sort key of the table are always projected into the index; you can project other attributes to support
your application's query requirements. When you query an index, Amazon DynamoDB can access any
attribute in the projection as if those attributes were in a table of their own.
When you create a secondary index, you need to specify the attributes that will be projected into the
index. DynamoDB provides three different options for this:
• KEYS_ONLY – Each item in the index consists only of the table partition key and sort key values, plus
the index key values. The KEYS_ONLY option results in the smallest possible secondary index.
• INCLUDE – In addition to the attributes described in KEYS_ONLY, the secondary index will include
other non-key attributes that you specify.
• ALL – The secondary index includes all of the attributes from the source table. Because all of the table
data is duplicated in the index, an ALL projection results in the largest possible secondary index.
In the previous diagram, the non-key attribute Replies is projected into LastPostIndex. An application can
query LastPostIndex instead of the full Thread table to populate a web page with Subject, Replies and
LastPostDateTime. If any other non-key attributes are requested, DynamoDB would need to fetch those
attributes from the Thread table.
From an application's point of view, fetching additional attributes from the base table is automatic and
transparent, so there is no need to rewrite any application logic. However, note that such fetching can
greatly reduce the performance advantage of using a local secondary index.
When you choose the attributes to project into a local secondary index, you must consider the tradeoff
between provisioned throughput costs and storage costs:
• If you need to access just a few attributes with the lowest possible latency, consider projecting only
those attributes into a local secondary index. The smaller the index, the less that it will cost to store it,
and the less your write costs will be. If there are attributes that you occasionally need to fetch, the cost
for provisioned throughput may well outweigh the longer-term cost of storing those attributes.
• If your application will frequently access some non-key attributes, you should consider projecting
those attributes into a local secondary index. The additional storage costs for the local secondary
index will offset the cost of performing frequent table scans.
• If you need to access most of the non-key attributes on a frequent basis, you can project these
attributes—or even the entire base table— into a local secondary index. This will give you maximum
flexibility and lowest provisioned throughput consumption, because no fetching would be required;
however, your storage cost would increase, or even double if you are projecting all attributes.
• If your application needs to query a table infrequently, but must perform many writes or updates
against the data in the table, consider projecting KEYS_ONLY. The local secondary index would be of
minimal size, but would still be available when needed for query activity.
You must specify one non-key attribute to act as the sort key of the local secondary index. The attribute
that you choose must be a scalar String, Number, or Binary; other scalar types, document types, and set
types are not allowed. For a complete list of data types, see Data Types (p. 12).
Important
For tables with local secondary indexes, there is a 10 GB size limit per partition key value. A
table with local secondary indexes can store any number of items, as long as the total size for
any one partition key value does not exceed 10 GB. For more information, see Item Collection
Size Limit (p. 541).
You can project attributes of any data type into a local secondary index. This includes scalars, documents,
and sets. For a complete list of data types, see Data Types (p. 12).
You can query a local secondary index using either eventually consistent or strongly consistent reads. To
specify which type of consistency you want, use the ConsistentRead parameter of the Query operation. A
strongly consistent read from a local secondary index will always return the latest updated values. If the
query needs to fetch additional attributes from the base table, then those attributes will be consistent
with respect to the index.
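The following is a minimal sketch of a strongly consistent Query against the index, using the Java
document API and the Thread/LastPostIndex names from this section (client setup assumed):

Table table = dynamoDB.getTable("Thread");
Index lastPostIndex = table.getIndex("LastPostIndex");

// ConsistentRead = true requests a strongly consistent read from the index.
QuerySpec spec = new QuerySpec()
    .withConsistentRead(true)
    .withKeyConditionExpression("ForumName = :v_forum")
    .withValueMap(new ValueMap().withString(":v_forum", "EC2"));

ItemCollection<QueryOutcome> items = lastPostIndex.query(spec);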
Example
Consider the following data returned from a Query that requests data from the discussion threads in a
particular forum:
{
"TableName": "Thread",
"IndexName": "LastPostIndex",
"ConsistentRead": false,
"ProjectionExpression": "Subject, LastPostDateTime, Replies, Tags",
"KeyConditionExpression":
"ForumName = :v_forum and LastPostDateTime between :v_start and :v_end",
"ExpressionAttributeValues": {
":v_start": {"S": "2015-08-31T00:00:00.000Z"},
":v_end": {"S": "2015-11-31T00:00:00.000Z"},
":v_forum": {"S": "EC2"}
}
}
In this query:
• DynamoDB accesses LastPostIndex, using the ForumName partition key to locate the index items for
"EC2". All of the index items with this key are stored adjacent to each other for rapid retrieval.
• Within this forum, DynamoDB uses the index to look up the keys that match the specified
LastPostDateTime condition.
• Because the Replies attribute is projected into the index, DynamoDB can retrieve this attribute without
consuming any additional provisioned throughput.
• The Tags attribute is not projected into the index, so DynamoDB must access the Thread table and
fetch this attribute.
• The results are returned, sorted by LastPostDateTime. The index entries are sorted by partition key
value and then by sort key value, and Query returns them in the order they are stored. (You can use
the ScanIndexForward parameter to return the results in descending order.)
Because the Tags attribute is not projected into the local secondary index, DynamoDB must consume
additional read capacity units to fetch this attribute from the base table. If you need to run this query
often, you should project Tags into LastPostIndex to avoid fetching from the base table. However, if you
need to access Tags only occasionally, the additional storage cost for projecting Tags into the index
might not be worthwhile.
When you scan a local secondary index, DynamoDB reads all of the data in the index and returns it to
the application. You can also request that only some of the data be returned, and that the remaining
data should be discarded. To do this, use the FilterExpression parameter of the Scan
API. For more information, see Filter Expressions for Scan (p. 473).
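As a sketch of this with the Java document API (the index handle and the Replies attribute follow the
LastPostIndex example; client setup assumed):

Index lastPostIndex = dynamoDB.getTable("Thread").getIndex("LastPostIndex");

// Scan the index, keeping only threads with more than 10 replies.
ScanSpec scanSpec = new ScanSpec()
    .withFilterExpression("Replies > :num")
    .withValueMap(new ValueMap().withNumber(":num", 10));

ItemCollection<ScanOutcome> scanned = lastPostIndex.scan(scanSpec);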
When you create a local secondary index, you specify an attribute to serve as the sort key for the index.
You also specify a data type for that attribute. This means that whenever you write an item to the base
table, if the item defines an index key attribute, its type must match the index key schema's data type. In
the case of LastPostIndex, the LastPostDateTime sort key in the index is defined as a String data type. If
you attempt to add an item to the Thread table and specify a different data type for LastPostDateTime
(such as Number), DynamoDB will return a ValidationException because of the data type mismatch.
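For example, the following is a sketch of a write that would fail this validation (Java document API;
the table handle and item values are assumptions for this example):

// LastPostIndex defines LastPostDateTime as a String sort key, so supplying
// a Number here causes DynamoDB to reject the write with a ValidationException.
table.putItem(new Item()
    .withPrimaryKey("ForumName", "S3", "Subject", "Data archival")
    .withNumber("LastPostDateTime", 20151130));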
There is no requirement for a one-to-one relationship between the items in a base table and the items in
a local secondary index; in fact, this behavior can be advantageous for many applications.
A table with many local secondary indexes will incur higher costs for write activity than tables with
fewer indexes. For more information, see Provisioned Throughput Considerations for Local Secondary
Indexes (p. 537).
Important
For tables with local secondary indexes, there is a 10 GB size limit per partition key value. A
table with local secondary indexes can store any number of items, as long as the total size for
any one partition key value does not exceed 10 GB. For more information, see Item Collection
Size Limit (p. 541).
As with table queries, an index query can use either eventually consistent or strongly consistent reads
depending on the value of ConsistentRead. One strongly consistent read consumes one read capacity
unit; an eventually consistent read consumes only half of that. Thus, by choosing eventually consistent
reads, you can reduce your read capacity unit charges.
For index queries that request only index keys and projected attributes, DynamoDB calculates the
provisioned read activity in the same way as it does for queries against tables. The only difference is
that the calculation is based on the sizes of the index entries, rather than the size of the item in the
base table. The number of read capacity units is the sum of all projected attribute sizes across all of the
items returned; the result is then rounded up to the next 4 KB boundary. For more information on how
DynamoDB calculates provisioned throughput usage, see Managing Throughput Settings on Provisioned
Tables (p. 339).
For index queries that read attributes that are not projected into the local secondary index, DynamoDB
will need to fetch those attributes from the base table, in addition to reading the projected attributes
from the index. These fetches occur when you include any non-projected attributes in the Select or
ProjectionExpression parameters of the Query operation. Fetching causes additional latency in
query responses, and it also incurs a higher provisioned throughput cost: In addition to the reads from
the local secondary index described above, you are charged for read capacity units for every base table
item fetched. This charge is for reading each entire item from the table, not just the requested attributes.
The maximum size of the results returned by a Query operation is 1 MB; this includes the sizes of all
the attribute names and values across all of the items returned. However, if a Query against a local
secondary index causes DynamoDB to fetch item attributes from the base table, the maximum size of the
data in the results might be lower. In this case, the result size is the sum of:
• The size of the matching items in the index, rounded up to the next 4 KB.
• The size of each matching item in the base table, with each item individually rounded up to the next 4
KB.
Using this formula, the maximum size of the results returned by a Query operation is still 1 MB.
For example, consider a table where the size of each item is 300 bytes. There is a local secondary index
on that table, but only 200 bytes of each item is projected into the index. Now suppose that you Query
this index, that the query requires table fetches for each item, and that the query returns 4 items.
DynamoDB sums up the following:
• The size of the matching items in the index: 200 bytes × 4 items = 800 bytes; this is then rounded up
to 4 KB.
• The size of each matching item in the base table: (300 bytes, rounded up to 4 KB) × 4 items = 16 KB.
The total size of the data in the result is therefore 20 KB.
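To translate this into provisioned read activity, using the rates described earlier in this section: a
strongly consistent Query returning these 20 KB would consume 20 KB ÷ 4 KB = 5 read capacity units,
and an eventually consistent Query would consume half of that, or 2.5 read capacity units.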
The cost of writing an item to a local secondary index depends on several factors:
• If you write a new item to the table that defines an indexed attribute, or you update an existing item
to define a previously undefined indexed attribute, one write operation is required to put the item into
the index.
• If an update to the table changes the value of an indexed key attribute (from A to B), two writes are
required, one to delete the previous item from the index and another write to put the new item into
the index.
• If an item was present in the index, but a write to the table caused the indexed attribute to be deleted,
one write is required to delete the old item projection from the index.
• If an item is not present in the index before or after the item is updated, there is no additional write
cost for the index.
All of these factors assume that the size of each item in the index is less than or equal to the 1 KB item
size for calculating write capacity units. Larger index entries will require additional write capacity units.
You can minimize your write costs by considering which attributes your queries will need to return and
projecting only those attributes into the index.
The amount of space used by an index item is the sum of the following:
• The size in bytes of the base table primary key (partition key and sort key)
• The size in bytes of the index key attribute
• The size in bytes of the projected attributes (if any)
• 100 bytes of overhead per index item
To estimate the storage requirements for a local secondary index, you can estimate the average size of
an item in the index and then multiply by the number of items in the base table.
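As a worked example with hypothetical numbers: if the base table primary key averages 40 bytes, the
index key attribute 10 bytes, and the projected attributes 50 bytes, each index item occupies about
40 + 10 + 50 + 100 = 200 bytes. A base table with one million items that all appear in the index would
then require roughly 200 MB of index storage.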
If a table contains an item where a particular attribute is not defined, but that attribute is defined as an
index sort key, then DynamoDB does not write any data for that item to the index.
Item Collections
Note
The following section pertains only to tables that have local secondary indexes.
In DynamoDB, an item collection is any group of items that have the same partition key value in a table
and all of its local secondary indexes. In the examples used throughout this section, the partition key
for the Thread table is ForumName, and the partition key for LastPostIndex is also ForumName. All the
table and index items with the same ForumName are part of the same item collection. For example, in
the Thread table and the LastPostIndex local secondary index, there is an item collection for forum EC2
and a different item collection for forum RDS.
The following diagram shows the item collection for forum S3:
In this diagram, the item collection consists of all the items in Thread and LastPostIndex where the
ForumName partition key value is "S3". If there were other local secondary indexes on the table, then any
items in those indexes with ForumName equal to "S3" would also be part of the item collection.
You can use any of the following operations in DynamoDB to return information about item collections:
• BatchWriteItem
• DeleteItem
• PutItem
• UpdateItem
Each of these operations supports the ReturnItemCollectionMetrics parameter. When you set this
parameter to SIZE, you can view information about the size of each item collection that is affected.
Example
Here is a snippet from the output of an UpdateItem operation on the Thread table, with
ReturnItemCollectionMetrics set to SIZE. The item that was updated had a ForumName value of
"EC2", so the output includes information about that item collection.
{
ItemCollectionMetrics: {
ItemCollectionKey: {
ForumName: "EC2"
},
SizeEstimateRangeGB: [0.0, 1.0]
}
}
The SizeEstimateRangeGB object shows that the size of this item collection is between 0 and 1
gigabyte. DynamoDB periodically updates this size estimate, so the numbers might be different next
time the item is modified.
To reduce the size of an item collection, you can do one of the following:
• Delete any unnecessary items with the partition key value in question. When you delete these items
from the base table, DynamoDB will also remove any index entries that have the same partition key
value.
• Update the items by removing attributes or by reducing the size of the attributes. If these
attributes are projected into any local secondary indexes, DynamoDB will also reduce the size of the
corresponding index entries.
• Create a new table with the same partition key and sort key, and then move items from the old table
to the new table. This might be a good approach if a table has historical data that is infrequently
accessed. You might also consider archiving this historical data to Amazon Simple Storage Service
(Amazon S3).
When the total size of the item collection drops below 10 GB, you will once again be able to add items
with the same partition key value.
We recommend as a best practice that you instrument your application to monitor the sizes of your
item collections. One way to do so is to set the ReturnItemCollectionMetrics parameter to
SIZE whenever you use BatchWriteItem, DeleteItem, PutItem or UpdateItem. Your application
should examine the ReturnItemCollectionMetrics object in the output and log an error message
whenever an item collection exceeds a user-defined limit (8 GB, for example). Setting a limit that is less
than 10 GB would provide an early warning system so you know that an item collection is approaching
the limit in time to do something about it.
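The following is a minimal sketch of such instrumentation, using the low-level Java API; the client, the
table name, the 8 GB threshold, and the item map are assumptions for this example:

PutItemRequest request = new PutItemRequest()
    .withTableName("Thread")
    .withItem(item) // a Map<String, AttributeValue> built elsewhere
    .withReturnItemCollectionMetrics(ReturnItemCollectionMetrics.SIZE);

PutItemResult result = client.putItem(request);
ItemCollectionMetrics metrics = result.getItemCollectionMetrics();
if (metrics != null && metrics.getSizeEstimateRangeGB() != null) {
    // getSizeEstimateRangeGB returns [lower, upper] bounds, in gigabytes.
    double upperGB = metrics.getSizeEstimateRangeGB().get(1);
    if (upperGB > 8.0) { // user-defined limit below the 10 GB maximum
        System.err.println("Item collection " + metrics.getItemCollectionKey()
            + " is approaching the 10 GB limit: ~" + upperGB + " GB");
    }
}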
The table and index data for each item collection is stored in the same partition. For example, the "S3"
item collection would be stored on one partition, "EC2" in another partition, and "RDS" in a third
partition.
You should design your applications so that table data is evenly distributed across distinct partition key
values. For tables with local secondary indexes, your applications should not create "hot spots" of read
and write activity within a single item collection on a single partition.
You can use the AWS SDK for Java Document API to create a table with one or more local secondary
indexes, describe the indexes on the table, and perform queries using the indexes.
The following are the common steps for table operations using the AWS SDK for Java Document API.
1. Create an instance of the DynamoDB class.
2. Provide the required and optional parameters for the operation by creating the corresponding request
objects.
3. Call the appropriate method provided by the client that you created in the preceding step.
The following are the steps to create a table with a local secondary index, using the DynamoDB
document API.
1. Create an instance of the DynamoDB class.
2. Create an instance of the CreateTableRequest class to provide the request information.
You must provide the table name, its primary key, and the provisioned throughput values. For the
local secondary index, you must provide the index name, the name and data type for the index sort
key, the key schema for the index, and the attribute projection.
3. Call the createTable method by providing the request object as a parameter.
The following Java code snippet demonstrates the preceding steps. The snippet creates a table (Music)
with a secondary index on the AlbumTitle attribute. The table partition key and sort key, plus the index
sort key, are the only attributes projected into the index.
//ProvisionedThroughput
createTableRequest.setProvisionedThroughput(new
ProvisionedThroughput().withReadCapacityUnits((long)5).withWriteCapacityUnits((long)5));
//AttributeDefinitions
ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition().withAttributeName("Artist").withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition().withAttributeName("SongTitle").withAttributeType("S"));
attributeDefinitions.add(new AttributeDefinition().withAttributeName("AlbumTitle").withAttributeType("S"));
createTableRequest.setAttributeDefinitions(attributeDefinitions);

//KeySchema
ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
tableKeySchema.add(new KeySchemaElement().withAttributeName("Artist").withKeyType(KeyType.HASH));     //Partition key
tableKeySchema.add(new KeySchemaElement().withAttributeName("SongTitle").withKeyType(KeyType.RANGE)); //Sort key
createTableRequest.setKeySchema(tableKeySchema);

//LocalSecondaryIndex
ArrayList<KeySchemaElement> indexKeySchema = new ArrayList<KeySchemaElement>();
indexKeySchema.add(new KeySchemaElement().withAttributeName("Artist").withKeyType(KeyType.HASH));      //Partition key (same as the table)
indexKeySchema.add(new KeySchemaElement().withAttributeName("AlbumTitle").withKeyType(KeyType.RANGE)); //Sort key

Projection projection = new Projection().withProjectionType(ProjectionType.KEYS_ONLY);

LocalSecondaryIndex localSecondaryIndex = new LocalSecondaryIndex()
    .withIndexName("AlbumTitleIndex").withKeySchema(indexKeySchema).withProjection(projection);

createTableRequest.withLocalSecondaryIndexes(localSecondaryIndex);
You must wait until DynamoDB creates the table and sets the table status to ACTIVE. After that, you can
begin putting data items into the table.
The following are the steps to access local secondary index information about a table using the AWS
SDK for Java Document API.
1. Create an instance of the DynamoDB class.
2. Create an instance of the Table class. You must provide the table name.
3. Call the describeTable method on the Table object.
Example
List<LocalSecondaryIndexDescription> localSecondaryIndexes
= tableDescription.getLocalSecondaryIndexes();
// This code snippet will work for multiple indexes, even though
// there is only one index in this example.
The only attributes returned are those that have been projected into the index. You could modify this
query to select non-key attributes too, but this would require table fetch activity that is relatively
expensive. For more information about table fetches, see Attribute Projections (p. 534).
The following are the steps to query a local secondary index using the AWS SDK for Java Document API.
Example
while (itemsIter.hasNext()) {
Item item = itemsIter.next();
System.out.println(item.toJSONPretty());
}
Example: Local Secondary Indexes Using the AWS SDK for Java Document API
The following Java code example shows how to work with local secondary indexes. The example
creates a table named CustomerOrders with a partition key of CustomerId and a sort key of OrderId.
There are two local secondary indexes on this table:
• OrderCreationDateIndex—the sort key is OrderCreationDate, and the following attributes are projected
into the index:
• ProductCategory
• ProductName
• OrderStatus
• ShipmentTrackingId
• IsOpenIndex—the sort key is IsOpen, and all of the table attributes are projected into the index.
After the CustomerOrders table is created, the program loads the table with data representing customer
orders, and then queries the data using the local secondary indexes. Finally, the program deletes the
CustomerOrders table.
For step-by-step instructions to test the following sample, see Java Code Examples (p. 328).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples.document;
import java.util.ArrayList;
import java.util.Iterator;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Index;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.PutItemOutcome;
import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.LocalSecondaryIndex;
import com.amazonaws.services.dynamodbv2.model.Projection;
import com.amazonaws.services.dynamodbv2.model.ProjectionType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.ReturnConsumedCapacity;
import com.amazonaws.services.dynamodbv2.model.Select;
createTable();
loadData();
query(null);
query("IsOpenIndex");
query("OrderCreationDateIndex");
deleteTable(tableName);
attributeDefinitions.add(new AttributeDefinition().withAttributeName("OrderCreationDate").withAttributeType("N"));
attributeDefinitions.add(new AttributeDefinition().withAttributeName("IsOpen").withAttributeType("N"));
createTableRequest.setAttributeDefinitions(attributeDefinitions);

// Key schema for the table
ArrayList<KeySchemaElement> tableKeySchema = new ArrayList<KeySchemaElement>();
tableKeySchema.add(new KeySchemaElement().withAttributeName("CustomerId").withKeyType(KeyType.HASH)); // Partition key
tableKeySchema.add(new KeySchemaElement().withAttributeName("OrderId").withKeyType(KeyType.RANGE));   // Sort key
createTableRequest.setKeySchema(tableKeySchema);
ArrayList<LocalSecondaryIndex> localSecondaryIndexes = new ArrayList<LocalSecondaryIndex>();

// OrderCreationDateIndex
LocalSecondaryIndex orderCreationDateIndex = new LocalSecondaryIndex().withIndexName("OrderCreationDateIndex");

// Key schema for OrderCreationDateIndex
ArrayList<KeySchemaElement> indexKeySchema = new ArrayList<KeySchemaElement>();
indexKeySchema.add(new KeySchemaElement().withAttributeName("CustomerId").withKeyType(KeyType.HASH));         // Partition key
indexKeySchema.add(new KeySchemaElement().withAttributeName("OrderCreationDate").withKeyType(KeyType.RANGE)); // Sort key

// Projection (the attributes listed earlier) for OrderCreationDateIndex
Projection projection = new Projection().withProjectionType(ProjectionType.INCLUDE);
projection.withNonKeyAttributes("ProductCategory", "ProductName", "OrderStatus", "ShipmentTrackingId");

orderCreationDateIndex.setKeySchema(indexKeySchema);
orderCreationDateIndex.setProjection(projection);
localSecondaryIndexes.add(orderCreationDateIndex);

// IsOpenIndex
LocalSecondaryIndex isOpenIndex = new LocalSecondaryIndex().withIndexName("IsOpenIndex");

// Key schema for IsOpenIndex
indexKeySchema = new ArrayList<KeySchemaElement>();
indexKeySchema.add(new KeySchemaElement().withAttributeName("CustomerId").withKeyType(KeyType.HASH)); // Partition key
indexKeySchema.add(new KeySchemaElement().withAttributeName("IsOpen").withKeyType(KeyType.RANGE));    // Sort key

// Projection (all attributes) for IsOpenIndex
projection = new Projection().withProjectionType(ProjectionType.ALL);

isOpenIndex.setKeySchema(indexKeySchema);
isOpenIndex.setProjection(projection);
localSecondaryIndexes.add(isOpenIndex);
System.out.println("\n***********************************************************
\n");
System.out.println("Querying table " + tableName + "...");
if (indexName == "IsOpenIndex") {
querySpec.withProjectionExpression("OrderCreationDate, ProductCategory,
ProductName, OrderStatus");
while (iterator.hasNext()) {
System.out.println(iterator.next().toJSONPretty());
}
else if (indexName == "OrderCreationDateIndex") {
System.out.println("\nUsing index: '" + indexName + "': Bob's orders that were
placed after 01/31/2015.");
System.out.println("Only the projected attributes are returned\n");
Index index = table.getIndex(indexName);
querySpec.withSelect(Select.ALL_PROJECTED_ATTRIBUTES);
while (iterator.hasNext()) {
System.out.println(iterator.next().toJSONPretty());
}
}
else {
System.out.println("\nNo index: All of Bob's orders, by OrderId:\n");
querySpec.withKeyConditionExpression("CustomerId = :v_custid")
.withValueMap(new ValueMap().withString(":v_custid", "bob@example.com"));
while (iterator.hasNext()) {
System.out.println(iterator.next().toJSONPretty());
}
// (Construction of each of the nine CustomerOrders items is omitted here;
// each item is built and then stored with table.putItem.)
putItemOutcome = table.putItem(item);
assert putItemOutcome != null;
}
You can use the AWS SDK for .NET low-level API to create a table with one or more local secondary
indexes, describe the indexes on the table, and perform queries using the indexes. These operations
map to the corresponding low-level DynamoDB API actions. For more information, see .NET Code
Examples (p. 330).
The following are the common steps for table operations using the .NET low-level API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Provide the required and optional parameters for the operation by creating the corresponding request
objects.
For example, create a CreateTableRequest object to create a table and a QueryRequest object
to query a table or an index.
3. Execute the appropriate method provided by the client that you created in the preceding step.
The following are the steps to create a table with a local secondary index, using the .NET low-level API.
1. Create an instance of the AmazonDynamoDBClient class.
2. Create an instance of the CreateTableRequest class to provide the request information.
You must provide the table name, its primary key, and the provisioned throughput values. For the
local secondary index, you must provide the index name, the name and data type of the index sort
key, the key schema for the index, and the attribute projection.
3. Execute the CreateTable method by providing the request object as a parameter.
The following C# code snippet demonstrates the preceding steps. The snippet creates a table (Music)
with a secondary index on the AlbumTitle attribute. The table partition key and sort key, plus the index
sort key, are the only attributes projected into the index.
//ProvisionedThroughput
createTableRequest.ProvisionedThroughput = new ProvisionedThroughput()
{
ReadCapacityUnits = (long)5,
WriteCapacityUnits = (long)5
};
//AttributeDefinitions
List<AttributeDefinition> attributeDefinitions = new List<AttributeDefinition>();
attributeDefinitions.Add(new AttributeDefinition()
{
AttributeName = "Artist",
AttributeType = "S"
});
attributeDefinitions.Add(new AttributeDefinition()
{
AttributeName = "SongTitle",
AttributeType = "S"
});
attributeDefinitions.Add(new AttributeDefinition()
{
AttributeName = "AlbumTitle",
AttributeType = "S"
});
createTableRequest.AttributeDefinitions = attributeDefinitions;
//KeySchema
List<KeySchemaElement> tableKeySchema = new List<KeySchemaElement>();
tableKeySchema.Add(new KeySchemaElement() { AttributeName = "Artist", KeyType = "HASH" });     //Partition key
tableKeySchema.Add(new KeySchemaElement() { AttributeName = "SongTitle", KeyType = "RANGE" }); //Sort key
createTableRequest.KeySchema = tableKeySchema;
You must wait until DynamoDB creates the table and sets the table status to ACTIVE. After that, you can
begin putting data items into the table.
The following are the steps to access local secondary index information about a table using the .NET
low-level API.
Example
// This code snippet will work for multiple indexes, even though
// there is only one index in this example.
foreach (LocalSecondaryIndexDescription lsiDescription in localSecondaryIndexes)
{
Console.WriteLine("Info for index " + lsiDescription.IndexName + ":");
if (projection.ProjectionType.ToString().Equals("INCLUDE"))
{
Console.WriteLine("\t\tThe non-key projected attributes are:");
}
}
The only attributes returned are those that have been projected into the index. You could modify this
query to select non-key attributes too, but this would require table fetch activity that is relatively
expensive. For more information about table fetches, see Attribute Projections (p. 534).
The following are the steps to query a local secondary index using the .NET low-level API.
Example
Example: Local Secondary Indexes Using the AWS SDK for .NET Low-Level API
The following C# code example shows how to work with local secondary indexes. The example creates a
table named CustomerOrders with a partition key of CustomerId and a sort key of OrderId. There are two
local secondary indexes on this table:
• OrderCreationDateIndex—the sort key is OrderCreationDate, and the following attributes are projected
into the index:
• ProductCategory
• ProductName
• OrderStatus
• ShipmentTrackingId
• IsOpenIndex—the sort key is IsOpen, and all of the table attributes are projected into the index.
After the CustomerOrders table is created, the program loads the table with data representing customer
orders, and then queries the data using the local secondary indexes. Finally, the program deletes the
CustomerOrders table.
For step-by-step instructions to test the following sample, see .NET Code Examples (p. 330).
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
namespace com.amazonaws.codesamples
{
class LowLevelLocalSecondaryIndexExample
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
private static string tableName = "CustomerOrders";
Query(null);
Query("IsOpenIndex");
Query("OrderCreationDateIndex");
DeleteTable(tableName);
} },
{ new AttributeDefinition() {
AttributeName = "IsOpen", AttributeType = "N"
}}
};
createTableRequest.AttributeDefinitions = attributeDefinitions;
createTableRequest.KeySchema = tableKeySchema;
// OrderCreationDateIndex
LocalSecondaryIndex orderCreationDateIndex = new LocalSecondaryIndex()
{
IndexName = "OrderCreationDateIndex"
};
orderCreationDateIndex.KeySchema = indexKeySchema;
orderCreationDateIndex.Projection = projection;
localSecondaryIndexes.Add(orderCreationDateIndex);
// IsOpenIndex
LocalSecondaryIndex isOpenIndex
= new LocalSecondaryIndex()
{
IndexName = "IsOpenIndex"
};
isOpenIndex.KeySchema = indexKeySchema;
isOpenIndex.Projection = projection;
localSecondaryIndexes.Add(isOpenIndex);
Console.WriteLine("\n***********************************************************\n");
Console.WriteLine("Querying table " + tableName + "...");
if (indexName == "IsOpenIndex")
{
Console.WriteLine("\nUsing index: '" + indexName
+ "': Bob's orders that are open.");
Console.WriteLine("Only a user-specified list of attributes are returned
\n");
queryRequest.IndexName = indexName;
N = "1"
});
// ProjectionExpression
queryRequest.ProjectionExpression = "OrderCreationDate, ProductCategory, ProductName, OrderStatus";
}
else if (indexName == "OrderCreationDateIndex")
{
Console.WriteLine("\nUsing index: '" + indexName
+ "': Bob's orders that were placed after 01/31/2013.");
Console.WriteLine("Only the projected attributes are returned\n");
queryRequest.IndexName = indexName;
// Select
queryRequest.Select = "ALL_PROJECTED_ATTRIBUTES";
}
else
{
Console.WriteLine("\nNo index: All of Bob's orders, by OrderId:\n");
}
queryRequest.KeyConditionExpression = keyConditionExpression;
queryRequest.ExpressionAttributeValues = expressionAttributeValues;
{
S = "ORDER RECEIVED"
};
/* no ShipmentTrackingId attribute */
putItemRequest = new PutItemRequest
{
TableName = tableName,
Item = item,
ReturnItemCollectionMetrics = "SIZE"
};
client.PutItem(putItemRequest);
S = "Movie"
};
item["ProductName"] = new AttributeValue
{
S = "Calm Before The Storm"
};
item["OrderStatus"] = new AttributeValue
{
S = "SHIPPING DELAY"
};
item["ShipmentTrackingId"] = new AttributeValue
{
S = "859323"
};
putItemRequest = new PutItemRequest
{
TableName = tableName,
Item = item,
ReturnItemCollectionMetrics = "SIZE"
};
client.PutItem(putItemRequest);
{
N = "3"
};
/* no IsOpen attribute */
item["OrderCreationDate"] = new AttributeValue
{
N = "20130221"
};
item["ProductCategory"] = new AttributeValue
{
S = "Music"
};
item["ProductName"] = new AttributeValue
{
S = "Symphony 9"
};
item["OrderStatus"] = new AttributeValue
{
S = "DELIVERED"
};
item["ShipmentTrackingId"] = new AttributeValue
{
S = "645193"
};
putItemRequest = new PutItemRequest
{
TableName = tableName,
Item = item,
ReturnItemCollectionMetrics = "SIZE"
};
client.PutItem(putItemRequest);
ReturnItemCollectionMetrics = "SIZE"
};
client.PutItem(putItemRequest);
S = "DELIVERED"
};
item["ShipmentTrackingId"] = new AttributeValue
{
S = "893927"
};
putItemRequest = new PutItemRequest
{
TableName = tableName,
Item = item,
ReturnItemCollectionMetrics = "SIZE"
};
client.PutItem(putItemRequest);
while (tablePresent)
{
System.Threading.Thread.Sleep(5000); // Wait 5 seconds.
try
{
var res = client.DescribeTable(new DescribeTableRequest
{
TableName = tableName
});
Capturing Table Activity with DynamoDB Streams
Many applications benefit from capturing changes to the items in a DynamoDB table at the point in
time when such changes occur. The following are some example use cases:
• An application in one AWS region modifies the data in a DynamoDB table. A second application in
another AWS region reads these data modifications and writes the data to another table, creating a
replica that stays in sync with the original table.
• A popular mobile app modifies data in a DynamoDB table, at the rate of thousands of updates per
second. Another application captures and stores data about these updates, providing near real time
usage metrics for the mobile app.
• A global multi-player game has a multi-master topology, storing data in multiple AWS regions. Each
master stays in sync by consuming and replaying the changes that occur in the remote regions.
• An application automatically sends notifications to the mobile devices of all friends in a group as soon
as one friend uploads a new picture.
• A new customer adds data to a DynamoDB table. This event invokes another application that sends a
welcome email to the new customer.
DynamoDB Streams enables solutions such as these, and many others. DynamoDB Streams captures a
time-ordered sequence of item-level modifications in any DynamoDB table, and stores this information
in a log for up to 24 hours. Applications can access this log and view the data items as they appeared
before and after they were modified, in near real time.
Encryption at rest encrypts the data in DynamoDB streams. For more information, see DynamoDB
Encryption at Rest (p. 706).
A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB
table. When you enable a stream on a table, DynamoDB captures information about every modification
to data items in the table.
Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a
stream record with the primary key attribute(s) of the items that were modified. A stream record contains
information about a data modification to a single item in a DynamoDB table. You can configure the
stream so that the stream records capture additional information, such as the "before" and "after"
images of modified items.
DynamoDB Streams writes stream records in near real time, so that you can build applications that
consume these streams and take action based on the contents.
AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database
tables and indexes, your application will need to access a DynamoDB endpoint. To read and process
DynamoDB Streams records, your application will need to access a DynamoDB Streams endpoint in the
same region.
The AWS SDKs provide separate clients for DynamoDB and DynamoDB Streams. Depending on your
requirements, your application can access a DynamoDB endpoint, a DynamoDB Streams endpoint, or
both at the same time. To connect to both endpoints, your application will need to instantiate two
clients - one for DynamoDB, and one for DynamoDB Streams.
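The following is a minimal sketch of this two-client setup, written with the AWS SDK for Java client
builders that also appear in the walkthrough later in this section. The region shown is an assumption;
substitute your own.
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreams;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreamsClientBuilder;

    public class TwoClientsExample {
        public static void main(String[] args) {
            // One client for working with tables and indexes.
            AmazonDynamoDB dynamoDBClient = AmazonDynamoDBClientBuilder.standard()
                .withRegion(Regions.US_WEST_2)
                .build();
            // A second client for the DynamoDB Streams endpoint in the same region.
            AmazonDynamoDBStreams streamsClient = AmazonDynamoDBStreamsClientBuilder.standard()
                .withRegion(Regions.US_WEST_2)
                .build();
        }
    }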
Enabling a Stream
You can enable a stream on a new table when you create it. You can also enable or disable a stream on
an existing table, or change the settings of a stream. DynamoDB Streams operates asynchronously, so
there is no performance impact on a table if you enable a stream.
The easiest way to manage DynamoDB Streams is by using the AWS Management Console.
You can also use the CreateTable or UpdateTable APIs to enable or modify a stream. The
StreamSpecification parameter determines how the stream is configured:
• StreamEnabled—specifies whether a stream is enabled (true) or disabled (false) for the table.
• StreamViewType—specifies the information that will be written to the stream whenever data in the
table is modified:
• KEYS_ONLY—only the key attributes of the modified item.
• NEW_IMAGE—the entire item, as it appears after it was modified.
• OLD_IMAGE—the entire item, as it appeared before it was modified.
• NEW_AND_OLD_IMAGES—both the new and the old images of the item.
You can enable or disable a stream at any time. However, note that you will receive a
ResourceInUseException if you attempt to enable a stream on a table that already has a stream, and you
will receive a ValidationException if you attempt to disable a stream on a table which does not have a
stream.
When you set StreamEnabled to true, DynamoDB creates a new stream with a unique stream
descriptor assigned to it. If you disable and then re-enable a stream on the table, a new stream will be
created with a different stream descriptor.
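As an illustration, the following sketch uses the AWS SDK for Java to enable a stream on an existing
table through UpdateTable. The table name and the view type are assumptions for this example.
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.StreamSpecification;
    import com.amazonaws.services.dynamodbv2.model.StreamViewType;
    import com.amazonaws.services.dynamodbv2.model.UpdateTableRequest;

    public class EnableStreamExample {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
            // Capture both the "before" and "after" images of modified items.
            StreamSpecification streamSpec = new StreamSpecification()
                .withStreamEnabled(true)
                .withStreamViewType(StreamViewType.NEW_AND_OLD_IMAGES);
            client.updateTable(new UpdateTableRequest()
                .withTableName("TestTable")
                .withStreamSpecification(streamSpec));
        }
    }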
Every stream is uniquely identified by an Amazon Resource Name (ARN). Here is an example ARN for a
stream on a DynamoDB table named TestTable:
arn:aws:dynamodb:us-west-2:111122223333:table/TestTable/stream/2015-05-11T21:21:33.291
To determine the latest stream descriptor for a table, issue a DynamoDB DescribeTable request and
look for the LatestStreamArn element in the response.
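For example (a sketch with the AWS SDK for Java; the table name is an assumption):
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.DescribeTableResult;

    public class LatestStreamArnExample {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
            DescribeTableResult result = client.describeTable("TestTable");
            // LatestStreamArn is null if a stream has never been enabled on the table.
            System.out.println(result.getTable().getLatestStreamArn());
        }
    }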
A stream consists of stream records. Each stream record represents a single data modification in the
DynamoDB table to which the stream belongs. Each stream record is assigned a sequence number,
reflecting the order in which the record was published to the stream.
Stream records are organized into groups, or shards. Each shard acts as a container for multiple stream
records, and contains information required for accessing and iterating through these records. The stream
records within a shard are removed automatically after 24 hours.
Shards are ephemeral: They are created and deleted automatically, as needed. Any shard can also split
into multiple new shards; this also occurs automatically. (Note that it is also possible for a parent shard
to have just one child shard.) A shard might split in response to high levels of write activity on its parent
table, so that applications can process records from multiple shards in parallel.
If you disable a stream, any shards that are open will be closed.
Because shards have a lineage (parent and children), an application must always process a parent shard
before it processes a child shard. This will ensure that the stream records are also processed in the correct
order. (If you use the DynamoDB Streams Kinesis Adapter, this is handled for you: Your application will
process the shards and stream records in the correct order, and automatically handle new or expired
shards, as well as shards that split while the application is running. For more information, see Using the
DynamoDB Streams Kinesis Adapter to Process Stream Records (p. 571).)
The following diagram shows the relationship between a stream, shards in the stream, and stream
records in the shards.
Note
If you perform a PutItem or UpdateItem operation that does not change any data in an item,
then DynamoDB Streams will not write a stream record for that operation.
To access a stream and process the stream records within, you must do the following:
• Determine the unique Amazon Resource Name (ARN) of the stream that you want to access.
• Determine which shard(s) in the stream contain the stream records that you are interested in.
• Access the shard(s) and retrieve the stream records that you want.
Note
No more than two processes should read from the same Streams shard at the same time. Having more
than two readers per shard can result in throttling.
The DynamoDB Streams API provides the following actions for use by application programs:
• ListStreams—returns a list of stream descriptors for the current account and endpoint. You can
optionally request just the stream descriptors for a particular table name.
• DescribeStream—returns detailed information about a given stream. The output includes a list of
shards associated with the stream, including the shard IDs.
• GetShardIterator—returns a shard iterator, which describes a location within a shard. You can
request that the iterator provide access to the oldest point, the newest point, or a particular point in
the stream.
• GetRecords—returns the stream records from within a given shard. You must provide the shard
iterator returned from a GetShardIterator request.
For complete descriptions of these API actions, including example requests and responses, go to the
Amazon DynamoDB Streams API Reference.
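To see how these actions fit together, here is a simplified sketch (AWS SDK for Java) that walks every
shard of a stream from its oldest available record. A complete, runnable variant appears later in this
chapter; the break-when-empty behavior below is a demo simplification, because an open shard never
returns a null iterator.
    import java.util.List;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreams;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreamsClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.DescribeStreamRequest;
    import com.amazonaws.services.dynamodbv2.model.GetRecordsRequest;
    import com.amazonaws.services.dynamodbv2.model.GetRecordsResult;
    import com.amazonaws.services.dynamodbv2.model.GetShardIteratorRequest;
    import com.amazonaws.services.dynamodbv2.model.Record;
    import com.amazonaws.services.dynamodbv2.model.Shard;
    import com.amazonaws.services.dynamodbv2.model.ShardIteratorType;

    public class ReadStreamSketch {
        public static void main(String[] args) {
            AmazonDynamoDBStreams streamsClient =
                AmazonDynamoDBStreamsClientBuilder.standard().build();
            String streamArn = args[0]; // for example, the table's LatestStreamArn

            // DescribeStream returns the shards (paginated; one page shown here).
            List<Shard> shards = streamsClient.describeStream(
                new DescribeStreamRequest().withStreamArn(streamArn))
                .getStreamDescription().getShards();

            for (Shard shard : shards) {
                // Start at the oldest available record in the shard.
                String iterator = streamsClient.getShardIterator(
                    new GetShardIteratorRequest()
                        .withStreamArn(streamArn)
                        .withShardId(shard.getShardId())
                        .withShardIteratorType(ShardIteratorType.TRIM_HORIZON))
                    .getShardIterator();

                while (iterator != null) {
                    GetRecordsResult result = streamsClient.getRecords(
                        new GetRecordsRequest().withShardIterator(iterator));
                    for (Record record : result.getRecords()) {
                        System.out.println(record.getEventName() + " " + record.getDynamodb());
                    }
                    if (result.getRecords().isEmpty()) {
                        break; // caught up with an open shard; stop for this demo
                    }
                    iterator = result.getNextShardIterator();
                }
            }
        }
    }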
If you disable a stream on a table, the data in the stream will continue to be readable for 24 hours.
After this time, the data expires and the stream records are automatically deleted. Note that there is
no mechanism for manually deleting an existing stream; you just need to wait until the retention limit
expires (24 hours), and all the stream records will be deleted.
Items that are deleted by the Time To Live process after expiration have the following fields:
• Records[<index>].userIdentity.type
"Service"
• Records[<index>].userIdentity.principalId
"dynamodb.amazonaws.com"
The following JSON shows the relevant portion of a single Streams record.
"Records":[
{
...
"userIdentity":{
"type":"Service",
"principalId":"dynamodb.amazonaws.com"
}
...
]}
Items deleted by other users will show the principalId of the account used to delete the items.
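For example, a consumer can use these fields to separate TTL expirations from user deletions. The
following helper is a sketch (AWS SDK for Java); the class and method names are hypothetical.
    import com.amazonaws.services.dynamodbv2.model.Record;

    public class TtlRecordFilter {
        // Returns true if this stream record describes an item that the Time To Live
        // process deleted, rather than a deletion requested by a user.
        public static boolean isTtlDelete(Record record) {
            return "REMOVE".equals(record.getEventName())
                && record.getUserIdentity() != null
                && "Service".equals(record.getUserIdentity().getType())
                && "dynamodb.amazonaws.com".equals(record.getUserIdentity().getPrincipalId());
        }
    }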
The DynamoDB Streams API is intentionally similar to that of Kinesis Streams, a service for real-time
processing of streaming data at massive scale. In both services, data streams are composed of shards,
which are containers for stream records. Both services' APIs contain ListStreams, DescribeStream,
GetShardIterator, and GetRecords actions. (Even though these DynamoDB Streams actions are
similar to their counterparts in Kinesis Streams, they are not 100 percent identical.)
You can write applications for Kinesis Streams using the Kinesis Client Library (KCL). The KCL simplifies
coding by providing useful abstractions above the low-level Kinesis Streams API. For more information
on the KCL, go to the Amazon Kinesis Developer Guide.
As a DynamoDB Streams user, you can leverage the design patterns found within the KCL to process
DynamoDB Streams shards and stream records. To do this, you use the DynamoDB Streams Kinesis
Adapter. The Kinesis Adapter implements the Kinesis Streams interface, so that the KCL can be used for
consuming and processing records from DynamoDB Streams.
The following diagram shows how these libraries interact with one another.
With the DynamoDB Streams Kinesis Adapter in place, you can begin developing against the KCL
interface, with the API calls seamlessly directed at the DynamoDB Streams endpoint.
When your application starts, it calls the KCL to instantiate a worker. You must provide the worker with
configuration information for the application, such as the stream descriptor and AWS credentials, and
the name of a record processor class that you provide. As it runs the code in the record processor, the
worker performs tasks such as connecting to the stream, enumerating its shards, instantiating a record
processor for every shard it manages, invoking the record processor's methods, and checkpointing the
records that have been processed.
Note
For a description of these KCL concepts, go to Developing Amazon Kinesis Consumers
Using the Amazon Kinesis Client Library in the Amazon Kinesis Developer Guide.
The following walkthrough describes a Java application that uses the KCL and the DynamoDB Streams
Kinesis Adapter to replicate data between two tables. The program does the following:
1. Creates two DynamoDB tables named KCL-Demo-src and KCL-Demo-dst. Each of these tables has a
stream enabled on it.
2. Generates update activity in the source table by adding, updating, and deleting items. This causes
data to be written to the table's stream.
3. Reads the records from the stream, reconstructs them as DynamoDB requests, and applies the
requests to the destination table.
4. Scans the source and destination tables to ensure their contents are identical.
5. Cleans up by deleting the tables.
These steps are described in the following sections, and the complete application is shown at the end of
the walkthrough.
Topics
• Step 1: Create DynamoDB Tables (p. 573)
• Step 2: Generate Update Activity in Source Table (p. 574)
• Step 3: Process the Stream (p. 574)
• Step 4: Ensure Both Tables Have Identical Contents (p. 575)
• Step 5: Clean Up (p. 575)
• Complete Program: DynamoDB Streams Kinesis Adapter (p. 576)
Step 1: Create DynamoDB Tables
The following code snippet shows the code used for creating both tables.
.withProvisionedThroughput(provisionedThroughput).withStreamSpecification(streamSpecification);
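The full request that this line belongs to looks like the following sketch; the key schema and throughput
values are assumptions based on the helper code shown later in this walkthrough.
    // A CreateTableRequest with a stream enabled, capturing new item images.
    CreateTableRequest createTableRequest = new CreateTableRequest()
        .withTableName(tableName)
        .withAttributeDefinitions(new AttributeDefinition()
            .withAttributeName("Id").withAttributeType("N"))
        .withKeySchema(new KeySchemaElement()
            .withAttributeName("Id").withKeyType(KeyType.HASH))
        .withProvisionedThroughput(new ProvisionedThroughput()
            .withReadCapacityUnits(2L).withWriteCapacityUnits(2L))
        .withStreamSpecification(new StreamSpecification()
            .withStreamEnabled(true)
            .withStreamViewType(StreamViewType.NEW_IMAGE));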
Step 2: Generate Update Activity in Source Table
The application defines a helper class with methods that call the PutItem, UpdateItem, and
DeleteItem API actions for writing the data. The following code snippet shows how these methods are
used.
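A sketch of that update activity follows; the helper-method signatures are assumptions inferred from
the calls that appear in the record processor shown later in this walkthrough.
    // Put, update, and then delete a single item, producing three stream records.
    StreamsAdapterDemoHelper.putItem(dynamoDBClient, tableName, "101", "test1");
    StreamsAdapterDemoHelper.updateItem(dynamoDBClient, tableName, "101", "test2");
    StreamsAdapterDemoHelper.deleteItem(dynamoDBClient, tableName, "101");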
Step 3: Process the Stream
To process the stream, the application does the following:
• It defines a record processor class, StreamsRecordProcessor, with methods that comply with the
KCL interface definition: initialize, processRecords, and shutdown. The processRecords
method contains the logic required for reading from the source table's stream and writing to the
destination table.
• It defines a class factory for the record processor class (StreamsRecordProcessorFactory). This is
required for Java programs that use the KCL.
• It instantiates a new KCL Worker, which is associated with the class factory.
• It shuts down the Worker when record processing is complete.
To learn more about the KCL interface definition, go to Developing Amazon Kinesis Consumers Using the
Amazon Kinesis Client Library in the Amazon Kinesis Developer Guide.
The following code snippet shows the main loop in StreamsRecordProcessor. The case statement
determines what action to perform, based on the OperationType that appears in the stream record.
switch (streamRecord.getEventName()) {
case "INSERT":
case "MODIFY":
StreamsAdapterDemoHelper.putItem(dynamoDBClient, tableName,
streamRecord.getDynamodb().getNewImage());
break;
case "REMOVE":
StreamsAdapterDemoHelper.deleteItem(dynamoDBClient, tableName,
streamRecord.getDynamodb().getKeys().get("Id").getN());
}
}
checkpointCounter += 1;
if (checkpointCounter % 10 == 0) {
try {
checkpointer.checkpoint();
}
catch (Exception e) {
e.printStackTrace();
}
}
}
Step 4: Ensure Both Tables Have Identical Contents
The DemoHelper class contains a ScanTable method that calls the low-level Scan API. The following
code snippet shows how this is used.
if (StreamsAdapterDemoHelper.scanTable(dynamoDBClient, srcTable).getItems()
.equals(StreamsAdapterDemoHelper.scanTable(dynamoDBClient, destTable).getItems())) {
System.out.println("Scan result is equal.");
}
else {
System.out.println("Tables are different!");
}
Step 5: Clean Up
The demo is complete, so the application deletes the source and destination tables. See the following
code snippet.
Even after the tables are deleted, their streams remain available for up to 24 hours, after which they are
automatically deleted.
dynamoDBClient.deleteTable(new DeleteTableRequest().withTableName(srcTable));
dynamoDBClient.deleteTable(new DeleteTableRequest().withTableName(destTable));
Important
To run this program, make sure the client application has access to DynamoDB and CloudWatch
using policies. For more information, see Using Identity-Based Policies (IAM Policies) for Amazon
DynamoDB (p. 719).
Complete Program: DynamoDB Streams Kinesis Adapter
Here is the complete Java program that performs the tasks described in this walkthrough. It consists of
the following files:
• StreamsAdapterDemo.java
• StreamsRecordProcessor.java
• StreamsRecordProcessorFactory.java
• StreamsAdapterDemoHelper.java
StreamsAdapterDemo.java
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreams;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreamsClientBuilder;
import com.amazonaws.services.dynamodbv2.model.DeleteTableRequest;
import com.amazonaws.services.dynamodbv2.model.DescribeTableResult;
import com.amazonaws.services.dynamodbv2.streamsadapter.AmazonDynamoDBStreamsAdapterClient;
import com.amazonaws.services.dynamodbv2.streamsadapter.StreamsWorkerFactory;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
/**
* @param args
*/
public static void main(String[] args) throws Exception {
System.out.println("Starting demo...");
dynamoDBClient = AmazonDynamoDBClientBuilder.standard()
.withRegion(awsRegion)
.build();
cloudWatchClient = AmazonCloudWatchClientBuilder.standard()
.withRegion(awsRegion)
.build();
dynamoDBStreamsClient = AmazonDynamoDBStreamsClientBuilder.standard()
.withRegion(awsRegion)
.build();
adapterClient = new AmazonDynamoDBStreamsAdapterClient(dynamoDBStreamsClient);
String srcTable = tablePrefix + "-src";
String destTable = tablePrefix + "-dest";
recordProcessorFactory = new StreamsRecordProcessorFactory(dynamoDBClient,
destTable);
setUpTables();
Thread.sleep(25000);
worker.shutdown();
t.join();
if (StreamsAdapterDemoHelper.scanTable(dynamoDBClient, srcTable).getItems()
.equals(StreamsAdapterDemoHelper.scanTable(dynamoDBClient, destTable).getItems())) {
System.out.println("Scan result is equal.");
}
else {
System.out.println("Tables are different!");
}
System.out.println("Done.");
cleanupAndExit(0);
}
awaitTableCreation(srcTable);
performOps(srcTable);
}
StreamsRecordProcessor.java
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.streamsadapter.model.RecordAdapter;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownReason;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
import com.amazonaws.services.kinesis.model.Record;
import java.nio.charset.Charset;
@Override
public void initialize(InitializationInput initializationInput) {
checkpointCounter = 0;
}
@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
for (Record record : processRecordsInput.getRecords()) {
String data = new String(record.getData().array(), Charset.forName("UTF-8"));
System.out.println(data);
if (record instanceof RecordAdapter) {
com.amazonaws.services.dynamodbv2.model.Record streamRecord =
((RecordAdapter) record)
.getInternalObject();
switch (streamRecord.getEventName()) {
case "INSERT":
case "MODIFY":
StreamsAdapterDemoHelper.putItem(dynamoDBClient, tableName,
streamRecord.getDynamodb().getNewImage());
break;
case "REMOVE":
StreamsAdapterDemoHelper.deleteItem(dynamoDBClient, tableName,
streamRecord.getDynamodb().getKeys().get("Id").getN());
}
}
checkpointCounter += 1;
if (checkpointCounter % 10 == 0) {
try {
processRecordsInput.getCheckpointer().checkpoint();
}
catch (Exception e) {
e.printStackTrace();
}
}
}
@Override
public void shutdown(ShutdownInput shutdownInput) {
if (shutdownInput.getShutdownReason() == ShutdownReason.TERMINATE) {
try {
shutdownInput.getCheckpointer().checkpoint();
}
catch (Exception e) {
e.printStackTrace();
}
}
}
}
StreamsRecordProcessorFactory.java
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;
@Override
public IRecordProcessor createProcessor() {
return new StreamsRecordProcessor(dynamoDBClient, tableName);
}
}
StreamsAdapterDemoHelper.java
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeAction;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.AttributeValueUpdate;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.CreateTableResult;
import com.amazonaws.services.dynamodbv2.model.DeleteItemRequest;
import com.amazonaws.services.dynamodbv2.model.DescribeTableRequest;
import com.amazonaws.services.dynamodbv2.model.DescribeTableResult;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
import com.amazonaws.services.dynamodbv2.model.ResourceInUseException;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;
import com.amazonaws.services.dynamodbv2.model.StreamSpecification;
import com.amazonaws.services.dynamodbv2.model.StreamViewType;
import com.amazonaws.services.dynamodbv2.model.UpdateItemRequest;
/**
* @return StreamArn
*/
public static String createTable(AmazonDynamoDB client, String tableName) {
java.util.List<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition().withAttributeName("Id").withAttributeType("N"));
// key
.withProvisionedThroughput(provisionedThroughput).withStreamSpecification(streamSpecification);
try {
System.out.println("Creating table " + tableName);
CreateTableResult result = client.createTable(createTableRequest);
return result.getTableDescription().getLatestStreamArn();
}
catch (ResourceInUseException e) {
System.out.println("Table already exists.");
return describeTable(client, tableName).getTable().getLatestStreamArn();
}
}
This section contains a Java program that shows DynamoDB Streams in action. The program creates a
table with a stream enabled, modifies data in the table, and then reads the resulting change records
from the stream's shards, printing each record that it finds.
Example
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreams;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreamsClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeAction;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.AttributeValueUpdate;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.DescribeStreamRequest;
import com.amazonaws.services.dynamodbv2.model.DescribeStreamResult;
import com.amazonaws.services.dynamodbv2.model.DescribeTableResult;
import com.amazonaws.services.dynamodbv2.model.GetRecordsRequest;
import com.amazonaws.services.dynamodbv2.model.GetRecordsResult;
import com.amazonaws.services.dynamodbv2.model.GetShardIteratorRequest;
import com.amazonaws.services.dynamodbv2.model.GetShardIteratorResult;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.Record;
import com.amazonaws.services.dynamodbv2.model.Shard;
import com.amazonaws.services.dynamodbv2.model.ShardIteratorType;
import com.amazonaws.services.dynamodbv2.model.StreamSpecification;
import com.amazonaws.services.dynamodbv2.model.StreamViewType;
import com.amazonaws.services.dynamodbv2.util.TableUtils;
AmazonDynamoDBStreams streamsClient =
AmazonDynamoDBStreamsClientBuilder
.standard()
.withRegion(Regions.US_EAST_2)
.withCredentials(new DefaultAWSCredentialsProviderChain())
.build();
try {
TableUtils.waitUntilActive(dynamoDBClient, tableName);
} catch (AmazonClientException e) {
e.printStackTrace();
}
describeTableResult.getTable().getLatestStreamArn());
StreamSpecification streamSpec =
describeTableResult.getTable().getStreamSpecification();
System.out.println("Stream enabled: " + streamSpec.getStreamEnabled());
System.out.println("Update view type: " + streamSpec.getStreamViewType());
System.out.println();
// Get all the shard IDs from the stream. Note that DescribeStream returns
// the shard IDs one page at a time.
String lastEvaluatedShardId = null;
do {
DescribeStreamResult describeStreamResult = streamsClient.describeStream(
new DescribeStreamRequest()
.withStreamArn(streamArn)
.withExclusiveStartShardId(lastEvaluatedShardId));
List<Shard> shards = describeStreamResult.getStreamDescription().getShards();
// To prevent running the loop until the shard is sealed (on average about
// 4 hours), we process only the items that were written into DynamoDB and
// then exit.
int processedRecordCount = 0;
while (currentShardIter != null && processedRecordCount < maxItemCount) {
System.out.println(" Shard iterator: " +
currentShardIter.substring(380));
System.out.println("Demo complete");
}
}
Cross-Region Replication
You can create tables that are automatically replicated across two or more AWS Regions, with full
support for multi-master writes. This gives you the ability to build fast, massively scaled applications for
a global user base without having to manage the replication process. For more information, see Global
Tables (p. 612).
DynamoDB Streams and AWS Lambda Triggers
Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that
automatically respond to events in DynamoDB Streams. With triggers, you can build applications that
react to data modifications in DynamoDB tables.
If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function
that you write. Immediately after an item in the table is modified, a new record appears in the table's
stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects
new stream records.
The Lambda function can perform any actions you specify, such as sending a notification or initiating a
workflow. For example, you can write a Lambda function to simply copy each stream record to persistent
storage, such as Amazon Simple Storage Service (Amazon S3), to create a permanent audit trail of
write activity in your table. Or suppose you have a mobile gaming app that writes to a GameScores
table. Whenever the TopScore attribute of the GameScores table is updated, a corresponding stream
record is written to the table's stream. This event could then trigger a Lambda function that posts a
congratulatory message on a social media network. (The function would simply ignore any stream
records that are not updates to GameScores or that do not modify the TopScore attribute.)
For more information about AWS Lambda, see the AWS Lambda Developer Guide.
In this tutorial, you will create an AWS Lambda trigger to process a stream from a DynamoDB table.
The scenario for this tutorial is Woofer, a simple social network. Woofer users communicate using barks
(short text messages) that are sent to other Woofer users. The following diagram shows the components
and workflow for this application:
1. A user writes an item to a DynamoDB table (BarkTable). Each item in the table represents a bark.
2. A new stream record is written to reflect that a new item has been added to BarkTable.
3. The new stream record triggers an AWS Lambda function (publishNewBark).
4. If the stream record indicates that a new item was added to BarkTable, the Lambda function reads the
data from the stream record and publishes a message to a topic in Amazon Simple Notification Service
(Amazon SNS).
5. The message is received by subscribers to the Amazon SNS topic. (In this tutorial, the only subscriber is
an email address.)
This tutorial uses the AWS Command Line Interface. If you have not done so already, follow the
instructions in the AWS Command Line Interface User Guide to install and configure the AWS CLI.
BarkTable will have a stream enabled. Later in this tutorial, you will create a trigger by associating an
AWS Lambda function with the stream.
...
"LatestStreamArn": "arn:aws:dynamodb:region:accountID:table/BarkTable/stream/timestamp
...
Make a note of the region and the accountID, because you will need them for the other steps in
this tutorial.
You will also create a policy for the role. The policy will contain all of the permissions that the Lambda
function will need at runtime.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
3. Create a file named role-policy.json with the following contents. (Replace region and
accountID with your AWS region and account ID.)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:region:accountID:function:publishNewBark*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:region:accountID:*"
},
{
"Effect": "Allow",
"Action": [
"dynamodb:DescribeStream",
"dynamodb:GetRecords",
"dynamodb:GetShardIterator",
"dynamodb:ListStreams"
],
"Resource": "arn:aws:dynamodb:region:accountID:table/BarkTable/stream/*"
},
{
"Effect": "Allow",
"Action": [
"sns:Publish"
],
"Resource": [
"*"
]
}
]
}
The policy has statements that allow WooferLambdaRole to do the following:
• Execute a Lambda function (publishNewBark). You will create the function later in this tutorial.
• Access CloudWatch Logs. The Lambda function will write diagnostics to CloudWatch Logs at
runtime.
• Read data from the DynamoDB stream for BarkTable.
• Publish messages to Amazon SNS.
2. Type the following command to subscribe an email address to wooferTopic. (Replace region and
accountID with your AWS region and account ID, and replace example@example.com with a valid
email address.)
3. Amazon SNS will send a confirmation message to your email address. Click the Confirm
subscription link in that message to complete the subscription process.
The publishNewBark function processes only the stream events that correspond to new items in
BarkTable. The function reads data from such an event, and then invokes Amazon SNS to publish it.
1. Create a file named publishNewBark.js with the following contents. (Replace region and
accountID with your AWS region and account ID.)
'use strict';
var AWS = require("aws-sdk");
var sns = new AWS.SNS();
exports.handler = (event, context, callback) => {
    event.Records.forEach((record) => {
console.log('Stream record: ', JSON.stringify(record, null, 2));
if (record.eventName == 'INSERT') {
var who = JSON.stringify(record.dynamodb.NewImage.Username.S);
var when = JSON.stringify(record.dynamodb.NewImage.Timestamp.S);
var what = JSON.stringify(record.dynamodb.NewImage.Message.S);
var params = {
Subject: 'A new bark from ' + who,
Message: 'Woofer user ' + who + ' barked the following at ' + when + ':\n\n ' + what,
TopicArn: 'arn:aws:sns:region:accountID:wooferTopic'
};
sns.publish(params, function(err, data) {
if (err) {
console.error("Unable to send message. Error JSON:",
JSON.stringify(err, null, 2));
} else {
console.log("Results from sending message: ", JSON.stringify(data,
null, 2));
}
});
}
});
callback(null, `Successfully processed ${event.Records.length} records.`);
};
2. Create a zip file to contain publishNewBark.js. If you have the zip command-line utility you can
type the following command to do this:
3. When you create the Lambda function, you specify the ARN for WooferLambdaRole, which you
created in Step 2: Create a Lambda Execution Role (p. 589). Type the following command to
retrieve this ARN:
...
"Arn": "arn:aws:iam::region:role/service-role/WooferLambdaRole"
...
Now type the following command to create the Lambda function. (Replace roleARN with the ARN
for WooferLambdaRole.)
4. Now you will test publishNewBark to verify that it works. To do this, you will provide input that
resembles a real record from DynamoDB Streams.
{
"Records": [
{
"eventID": "7de3041dd709b024af6f29e4fa13d34c",
"eventName": "INSERT",
"eventVersion": "1.1",
"eventSource": "aws:dynamodb",
"awsRegion": "us-west-2",
"dynamodb": {
"ApproximateCreationDateTime": 1479499740,
"Keys": {
"Timestamp": {
"S": "2016-11-18:12:09:36"
},
"Username": {
"S": "John Doe"
}
},
"NewImage": {
"Timestamp": {
"S": "2016-11-18:12:09:36"
},
"Message": {
"S": "This is a bark from the Woofer social network"
},
"Username": {
"S": "John Doe"
}
},
"SequenceNumber": "13021600000000001596893679",
"SizeBytes": 112,
"StreamViewType": "NEW_IMAGE"
},
"eventSourceARN": "arn:aws:dynamodb:us-east-1:123456789012:table/BarkTable/
stream/2016-11-16T20:42:48.104"
}
]
}
If the test was successful, you will see the following output:
{
"StatusCode": 200
}
You will also receive a new email message within a few minutes.
Note
AWS Lambda writes diagnostic information to Amazon CloudWatch Logs. If you encounter
errors with your Lambda function, you can use these diagnostics for troubleshooting
purposes:
1. When you create the trigger, you will need to specify the ARN for the BarkTable stream. Type the
following command to retrieve this ARN:
...
"LatestStreamArn": "arn:aws:dynamodb:region:accountID:table/BarkTable/stream/timestamp
...
2. Type the following command to create the trigger. (Replace streamARN with the actual stream
ARN.)
3. You will now test the trigger. Type the following command to add an item to BarkTable:
The Lambda function processes only new items that you add to BarkTable. If you update or delete an
item in the table, the function does nothing.
Note
AWS Lambda writes diagnostic information to Amazon CloudWatch Logs. If you encounter
errors with your Lambda function, you can use these diagnostics for troubleshooting purposes.
Best Practices
An AWS Lambda function runs within a container—an execution environment that is isolated from other
functions. When you run a function for the first time, AWS Lambda creates a new container and begins
executing the function's code.
A Lambda function has a handler that is executed once per invocation. The handler contains the main
business logic for the function. For example, the Lambda function shown in Step 4: Create and Test a
Lambda Function (p. 591) has a handler that can process records in a DynamoDB stream.
You can also provide initialization code that runs one time only—after the container is created, but
before AWS Lambda executes the handler for the first time. The Lambda function shown in Step 4:
Create and Test a Lambda Function (p. 591) has initialization code that imports the SDK for JavaScript
in Node.js, and creates a client for Amazon SNS. These objects should only be defined once, outside of
the handler.
After the function executes, AWS Lambda may opt to reuse the container for subsequent invocations of
the function. In this case, your function handler might be able to reuse the resources that you defined in
your initialization code. (Note that you cannot control how long AWS Lambda will retain the container, or
whether the container will be reused at all.)
• AWS service clients should be instantiated in the initialization code, not in the handler. This will allow
AWS Lambda to reuse existing connections, for the duration of the container's lifetime.
• In general, you do not need to explicitly manage connections or implement connection pooling
because AWS Lambda manages this for you.
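The tutorial above uses Node.js, but the same pattern applies in any Lambda runtime. The following
Java sketch (the class name and topic ARN are hypothetical) shows a client created in initialization code
and then reused by the handler:
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.sns.AmazonSNS;
    import com.amazonaws.services.sns.AmazonSNSClientBuilder;

    public class NotifyHandler implements RequestHandler<Object, String> {

        // Initialization code: runs once when the container is created, so the
        // client (and its connections) can be reused across invocations.
        private static final AmazonSNS SNS = AmazonSNSClientBuilder.defaultClient();

        @Override
        public String handleRequest(Object event, Context context) {
            // Handler code: runs on every invocation and reuses the client above.
            SNS.publish("arn:aws:sns:region:accountID:wooferTopic", "hello");
            return "done";
        }
    }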
For more information, see Best Practices for Working with AWS Lambda Functions in the AWS Lambda
Developer Guide.
On-Demand Backup and Restore for DynamoDB Tables
Amazon DynamoDB provides on-demand backup capability. It allows you to create full backups of your
tables for long-term retention and archival for regulatory compliance needs. You can back up and restore
your DynamoDB table data anytime with a single click in the AWS Management Console or with a single
API call. Backup and restore actions execute with zero impact on table performance or availability.
On-demand backup and restore scales without degrading the performance or availability of your
applications. It uses a new and unique distributed technology that allows you to complete backups
in seconds regardless of table size. You can create backups that are consistent within seconds across
thousands of partitions without worrying about schedules or long-running backup processes. All backups
are cataloged, easily discoverable, and retained until explicitly deleted.
In addition, on-demand backup and restore operations don't affect performance or API latencies.
Backups are preserved regardless of table deletion. For more information, see Backup and Restore: How
It Works (p. 596).
You can create table backups using the console, the AWS Command Line Interface (AWS CLI), or the
DynamoDB API. For more information, see Backing Up a DynamoDB Table (p. 598).
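As an illustration of the API path, the following sketch uses the AWS SDK for Java to create an
on-demand backup; the table and backup names match the CLI example later in this section.
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.CreateBackupRequest;
    import com.amazonaws.services.dynamodbv2.model.CreateBackupResult;

    public class CreateBackupExample {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
            CreateBackupResult result = client.createBackup(new CreateBackupRequest()
                .withTableName("MusicCollection")
                .withBackupName("MusicCollectionBackup"));
            // The backup starts in CREATING status and becomes AVAILABLE shortly after.
            System.out.println(result.getBackupDetails().getBackupArn());
        }
    }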
For information about restoring a table from a backup, see Restoring a DynamoDB Table from a
Backup (p. 600).
Currently, the backup and restore functionality works in the same Region as the source table. DynamoDB
on-demand backups are available at no additional cost beyond the normal pricing that's associated with
backup storage size. For more information about the AWS Region availability and pricing, see Amazon
DynamoDB Pricing.
Backups
When you create an on-demand backup, a time marker of the request is cataloged. The backup is created
asynchronously by applying all changes until the time of the request to the last full table snapshot.
Backup requests are processed instantaneously and become available for restore within minutes.
Note
Each time you create an on-demand backup, the entire table data is backed up. There is no limit
to the number of on-demand backups that can be taken.
All backups in DynamoDB work without consuming any provisioned throughput on the table.
DynamoDB backups do not guarantee causal consistency across items; however, the skew between
updates in a backup is usually much less than a second.
Along with the table data, each backup also includes (and can't exclude) the table's global secondary
indexes, local secondary indexes, stream settings, and provisioned read and write capacity.
Restored table items are consistent with LSI projections and eventually consistent with GSI projections.
Note
Currently, backup and restore works only in the same AWS Region as the source table.
You can schedule periodic or future backups by using AWS Lambda functions. For more information, see
A serverless solution to schedule your Amazon DynamoDB On-Demand Backup.
If you don't want to create scheduling scripts and cleanup jobs, you can use AWS Backup to create
backup plans with schedules and retention policies for your DynamoDB tables. AWS Backup will execute
the backups and delete them when they expire.
If using the console, any backups created using AWS Backup will be listed on the Backups tab with the
Backup type set to AWS.
Note
Backups created by AWS Backup cannot be deleted from the DynamoDB console. You can use the AWS
Backup console to manage these backups.
To learn how to perform a backup, see Backing Up a DynamoDB Table (p. 598).
Restores
A table is restored without consuming any provisioned throughput on the table. The destination table
is set with the same provisioned read capacity units and write capacity units as the source table, as
recorded at the time the backup was requested. The restore process also restores the local secondary
indexes and the global secondary indexes.
You can only restore the entire table data to a new table from a backup. You can write to the restored
table only after it becomes active.
Note
You can't overwrite an existing table during a restore operation.
The time it takes you to restore a table will vary based on multiple factors, and the restore times are not
always correlated directly to the size of the table. For example, because of parallelization, it is possible
that restoring a 300 GB table could take the same amount of time as restoring a 3 GB table. Here are
some of the considerations for restore times:
• You restore backups to a new table. It can take up to 20 minutes (even if the table is empty) to
perform all the actions to create the new table and initiate the restore process.
• For tables with even data distribution across your primary keys, the restore time is proportional to
the largest single partition by item count and not the overall table size. For the largest partitions with
billions of items, a restore could take less than 10 hours.
• If your source table contains data with significant skew, the time to restore may increase. For example,
if your table’s primary key is using the month of the year for partitioning and all your data is from the
month of December, you have skewed data.
To learn how to perform a restore, see Restoring a DynamoDB Table from a Backup (p. 600).
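For the API path, a restore request looks like the following sketch (AWS SDK for Java); the backup ARN
and target table name are placeholders.
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.RestoreTableFromBackupRequest;

    public class RestoreFromBackupExample {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
            // Use a backup ARN returned by CreateBackup or ListBackups.
            // The target table must not already exist.
            client.restoreTableFromBackup(new RestoreTableFromBackupRequest()
                .withTargetTableName("MusicCollectionRestored")
                .withBackupArn("arn:aws:dynamodb:us-east-1:123456789012:table/"
                    + "MusicCollection/backup/01489602797149-73d8d5bc"));
        }
    }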
You can use IAM policies for access control. For more information, see Using IAM with DynamoDB Backup
and Restore (p. 604).
All backup and restore console and API actions are captured and recorded in AWS CloudTrail for logging,
continuous monitoring, and auditing.
Topics
• Creating a Table Backup (Console) (p. 598)
• Creating a Table Backup (AWS CLI) (p. 599)
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. You can create a backup by doing one of the following:
Note
If you create backups using the Backups section in the navigation pane, the table isn't
preselected for you. You have to manually choose the source table name for the backup.
While the backup is being created, the backup status is set to Creating. After the backup is finalized,
the backup status changes to Available.
• Create a backup with the name MusicCollectionBackup for the MusicCollection table by using the
create-backup command:
aws dynamodb create-backup --table-name MusicCollection --backup-name MusicCollectionBackup
While the backup is being created, the backup status is set to CREATING:
{
"BackupDetails": {
"BackupName": "MusicCollectionBackup",
"BackupArn": "arn:aws:dynamodb:us-east-1:123456789012:table/MusicCollection/
backup/01489602797149-73d8d5bc",
"BackupStatus": "CREATING",
"BackupCreationDateTime": 1489602797.149
}
}
After the backup is finalized, its BackupStatus should change to AVAILABLE. To confirm this, use
the describe-backup command. You can get the input value of backup-arn from the output of the
previous step or by using the list-backups command.
To keep track of your backups, you can use the list-backups command. It lists all your backups that
are in CREATING or AVAILABLE status:
The list-backups command and the describe-backup command are useful to check information
about the source table of the backup.
Topics
• Restoring a Table from a Backup (Console) (p. 600)
• Restoring a Table from a Backup (AWS CLI) (p. 602)
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. In the navigation pane on the left side of the console, choose Backups.
3. In the list of backups, choose MusicCollectionBackup.
5. Type MusicCollection as the new table name. Confirm the backup name and other backup
details. Then choose Restore table to start the restore process.
The table that is being restored is shown with the status Creating. After the restore process is
finished, the status of the MusicCollection table changes to Active.
1. Confirm the backup that you want to restore by using the list-backups command. This example
uses MusicCollectionBackup.
To get additional details for the backup, use the describe-backup command. You can get the input
backup-arn from the previous step:
2. Restore the table from the backup. In this case, the MusicCollectionBackup restores the
MusicCollection table:
To verify the restore, use the describe-table command to describe the MusicCollection table:
The table that is being restored from the backup is shown with the status Creating. After the restore
process is finished, the status of the MusicCollection table changes to Active.
Important
While a restore is in progress, do not modify or delete your IAM role policy; otherwise,
unexpected behavior can result. For example, suppose that you removed write permissions for a
table while that table is being restored. In this case, the underlying RestoreTableFromBackup
operation would not be able to write any of the restored data to the table. Note that IAM
policies involving source IP restrictions for accessing the target restore table may similarly cause
issues.
After the restore operation is complete, you can modify or delete your IAM role policy.
Topics
• Deleting a Table Backup (Console) (p. 603)
• Deleting a Table Backup (AWS CLI) (p. 604)
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. In the navigation pane on the left side of the console, choose Backups.
3. In the list of backups, choose MusicCollectionBackup.
For more information about using IAM policies in DynamoDB, see Using Identity-Based Policies (IAM
Policies) for Amazon DynamoDB (p. 719).
The following are examples of IAM policies that you can use to configure specific backup and restore
functionality in DynamoDB.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:CreateBackup",
"dynamodb:RestoreTableFromBackup",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:BatchWriteItem"
],
"Resource": "*"
}
]
}
Important
DynamoDB write permissions are necessary for restore functionality.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["dynamodb:CreateBackup"],
"Resource": "*"
},
{
"Effect": "Deny",
"Action": ["dynamodb:RestoreTableFromBackup"],
"Resource": "*"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["dynamodb:ListBackups"],
"Resource": "*"
},
{
"Effect": "Deny",
"Action": [
"dynamodb:CreateBackup",
"dynamodb:RestoreTableFromBackup"
],
"Resource": "*"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["dynamodb:ListBackups"],
"Resource": "*"
},
{
"Effect": "Deny",
"Action": ["dynamodb:DeleteBackup"],
"Resource": "*"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:DescribeBackup",
"dynamodb:RestoreTableFromBackup",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:BatchWriteItem"
],
"Resource": "*"
},
{
"Effect": "Deny",
"Action": [
"dynamodb:DeleteBackup"
],
"Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MusicCollection/
backup/01489173575360-b308cd7d"
}
]
}
Important
DynamoDB write permissions are necessary for restore functionality.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["dynamodb:CreateBackup"],
"Resource": [
"arn:aws:dynamodb:us-east-1:123456789012:table/Movies"
]
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["dynamodb:ListBackups"],
"Resource": "*"
}
]
}
Important
You cannot grant permissions for the ListBackups action on a specific table.
Point-in-time recovery helps protect your Amazon DynamoDB tables from accidental write or delete
operations. With point-in-time recovery, you don't have to worry about creating, maintaining, or
scheduling on-demand backups. For example, suppose that a test script writes accidentally to a
production DynamoDB table. With point-in-time recovery, you can restore that table to any point in time
during the last 35 days. DynamoDB maintains incremental backups of your table.
In addition, point-in-time operations don't affect performance or API latencies. For more information,
see Point-in-Time Recovery: How It Works (p. 608).
You can restore a DynamoDB table to a point in time using the console, the AWS CLI, or the DynamoDB
API. For more information, see Restoring a DynamoDB Table to a Point in Time (p. 609).
For more information about the AWS Region availability and pricing, see Amazon DynamoDB Pricing.
You can enable point-in-time recovery using the AWS Management Console, AWS Command Line
Interface (AWS CLI), or the DynamoDB API. When it's enabled, point-in-time recovery provides
continuous backups until you explicitly turn it off. For more information, see Restoring a DynamoDB
Table to a Point in Time (p. 609).
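Through the API, enabling point-in-time recovery is a single UpdateContinuousBackups call, as in the
following sketch (AWS SDK for Java; the table name is an assumption).
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.PointInTimeRecoverySpecification;
    import com.amazonaws.services.dynamodbv2.model.UpdateContinuousBackupsRequest;

    public class EnablePitrExample {
        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
            // Turn on continuous backups with point-in-time recovery for the table.
            client.updateContinuousBackups(new UpdateContinuousBackupsRequest()
                .withTableName("MusicCollection")
                .withPointInTimeRecoverySpecification(new PointInTimeRecoverySpecification()
                    .withPointInTimeRecoveryEnabled(true)));
        }
    }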
After you enable point-in-time recovery, you can restore to any point in time within
EarliestRestorableDateTime and LatestRestorableDateTime. LatestRestorableDateTime
is typically 5 minutes before the current time.
Note
The point-in-time recovery process always restores to a new table.
For EarliestRestorableDateTime, you can restore your table to any point in time during the last 35
days. The retention period is fixed at 35 days (five calendar weeks) and can't be modified. Up to four
concurrent restores (of any type) can run in a given account.
Important
If you disable point-in-time recovery and later re-enable it on a table, you reset the start time
for which you can recover that table. As a result, you can only immediately restore that table
using the LatestRestorableDateTime.
When you restore using point-in-time recovery, DynamoDB restores your table data to the state based
on the selected date and time (day:hour:minute:second) to a new table.
Along with the data, the restored table includes the source table's local and global secondary indexes,
with the provisioned read and write capacity recorded at the selected restore time.
After restoring a table, you must manually set up the following on the restored table: auto scaling
policies, IAM policies, Amazon CloudWatch metrics and alarms, tags, stream settings, and Time To Live
(TTL) settings.
The time it takes you to restore a table will vary based on multiple factors, and the point in time
restore times are not always correlated directly to the size of the table. For more information, see
Restores (p. 597).
• If you disable point-in-time recovery and later re-enable it on a table, you reset the start time for
which you can recover that table. As a result, you can only immediately restore that table using the
LatestRestorableDateTime.
• If you delete a table with point-in-time recovery enabled, a system backup is automatically created
and is retained for 35 days (at no additional cost). System backups allow you to restore the deleted
table to the state it was in just before the point of deletion. All system backups follow a standard
naming convention: table-name$DeletedTableBackup.
• You can enable point-in-time recovery on each local replica of a global table. When you restore the
table, the backup restores to an independent table that is not part of the global table. For more
information, see Global Tables: How It Works (p. 612).
• You can enable point-in-time recovery on an encrypted table.
• AWS CloudTrail logs all console and API actions for point-in-time recovery to enable logging,
continuous monitoring, and auditing. For more information, see Logging DynamoDB Operations by
Using AWS CloudTrail (p. 774).
If you want to use the AWS CLI, you need to configure it first. For more information, see Accessing
DynamoDB (p. 51).
Topics
• Restoring a Table to a Point in Time (Console) (p. 610)
• Restoring a Table to a Point in Time (AWS CLI) (p. 610)
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. In the navigation pane on the left side of the console, choose Tables.
3. In the list of tables, choose the MusicCollection table.
4. On the Backups tab of the MusicCollection table, in the Point-in-time recovery section, choose
Restore to point-in-time.
5. For the new table name, type MusicCollectionMinutesAgo.
6. To confirm the restorable time, set the Restore date and time to the Latest restore date. Then
choose Restore table to start the restore process.
Note
You can restore to any point in time within Earliest restore date and Latest restore date.
DynamoDB restores your table data to the state based on the selected date and time
(day:hour:minute:second).
The table that is being restored is shown with the status Restoring. After the restore process is
finished, the status of the MusicCollection table changes to Active.
1. Confirm that point-in-time recovery is enabled for the MusicCollection table by using the
describe-continuous-backups command.
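The check might look like the following (a sketch, using the table from this walkthrough):

aws dynamodb describe-continuous-backups \
    --table-name MusicCollection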
Continuous backups (automatically enabled on table creation) and point-in-time recovery are
enabled.
{
"ContinuousBackupsDescription": {
"PointInTimeRecoveryDescription": {
"PointInTimeRecoveryStatus": "ENABLED",
"EarliestRestorableDateTime": 1519257118.0,
"LatestRestorableDateTime": 1520018653.01
},
"ContinuousBackupsStatus": "ENABLED"
}
}
2. Restore the table to a point in time. In this case, the MusicCollection table is restored to the
LatestRestorableDateTime (~5 minutes ago).
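A sketch of such a command, assuming the restore-table-to-point-in-time operation and the target
table name from the console walkthrough above:

aws dynamodb restore-table-to-point-in-time \
    --source-table-name MusicCollection \
    --target-table-name MusicCollectionMinutesAgo \
    --use-latest-restorable-time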
Note
You can also restore to a specific point in time. To do this, run the command using the --
restore-date-time argument, and specify a time stamp. You can specify any point in
time during the last 35 days. For example, the following command restores the table to the
EarliestRestorableDateTime.
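A sketch of that variant, reusing the epoch timestamp from the describe-continuous-backups output
above (the target table name is illustrative):

aws dynamodb restore-table-to-point-in-time \
    --source-table-name MusicCollection \
    --target-table-name MusicCollectionEarliestRestorableDateTime \
    --restore-date-time 1519257118.0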
To verify the restore, use the describe-table command to describe the MusicCollection table:
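For example (a sketch, describing the restore target created above):

aws dynamodb describe-table --table-name MusicCollectionMinutesAgo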
The table that is being restored is shown with the status Creating and restore in progress as true. After
the restore process is finished, the status of the MusicCollection table changes to Active.
Important
While a restore is in progress, don't modify or delete the AWS Identity and Access Management
(IAM) policies that grant the IAM entity (for example, user, group, or role) permission to perform
the restore. Otherwise, unexpected behavior can result. For example, suppose that you removed
write permissions for a table while that table was being restored. In this case, the underlying
RestoreTableToPointInTime operation can't write any of the restored data to the table.
Note that IAM policies involving source IP restrictions for accessing the target restore table may
similarly cause issues.
You can modify or delete permissions only after the restore operation is completed.
Global Tables
Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-
master database, without having to build and maintain your own replication solution. When you create a
global table, you specify the AWS regions where you want the table to be available. DynamoDB performs
all of the necessary tasks to create identical tables in these regions, and propagate ongoing data changes
to all of them.
To illustrate one use case for a global table, suppose that you have a large customer base spread across
three geographic areas—the US east coast, the US west coast, and western Europe. Customers would
need to update their profile information while using your application. To address these requirements,
you could create three identical DynamoDB tables named CustomerProfiles, in three different AWS
regions. These three tables would be entirely separate from each other, and changes to the data in one
table would not be reflected in the other tables. Without a managed replication solution, you could write
code to replicate data changes among these tables; however, this would be a time-consuming and labor-
intensive effort.
Instead of writing your own code, you could create a global table consisting of your three region-specific
CustomerProfiles tables. DynamoDB would then automatically replicate data changes among those
tables, so that changes to CustomerProfiles data in one region would be seamlessly propagated to the
other regions. In addition, if one of the AWS regions were to become temporarily unavailable, your
customers could still access the same CustomerProfiles data in the other regions.
DynamoDB global tables are ideal for massively scaled applications, with globally dispersed users. In
such an environment, users expect very fast application performance. Global tables provide automatic
multi-master replication to AWS regions world-wide, so you can deliver low-latency data access to your
users no matter where they are located.
For information about AWS Region availability and pricing, see Amazon DynamoDB Pricing.
Topics
• Global Tables: How It Works (p. 612)
• Requirements and Best Practices (p. 614)
• Creating a Global Table (p. 616)
• Monitoring Global Tables (p. 618)
• Using IAM with Global Tables (p. 619)
A replica table (or replica, for short) is a single DynamoDB table that functions as a part of a global table.
Each replica stores the same set of data items. Any given global table can have only one replica table
per region.
In general terms, you create a global table as follows:
1. Create an ordinary DynamoDB table, with DynamoDB Streams enabled, in an AWS region.
2. Repeat step 1 for every other AWS region where you want to replicate your data.
3. Define a DynamoDB global table, based upon the tables that you have created.
The AWS Management Console automates these tasks, so you can create a global table quickly and
easily. (For detailed instructions, see Creating a Global Table (p. 616).)
The resulting DynamoDB global table consists of multiple replica tables, one per region, that DynamoDB
treats as a single unit. Every replica has the same table name and the same primary key schema. When
an application writes data to a replica table in one region, DynamoDB automatically propagates the write
to the other replica tables in the other AWS regions.
Important
Global Tables automatically creates the following attributes for every item to keep your table
data in sync:
• aws:rep:deleting
• aws:rep:updatetime
• aws:rep:updateregion
You should not alter these attributes or create attributes with the same name.
You can add replica tables to the global table, so that it can be available in additional AWS regions. (In
order to do this, the global table must be empty. In other words, none of the replica tables can have any
data in them.)
You can also remove a replica table from a global table. If you do this, then the table is completely
disassociated from the global table. This newly-independent table no longer interacts with the global
table, and data is no longer propagated to or from the global table.
With a global table, each replica table stores the same set of data items. DynamoDB does not support
partial replication of only some of the items.
An application can read and write data to any replica table. If your application only uses eventually
consistent reads, and only issues reads against one AWS region, then it will work without any
modification.
However, if your application requires strongly consistent reads, then it must perform all of its strongly
consistent reads and writes in the same region. DynamoDB does not support strongly consistent
reads across AWS regions; therefore, if you write to one region and read from another region, the read
response might include stale data that doesn't reflect the results of recently-completed writes in the
other region.
Conflicts can arise if applications update the same item in different regions at about the same time. To
ensure eventual consistency, DynamoDB global tables use a “last writer wins” reconciliation between
concurrent updates, where DynamoDB makes a best effort to determine the last writer. With this conflict
resolution mechanism, all of the replicas will agree on the latest update, and converge toward a state in
which they all have identical data.
If a region becomes isolated or degraded, DynamoDB keeps track of any writes that have been
performed, but have not yet been propagated to all of the replica tables. When the region comes back
online, DynamoDB will resume propagating any pending writes from that region to the replica tables in
other regions. It will also resume propagating writes from other replica tables to the region that is now
back online.
Topics
• Requirements for Adding a New Replica Table (p. 614)
• Best Practices and Requirements for Managing Capacity (p. 615)
To add a new replica table to a global table, each of the following conditions must be true:
• The table must have the same partition key as all of the other replicas.
• The table must have the same write capacity management settings specified.
• The table must have the same name as all of the other replicas.
• The table must have DynamoDB Streams enabled, with the stream containing both the new and the
old images of the item.
• None of the new or existing replica tables in the global table can contain any data.
If global secondary indexes are specified, then the following conditions must also be met:
• The global secondary indexes must have the same name.
• The global secondary indexes must have the same hash key and sort key (if present).
Important
Write capacity settings should be set consistently across all of your global tables’ replica tables
and matching secondary indexes. To update write capacity settings for your global table, we
strongly recommend using the DynamoDB console or the UpdateGlobalTableSettings call.
UpdateGlobalTableSettings applies changes to write capacity settings to all replica tables
and matching secondary indexes in a global table automatically. If you use the UpdateTable,
RegisterScalableTarget, or PutScalingPolicy calls, you should apply the change to
each replica table and matching secondary index individually. For more information, see Amazon
DynamoDB API Reference.
We strongly recommend enabling auto scaling to manage provisioned write capacity settings.
If you prefer to manage write capacity settings manually, you should provision equal replicated
write capacity units to all of your replica tables. You should also provision equal replicated write
capacity units to matching secondary indexes across your global table.
You must also have appropriate AWS Identity and Access Management (IAM) permissions. For
more information, see Using IAM with Global Tables (p. 619).
If you create your replica tables using the AWS Management Console, auto scaling is enabled by default
for each replica table, with default auto scaling settings for managing read capacity units and write
capacity units.
Changes to auto scaling settings for a replica table or secondary index made through the DynamoDB
console or using the UpdateGlobalTableSettings call are applied to all of the replica tables
and matching secondary indexes in the global table automatically. These changes will overwrite any
existing auto scaling settings. This ensures that provisioned write capacity settings are consistent
across the replica tables and secondary indexes in your global table. If you use the UpdateTable,
RegisterScalableTarget, or PutScalingPolicy calls, you should apply the change to each replica
table and matching secondary index individually.
Note
If auto scaling doesn't satisfy your application's capacity changes (unpredictable workload) or if
you don't want to configure its settings (target settings for minimum, maximum, or utilization
threshold), you can use on-demand mode to manage capacity for your global tables. For more
information, see On-Demand Mode (p. 16).
If you enable on-demand mode on a global table, your consumption of replicated write request
units (rWCUs) will be consistent with how rWCUs are provisioned. For example, if you perform
10 writes to a local table that is replicated in two additional Regions, you will consume 60 write
request units (10 + 10 + 10 = 30; 30 x 2 = 60).
The provisioned replicated write capacity units (rWCUs) on every replica table should be set to the
total number of rWCUs needed for application writes across all Regions multiplied by two. This will
accommodate application writes that occur in the local Region and replicated application writes coming
from other Regions. For example, if you expect 5 writes per second to your replica table in Ohio and 5
writes per second to your replica table in N.Virginia, then you should provision 20 WCUs to each replica
table (5 + 5 = 10; 10 x 2 = 20).
To update write capacity settings for your global table, we strongly recommend using the DynamoDB
console or the UpdateGlobalTableSettings call. UpdateGlobalTableSettings applies changes
to write capacity settings to all replica tables and matching secondary indexes in a global table
automatically. If you use the UpdateTable, RegisterScalableTarget, or PutScalingPolicy calls,
you should apply the change to each replica table and matching secondary index individually. For more
information, see Amazon DynamoDB API Reference.
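For example, a settings update might look like the following sketch (the capacity value is illustrative):

aws dynamodb update-global-table-settings \
    --global-table-name Music \
    --global-table-provisioned-write-capacity-units 20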
Note
To update the settings (UpdateGlobalTableSettings) for a global table in DynamoDB,
you must have the dynamodb:UpdateGlobalTable, dynamodb:DescribeLimits,
application-autoscaling:DeleteScalingPolicy, and application-
autoscaling:DeregisterScalableTarget permissions. For more information, see Using
IAM with Global Tables (p. 619).
Topics
• Creating a Global Table (Console) (p. 616)
• Creating a Global Table (AWS CLI) (p. 617)
3. For the table name, type Music. For the Primary key, type Artist. Choose Add sort key, and type
SongTitle. (Artist and SongTitle should both be strings.)
To create the table, choose Create. This table will serve as the first replica table in a new global
table, and will be the prototype for other replica tables that you add later.
4. Choose the Global Tables tab, and then choose Enable streams. Leave the View type at its default
value (New and old images).
5. Choose Add region, and then choose another region where you want to deploy another replica
table. In this case, choose US West (Oregon), and then choose Continue. This will start the table
creation process in US West (Oregon).
The console will check to ensure that there is no table with the same name in the selected region. (If
a table with the same name does exist, then you must delete the existing table before you can create
a new replica table in that region.)
The Global Table tab for the selected table (and for any other replica tables) will show that the table
is replicated in multiple regions.
6. You will now add another region, so that your global table is replicated and synchronized across the
United States and Europe. To do this, repeat Step 5, but this time specify EU (Frankfurt) instead of
US West (Oregon).
7. You should still be using the AWS Management Console in the us-east-2 (US East (Ohio)) Region. For
the Music table, choose the Items tab, and then choose Create Item. For Artist, type item_1. For
SongTitle, type Song Value 1. To write the item, choose Save.
After a short time, the item is replicated across all three regions of your global table. To verify this,
in the AWS Management Console, go to the region selector in the upper right-hand corner and
choose EU (Frankfurt). The Music table in EU (Frankfurt) should contain the new item.
1. Create a new table (Music) in US East (Ohio), with DynamoDB Streams enabled
(NEW_AND_OLD_IMAGES):
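The step 1 command might look like the following sketch, mirroring the eu-west-1 command shown in
step 4:

aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput \
        ReadCapacityUnits=10,WriteCapacityUnits=5 \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
    --region us-east-2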
2. Create an identical Music table, with the same settings, in US East (N. Virginia) by repeating the
step 1 command with --region us-east-1.
3. Create a global table (Music) consisting of replica tables in the us-east-2 and us-east-1 regions.
Note
The global table name (Music) must match the name of each of the replica tables (Music).
For more information, see Requirements and Best Practices (p. 614).
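The step 3 call might look like the following sketch:

aws dynamodb create-global-table \
    --global-table-name Music \
    --replication-group RegionName=us-east-2 RegionName=us-east-1 \
    --region us-east-2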
4. Create another table in EU (Ireland), with the same settings as those you created in Step 1 and Step
2:
aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput \
        ReadCapacityUnits=10,WriteCapacityUnits=5 \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
    --region eu-west-1
After you have done this, add this new table to the Music global table:
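A sketch of the command to do this:

aws dynamodb update-global-table \
    --global-table-name Music \
    --replica-updates 'Create={RegionName=eu-west-1}' \
    --region us-east-2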
5. To verify that replication is working, add a new item to the Music table in US East (Ohio):
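For example (a sketch):

aws dynamodb put-item \
    --table-name Music \
    --item '{"Artist": {"S": "item_1"}, "SongTitle": {"S": "Song Value 1"}}' \
    --region us-east-2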
6. Wait for a few seconds, and then check to see if the item has been successfully replicated to US East
(N. Virginia) and EU (Ireland):
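For example (a sketch, reading the same key from each of the other two regions):

aws dynamodb get-item \
    --table-name Music \
    --key '{"Artist": {"S": "item_1"}, "SongTitle": {"S": "Song Value 1"}}' \
    --region us-east-1

aws dynamodb get-item \
    --table-name Music \
    --key '{"Artist": {"S": "item_1"}, "SongTitle": {"S": "Song Value 1"}}' \
    --region eu-west-1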
You can use Amazon CloudWatch to monitor global tables. DynamoDB publishes the following metrics
for this purpose:
• ReplicationLatency—the elapsed time between when an item is written to a replica table and when
that item appears in another replica in the global table. ReplicationLatency is expressed in
milliseconds, and is emitted for every source- and destination-region pair.
During normal operation, ReplicationLatency should be fairly constant. An elevated value for
ReplicationLatency could indicate that updates from one replica are not propagating to other
replica tables in a timely manner. Over time, this could result in other replica tables "falling behind",
as they no longer receive updates consistently. In this case, you should verify that the read capacity
units (RCUs) and write capacity units (WCUs) are identical for each of the replica tables. In addition, the
WCU settings you choose should follow the recommendations in Best Practices and Requirements for
Managing Capacity (p. 615).
ReplicationLatency can increase if an AWS region becomes degraded, and you have a replica table
in that region. In this case, you can temporarily redirect your application's read and write activity to a
different AWS region.
• PendingReplicationCount—the number of item updates that are written to one replica table, but
that have not yet been written to another replica in the global table. PendingReplicationCount is
expressed in number of items, and is emitted for every source- and destination-region pair.
PendingReplicationCount can increase if an AWS region becomes degraded, and you have a
replica table in that region. In this case, you can temporarily redirect your application's read and write
activity to a different AWS region.
For more information, see DynamoDB Metrics and Dimensions (p. 757).
When you create a global table for the first time, DynamoDB automatically creates an IAM service-
linked role (AWSServiceRoleForDynamoDBReplication) for you. This role allows DynamoDB to manage
cross-region replication for global tables on your behalf.
Do not delete this service-linked role. If you do, then all of your global tables will no longer function.
(For more information about service-linked roles, see Using Service-Linked Roles in the IAM User Guide.)
To create and maintain global tables in DynamoDB, you must have the
dynamodb:CreateGlobalTable permission to access each of the following:
• The global table that you want to create.
• Each of the replica tables in the global table.
• Each new replica table that you add to the global table.
If you use an IAM policy to manage access to one replica table, then you should apply an identical policy
to all of the other replicas within that global table. This practice will help you maintain a consistent
permissions model across all of the replica tables.
By using identical IAM policies on all replicas in a global table, you can also avoid granting unintended
read and write access to your global table data. For example, consider a user who has access to only one
replica in a global table. If that user can write to this replica, then DynamoDB will propagate the write
to all of the other replica tables. In effect, the user can (indirectly) write to all of the other replicas in the
global table. This scenario can be avoided by using consistent IAM policies on all of the replica tables.
The following IAM policy grants permissions to allow the CreateGlobalTable action on all tables:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["dynamodb:CreateGlobalTable"],
"Resource": "*"
}
]
}
The following IAM policy grants permissions to allow the UpdateGlobalTableSettings action on all
tables:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:UpdateGlobalTable",
"dynamodb:DescribeLimits",
"application-autoscaling:DeleteScalingPolicy",
"application-autoscaling:DeregisterScalableTarget"
],
"Resource": "*"
}
]
}
The following IAM policy grants permissions to allow the CreateGlobalTable action on only the
Customers table and its replicas in two regions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "dynamodb:CreateGlobalTable",
"Resource": [
"arn:aws:dynamodb::123456789012:global-table/Customers",
"arn:aws:dynamodb:us-east-1:123456789012:table/Customers",
"arn:aws:dynamodb:us-west-1:123456789012:table/Customers"
]
}
]
}
You can use the DynamoDB transactional read and write APIs to manage complex business workflows
that require adding, updating, or deleting multiple items as a single, all-or-nothing operation. For
example, a video game developer can ensure that players’ profiles are updated correctly when they
exchange items in a game or make in-game purchases.
With the transaction write API, you can group multiple Put, Update, Delete, and ConditionCheck
actions and submit them as a single TransactWriteItems operation that either succeeds or fails
as a unit. The same is true for multiple Get actions, which you can group and submit as a single
TransactGetItems operation.
There is no additional cost to enable transactions for your DynamoDB tables. You pay only for the reads
or writes that are part of your transaction. DynamoDB performs two underlying reads or writes of every
item in the transaction: one to prepare the transaction and one to commit the transaction. These two
underlying read/write operations are visible in your Amazon CloudWatch metrics.
To get started with DynamoDB transactions, download the latest AWS Software Development Kit
(SDK) or the AWS Command Line Interface (AWS CLI). Then follow the DynamoDB Transactions
Example (p. 630).
The following sections provide a detailed overview of the transaction APIs and how you can use them in
DynamoDB.
Topics
• Amazon DynamoDB Transactions: How It Works (p. 622)
• Using IAM with DynamoDB Transactions (p. 627)
• DynamoDB Transactions Example (p. 630)
Topics
• TransactWriteItems API (p. 623)
• TransactGetItems API (p. 624)
• Isolation Levels for DynamoDB Transactions (p. 624)
• Transaction Conflict Handling in DynamoDB (p. 625)
• Using Transactional APIs in DynamoDB Accelerator (DAX) (p. 626)
• Capacity Management for Transactions (p. 626)
• Best Practices for Transactions (p. 627)
• Using Transactional APIs with Global Tables (p. 627)
• DynamoDB Transactions vs. the AWSLabs Transactions Client Library (p. 627)
TransactWriteItems API
TransactWriteItems is a synchronous and idempotent write operation that groups up to 25 write
actions in a single all-or-nothing operation. These actions can target up to 25 distinct items in one or
more DynamoDB tables within the same AWS account and in the same Region. The aggregate size of the
items in the transaction cannot exceed 4 MB. The actions are completed atomically so that either all of
them succeed or none of them succeeds.
A TransactWriteItems operation differs from a BatchWriteItem operation in that all the actions
it contains must be completed successfully, or no changes are made at all. With a BatchWriteItem
operation, it is possible that only some of the actions in the batch succeed while the others do not.
You cannot target the same item with multiple operations within the same transaction. For example,
you cannot perform a ConditionCheck and also an Update action on the same item in the same
transaction.
A TransactWriteItems operation can contain the following types of actions:
• Put — Initiates a PutItem operation to create a new item or replace an old item with a new item,
conditionally or without specifying any condition.
• Update — Initiates an UpdateItem operation to edit an existing item's attributes or add a new item
to the table if it does not already exist. Use this action to add, delete, or update attributes on an
existing item conditionally or without a condition.
• Delete — Initiates a DeleteItem operation to delete a single item in a table identified by its primary
key.
• ConditionCheck — Checks that an item exists or checks the condition of specific attributes of the
item.
Changes made with transactions are propagated to global secondary indexes (GSIs), DynamoDB streams,
and backups eventually, after the transaction completes successfully. Because of eventual consistency,
tables restored from an on-demand or point-in-time-recovery (PITR) backup might contain some but not
all of the changes made by a recent transaction.
Idempotency
You can optionally include a client token when you make a TransactWriteItems call to ensure that
the request is idempotent. Making your transactions idempotent helps prevent application errors if the
same operation is submitted multiple times due to a connection time-out or other connectivity issue.
• A client token is valid for 10 minutes after the request that uses it finishes. After 10 minutes, any
request that uses the same client token is treated as a new request. You should not reuse the same
client token for the same request after 10 minutes.
• If you repeat a request with the same client token within the 10-minute idempotency window but
change some other request parameter, DynamoDB returns an IdempotentParameterMismatch
exception.
For more details on how conflicts with TransactWriteItems operations are handled, see Transaction
Conflict Handling in DynamoDB (p. 625).
TransactGetItems API
TransactGetItems is a synchronous read operation that groups up to 25 Get actions together. These
actions can target up to 25 distinct items in one or more DynamoDB tables within the same AWS account
and Region. The aggregate size of the items in the transaction cannot exceed 4 MB.
The Get actions are performed atomically so that either all of them succeed or all of them fail:
• Get — Initiates a GetItem operation to retrieve a set of attributes for the item with the given primary
key. If no matching item is found, Get does not return any data.
For more details on how conflicts with TransactGetItems operations are handled, see Transaction
Conflict Handling in DynamoDB (p. 625).
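For illustration, a TransactGetItems call from the AWS CLI might look like the following sketch (the
table and key values follow the marketplace example later in this section):

aws dynamodb transact-get-items \
    --transact-items '[
        {"Get": {"TableName": "Customers", "Key": {"CustomerId": {"S": "09e8e9c8-ec48"}}}},
        {"Get": {"TableName": "ProductCatalog", "Key": {"ProductId": {"S": "product_1"}}}}
    ]'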
SERIALIZABLE
Serializable isolation ensures that the results of multiple concurrent operations are the same as if no
operation begins until the previous one has finished.
There is serializable isolation between the following types of operations:
• Between any transactional operation and any standard write operation (PutItem, UpdateItem, or
DeleteItem).
• Between any transactional operation and any standard read operation (GetItem).
• Between a TransactWriteItems operation and a TransactGetItems operation.
Although there is serializable isolation between transactional operations and each individual standard
write in a BatchWriteItem operation, there is no serializable isolation between the transaction and
the BatchWriteItem operation as a unit.
Similarly, the isolation level between a transactional operation and individual GetItems in a
BatchGetItem operation is serializable. But the isolation level between the transaction and the
BatchGetItem operation as a unit is read-committed.
READ-COMMITTED
Read-committed isolation ensures that read operations always return committed values for an item.
Read-committed isolation does not prevent modifications of the item immediately after the read
operation.
The isolation level is read-committed between any transactional operation and any read operation that
involves multiple standard reads (BatchGetItem, Query, or Scan). If a transactional write updates an
item in the middle of a BatchGetItem, Query, or Scan operation, the read operation returns the new
committed value.
Summary Table
To summarize, the following table shows the isolation levels between a transaction operation
(TransactWriteItems or TransactGetItems) and other operations.
Operation        Isolation Level
DeleteItem       Serializable
PutItem          Serializable
UpdateItem       Serializable
GetItem          Serializable
BatchGetItem     Read-committed*
Query            Read-committed
Scan             Read-committed
Levels marked with an asterisk (*) apply to the operation as a unit. However, individual actions within
those operations have a serializable isolation level.
Transaction Conflict Handling in DynamoDB
A transactional conflict can occur during concurrent item-level requests on an item within a transaction.
Note
• When a PutItem, UpdateItem, or DeleteItem request is rejected, the request fails with a
TransactionConflictException.
• If any item-level request within TransactWriteItems or TransactGetItems is rejected,
the request fails with a TransactionCanceledException.
If you are using the AWS SDK for Java, the exception contains the list of CancellationReasons,
ordered according to the list of items in the TransactItems request parameter. For other
languages, a string representation of the list is included in the exception’s error message.
• If an ongoing TransactWriteItems or TransactGetItems operation conflicts with a
concurrent GetItem request, both operations can succeed.
The TransactionConflict CloudWatch metric is incremented for each failed item-level request.
Using Transactional APIs in DynamoDB Accelerator (DAX)
TransactGetItems calls are passed through DAX without the items being cached locally. This is the
same behavior as for strongly consistent read APIs in DAX.
Capacity Management for Transactions
Plan for the additional reads and writes required by transactional APIs when you are provisioning
capacity to your tables. For example, suppose that your application executes one transaction per second,
and each transaction writes three 500-byte items in your table. Each item requires two write capacity
units (WCUs): one to prepare the transaction and one to commit the transaction. Therefore, you would
need to provision six WCUs to the table.
If you were using DynamoDB Accelerator (DAX) in the previous example, you would also use two read
capacity units (RCUs) for each item in the TransactWriteItems call. So you would need to provision
six additional RCUs to the table.
Similarly, if your application executes one read transaction per second, and each transaction reads three
500-byte items in your table, you would need to provision six read capacity units (RCUs) to the table.
Reading each item requires two RCUs: one to prepare the transaction and one to commit the transaction.
Best Practices for Transactions
Consider the following recommended practices when using DynamoDB transactions:
• Enable automatic scaling on your tables, or ensure that you have provisioned enough throughput
capacity to perform the two read or write operations for every item in your transaction.
• If you are not using an AWS provided SDK, include a ClientRequestToken attribute when you make
a TransactWriteItems call to ensure that the request is idempotent.
• Don't group operations together in a transaction if it's not necessary. For example, if a single
transaction with 10 operations can be broken up into multiple transactions without compromising
the application correctness, we recommend splitting up the transaction. Simpler transactions improve
throughput and are more likely to succeed.
• Multiple transactions updating the same items simultaneously can cause conflicts that cancel the
transactions. We recommend following DynamoDB best practices for data modeling to minimize such
conflicts.
• If a set of attributes is often updated across multiple items as part of a single transaction, consider
grouping the attributes into a single item to reduce the scope of the transaction.
• Avoid using transactions for ingesting data in bulk. For bulk writes, it is better to use
BatchWriteItem.
Using IAM with DynamoDB Transactions
Permissions for Put, Update, Delete, and Get actions are governed by the permissions used for the
underlying PutItem, UpdateItem, DeleteItem, and GetItem operations. For the ConditionCheck
action, you can use the dynamodb:ConditionCheckItem permission in IAM policies.
The following is an example of an IAM policy that you can use with DynamoDB transactions. This
policy denies the listed actions on the table04 table when they are performed as part of a transaction,
while still allowing them as standalone operations:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": [
"dynamodb:ConditionCheckItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem"
],
"Resource": [
"arn:aws:dynamodb:*:*:table/table04"
],
"Condition": {
"ForAnyValue:StringEquals": {
"dynamodb:EnclosingOperation": [
"TransactWriteItems",
"TransactGetItems"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:UpdateItem"
],
"Resource": [
"arn:aws:dynamodb:*:*:table/table04"
]
}
]
}
DynamoDB Transactions Example
As an example of a situation in which DynamoDB transactions can be useful, consider an online
marketplace application that uses the following tables:
• Customers — This table stores details about the marketplace customers. Its primary key is a
CustomerId unique identifier.
• ProductCatalog — This table stores details such as price and availability about the products for sale
in the marketplace. Its primary key is a ProductId unique identifier.
• Orders — This table stores details about orders from the marketplace. Its primary key is an OrderId
unique identifier.
Making an Order
The following code snippets illustrate how to use DynamoDB transactions to coordinate the multiple
steps that are required to create and process an order. Using a single all-or-nothing operation ensures
that if any part of the transaction fails, no actions in the transaction are executed and no changes are
made.
In this example, you set up an order from a customer whose customerId is 09e8e9c8-ec48 and then
execute it as a single transaction using the following simple order-processing workflow:
1. Determine that the customer ID is valid.
2. Make sure that the product is IN_STOCK, and update the product status to SOLD.
3. Make sure that the order does not already exist, and create the order.
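Expressed with the AWS CLI, the transaction might look like the following sketch (key and attribute
names beyond the table definitions above are illustrative):

aws dynamodb transact-write-items \
    --client-request-token order-demo-001 \
    --transact-items '[
        {"ConditionCheck": {
            "TableName": "Customers",
            "Key": {"CustomerId": {"S": "09e8e9c8-ec48"}},
            "ConditionExpression": "attribute_exists(CustomerId)"}},
        {"Update": {
            "TableName": "ProductCatalog",
            "Key": {"ProductId": {"S": "product_1"}},
            "UpdateExpression": "SET ProductStatus = :new",
            "ConditionExpression": "ProductStatus = :expected",
            "ExpressionAttributeValues": {
                ":new": {"S": "SOLD"},
                ":expected": {"S": "IN_STOCK"}}}},
        {"Put": {
            "TableName": "Orders",
            "Item": {
                "OrderId": {"S": "order_1"},
                "CustomerId": {"S": "09e8e9c8-ec48"},
                "ProductId": {"S": "product_1"},
                "OrderStatus": {"S": "CONFIRMED"}},
            "ConditionExpression": "attribute_not_exists(OrderId)"}}
    ]'

If any of the three conditions fails, the entire transaction is canceled and none of the writes take effect.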
Additional Examples
• Using transactions from DynamoDBMapper
In-Memory Acceleration with DynamoDB Accelerator (DAX)
Amazon DynamoDB is designed for scale and performance. In most cases, the DynamoDB response times
can be measured in single-digit milliseconds. However, there are certain use cases that require response
times in microseconds. For these use cases, DynamoDB Accelerator (DAX) delivers fast response times for
accessing eventually consistent data.
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory
performance for demanding applications. DAX addresses three core scenarios:
1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by
an order of magnitude, from single-digit milliseconds to microseconds.
2. DAX reduces operational and application complexity by providing a managed service that is API-
compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with
an existing application.
3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational
cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for
applications that require repeated reads for individual keys.
DAX supports server-side encryption. With encryption at rest, the data persisted by DAX on disk will
be encrypted. DAX writes data to disk as part of propagating changes from the primary node to read
replicas. For more information, see DAX Encryption at Rest (p. 712).
DAX is ideal for the following types of applications:
• Applications that require the fastest possible response time for reads. Some examples include real-
time bidding, social gaming, and trading applications. DAX delivers fast, in-memory read performance
for these use cases.
• Applications that read a small number of items more frequently than others. For example, consider an
e-commerce system that has a one-day sale on a popular product. During the sale, demand for that
product (and its data in DynamoDB) would sharply increase, compared to all of the other products. To
mitigate the impacts of a "hot" key and a non-uniform data distribution, you could offload the read
activity to a DAX cache until the one-day sale is over.
• Applications that are read-intensive, but are also cost-sensitive. With DynamoDB, you provision the
number of reads per second that your application requires. If read activity increases, you can increase
your tables' provisioned read throughput (at an additional cost). Alternatively, you can offload the
activity from your application to a DAX cluster, and reduce the amount of read capacity units you'd
need to purchase otherwise.
• Applications that require repeated reads against a large set of data. Such an application could
potentially divert database resources from other applications. For example, a long-running analysis of
regional weather data could temporarily consume all of the read capacity in a DynamoDB table, which
would negatively impact other applications that need to access the same data. With DAX, the weather
analysis could be performed against cached data instead.
DAX is not ideal for the following types of applications:
• Applications that require strongly consistent reads (or cannot tolerate eventually consistent reads).
• Applications that do not require microsecond response times for reads, or that do not need to offload
repeated read activity from underlying tables.
• Applications that are write-intensive, or that do not perform much read activity.
• Applications that are already using a different caching solution with DynamoDB, and are using their
own client-side logic for working with that caching solution.
Usage Notes
• For a list of AWS regions where DAX is available, refer to https://aws.amazon.com/dynamodb/pricing.
• DAX supports applications written in Go, Java, Node.js, Python and .NET, using AWS-provided clients
for those programming languages.
• DAX does not support Transport Layer Security (TLS).
• DAX is only available for the EC2-VPC platform. (There is no support for the EC2-Classic platform.)
• DAX clusters maintain metadata about the attribute names of items they store, and that metadata is
maintained indefinitely (even after the item has expired or been evicted from the cache). Applications
that use an unbounded number of attribute names can, over time, cause memory exhaustion in the
DAX cluster. This limitation applies only to top-level attribute names, not nested attribute names.
Examples of problematic top-level attribute names include timestamps, UUIDs, and session IDs.
Note that this limitation only applies to attribute names, not their values. Items like this are not a
problem:
{
"Id": 123,
"Title": "Bicycle 123",
"CreationDate": "2017-10-24T01:02:03+00:00"
}
But items like this are, if there are enough of them and they each have a different timestamp:
{
"Id": 123,
"Title": "Bicycle 123",
"2017-10-24T01:02:03+00:00": "created"
}
Concepts
DAX is designed to run within an Amazon Virtual Private Cloud environment (Amazon VPC). The Amazon
VPC service defines a virtual network that closely resembles a traditional data center. With an Amazon
VPC, you have control over its IP address range, subnets, routing tables, network gateways, and security
settings. You can launch a DAX cluster in your virtual network, and control access to the cluster by using
Amazon VPC security groups.
Note
If you created your AWS account after 2013-12-04, then you already have a default VPC in each
AWS region. A default VPC is ready for you to use—you can immediately start using your default
VPC without having to perform any additional configuration steps.
For more information, see Your Default VPC and Subnets in the Amazon VPC User Guide.
To create a DAX cluster, you use the AWS Management Console. Unless you specify otherwise, your DAX
cluster will run within your default VPC.
To run your application, you launch an Amazon EC2 instance into your Amazon VPC, and then deploy
your application (with the DAX client) on the EC2 instance. At runtime, the DAX client directs all of your
application's DynamoDB API requests to the DAX cluster. If DAX can process one of these API requests
directly, it does so; otherwise, it passes the request through to DynamoDB. Finally, the DAX cluster
returns the results to your application.
Your application can access DAX by specifying the endpoint for the DAX cluster. The DAX client software
works with the cluster endpoint to perform intelligent load-balancing and routing, so that incoming
requests are evenly distributed across all of the nodes in the cluster.
Read Operations
DAX can respond to the following API calls:
• GetItem
• BatchGetItem
• Query
• Scan
If the request specifies eventually consistent reads (the default behavior), DAX attempts to read the
item from its cache:
• If DAX has the item available (a cache hit), DAX returns the item to the application without accessing
DynamoDB.
• If DAX does not have the item available (a cache miss), DAX passes the request through to DynamoDB.
When it receives the response from DynamoDB, DAX returns the results to the application—but it also
writes the results to the cache on the primary node.
Note
If there are any read replicas in the cluster, DAX automatically keeps the replicas in sync with the
primary node. For more information, see Clusters (p. 638).
If the request specifies strongly consistent reads, DAX passes the request through to DynamoDB. The
results from DynamoDB are not cached in DAX; instead, they are simply returned to the application.
Write Operations
The following DAX API operations are considered "write-through":
• BatchWriteItem
• UpdateItem
• DeleteItem
• PutItem
With these operations, data is first written to the DynamoDB table, and then to the DAX cluster. The
operation is successful only if the data is successfully written to both the table and to DAX.
Other Operations
DAX does not recognize any DynamoDB operations for managing tables (such as CreateTable,
UpdateTable, and so on). If your application needs to perform these operations, it will need to access
DynamoDB directly rather than using DAX.
Item Cache
DAX maintains an item cache to store the results from GetItem and BatchGetItem operations. The
items in the cache represent eventually consistent data from DynamoDB, and are stored by their primary
key values.
When an application sends a GetItem or BatchGetItem request, DAX attempts to read the items
directly from the item cache using the specified key values. If the items are found (cache hit), DAX returns
them to the application immediately. If the items are not found (cache miss), DAX sends the request to
DynamoDB. DynamoDB processes the requests using eventually consistent reads, and returns the items
to DAX. DAX stores them in the item cache, and then returns them to the application.
The item cache has a time-to-live setting (TTL), which is 5 minutes by default. DAX assigns a timestamp
to every item that it writes to the item cache. An item expires if it has remained in the cache for longer
than the TTL setting. If you issue a GetItem request on an expired item, this is considered a cache miss,
so DAX will send the GetItem request to DynamoDB.
Note
You can specify the TTL setting for the item cache when you create a new DAX cluster. For more
information, see Managing DAX Clusters (p. 696).
DAX also maintains a least recently used list (LRU) for the item cache. The LRU list keeps track of when
an item was first written to the cache, and when the item was last read from the cache. If the item cache
becomes full, DAX will evict older items (even if they have not expired yet) to make room for new items.
The LRU algorithm is always enabled for the item cache, and is not user-configurable.
Query Cache
DAX also maintains a query cache to store the results from Query and Scan operations. The items in this
cache represent result sets from queries and scans on DynamoDB tables. These result sets are stored by
their parameter values.
When an application sends a Query or Scan request, DAX attempts to read a matching result set from
the query cache using the specified parameter values. If the result set is found (cache hit), DAX returns
it to the application immediately. If the result set is not found (cache miss), DAX sends the request to
DynamoDB. DynamoDB processes the requests using eventually consistent reads, and returns the result
set to DAX. DAX stores it in the query cache, and then returns it to the application.
Note
You can specify the TTL setting for the query cache when you create a new DAX cluster. For
more information, see Managing DAX Clusters (p. 696).
DAX also maintains a least recently used list (LRU) for the query cache. The LRU list keeps track of when
a result set was first written to the cache, and when the result was last read from the cache. If the query
cache becomes full, DAX will evict older result sets (even if they have not expired yet) to make room for
new result sets. The LRU algorithm is always enabled for the query cache, and is not user-configurable.
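Both TTL settings are managed through a DAX parameter group. A sketch of adjusting them from the
AWS CLI, assuming the record-ttl-millis and query-ttl-millis parameter names (the group name is
illustrative):

aws dax create-parameter-group \
    --parameter-group-name custom-ttl \
    --description "Ten-minute item and query TTLs"

aws dax update-parameter-group \
    --parameter-group-name custom-ttl \
    --parameter-name-values \
        "ParameterName=record-ttl-millis,ParameterValue=600000" \
        "ParameterName=query-ttl-millis,ParameterValue=600000"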
Topics
• Nodes (p. 638)
• Clusters (p. 638)
• Regions and Availability Zones (p. 639)
• Parameter Groups (p. 639)
• Security Groups (p. 639)
• Cluster ARN (p. 639)
Nodes
A node is the smallest building block of a DAX cluster. Each node runs an instance of the DAX software,
and maintains a single replica of the cached data.
You can scale your DAX cluster in either of two ways:
• By adding more nodes to the cluster. This will increase the overall read throughput of the cluster.
• By using a larger node type. Larger node types provide more capacity and can increase throughput.
(Note that you must create a new cluster with the new node type.)
Every node within a cluster is of the same node type, and runs the same DAX caching software. For a list
of available node types, see https://aws.amazon.com/dynamodb/pricing.
Clusters
A cluster is a logical grouping of one or more nodes that DAX manages as a unit. One of the nodes in the
cluster is designated as the primary node, and the other nodes (if any) are read replicas.
When changes are made to cached data on the primary node, DAX propagates the changes to all of the
read replica nodes.
However, unlike the primary node, read replicas do not write to DynamoDB.
Read replicas serve two additional purposes:
• Scalability. If you have a large number of application clients that need to access DAX concurrently, you
can add more replicas for read-scaling. DAX will spread the load evenly across all of the nodes in the
cluster. (Another way to increase throughput is to use larger cache node types.)
• High availability. In the event of a primary node failure, DAX automatically fails over to a read replica
and designates it as the new primary. If a replica node fails, other nodes in the DAX cluster will still be
able to serve requests until the failed node can be recovered. For maximum fault tolerance, you should
deploy read replicas in separate Availability Zones. This configuration ensures that your DAX cluster
can continue to function, even if an entire Availability Zone should become unavailable.
A DAX cluster supports up to ten nodes (the primary node, plus a maximum of nine read
replicas).
Important
For production usage, we strongly recommend using DAX with at least three nodes, where each
node is placed in different Availability Zones. Three nodes are required for a DAX cluster to be
fault-tolerant.
A DAX cluster can be deployed with one or two nodes for development or test workloads. One-
and two-node clusters are not fault tolerant, and we do not recommend fewer than three
nodes for production use. If a one- or two-node cluster encounters software or hardware errors,
the cluster can become unavailable or lose cached data.
Regions and Availability Zones
Each region is designed to be completely isolated from the other regions. Within each region are
multiple Availability Zones. By launching your nodes in different Availability Zones, you are able to
achieve the greatest possible fault tolerance.
Important
Do not place all of your cluster's nodes in a single Availability Zone. In this configuration, your
DAX cluster will become unavailable in case of an Availability Zone failure.
For production usage, we strongly recommend using DAX with at least three nodes, where each
node is placed in different Availability Zones. Three nodes are required for a DAX cluster to be
fault-tolerant.
A DAX cluster can be deployed with one or two nodes for development or test workloads. One-
and two-node clusters are not fault tolerant, and we do not recommend fewer than three
nodes for production use. If a one- or two-node cluster encounters software or hardware errors,
the cluster can become unavailable or lose cached data.
Parameter Groups
Parameter groups are used to manage runtime settings for DAX clusters. DAX has several parameters that
you can use to optimize performance (such as defining a TTL policy for cached data). A parameter group
is a named set of parameters that you can apply to a cluster, thereby guaranteeing that all of the nodes
in that cluster are configured in exactly the same way.
Security Groups
A DAX cluster runs in an Amazon VPC environment—a virtual network that is dedicated to your AWS
account, and is isolated from other Amazon VPCs. A security group acts as a virtual firewall for your VPC,
allowing you to control inbound and outbound network traffic.
When you launch a cluster in your VPC, you add an ingress rule to your security group to allow incoming
network traffic. The ingress rule specifies the protocol (TCP) and port number (8111) for your cluster.
After you add this ingress rule to your security group, your applications running within your VPC can
access the DAX cluster.
Cluster ARN
Every DAX cluster is assigned an Amazon Resource Name (ARN). The ARN format is:
arn:aws:dax:region:accountID:cache/clusterName
You use the cluster ARN in an IAM policy to define permissions for DAX API actions. For more
information, see Identity and Access Management in DAX (p. 745).
Cluster Endpoint
Every DAX cluster provides a cluster endpoint for use by your application. By accessing the cluster using
its endpoint, your application does not need to know the host names and port numbers of individual
nodes in the cluster. Your application automatically "knows" all of the nodes in the cluster, even if you
add or remove read replicas.
Here is an example of a cluster endpoint:
myDAXcluster.2cmrwl.clustercfg.dax.use1.cache.amazonaws.com:8111
Node Endpoints
Each of the individual nodes in a DAX cluster has its own host name and port number. Here is an
example of a node endpoint:
myDAXcluster-a.2cmrwl.clustercfg.dax.use1.cache.amazonaws.com:8111
Your application can access a node directly, using its endpoint; however, we recommend that you treat
the DAX cluster as a single unit, and access it using the cluster endpoint instead. The cluster endpoint
insulates your application from having to maintain a list of nodes, and keeping that list up to date when
you add or remove nodes from the cluster.
Subnet Groups
Access to DAX cluster nodes is restricted to applications running on Amazon EC2 instances within an
Amazon Virtual Private Cloud (Amazon VPC) environment. You can use subnet groups to grant cluster
access from Amazon EC2 instances running on specific subnets. A subnet group is a collection of subnets
(typically private) that you can designate for your clusters running in an Amazon Virtual Private Cloud
(VPC) environment.
When you create a DAX cluster, you must specify a subnet group. DAX uses that subnet group to select a
subnet and IP addresses within that subnet to associate with your nodes.
Events
DAX records significant events within your clusters, such as a failure to add a node, success in adding a
node, or changes to security groups. By monitoring key events, you can know the current state of your
clusters and, depending upon the event, be able to take corrective action. You can access these events
using the AWS Management Console, or the DescribeEvents action in the DAX management API.
You can also request that notifications be sent to a specific Amazon SNS topic, so that you will know
immediately when an event occurs in your DAX cluster.
Maintenance Window
Every cluster has a weekly maintenance window during which any system changes are applied. If you
don't specify a preferred maintenance window when you create or modify a cache cluster, DAX assigns a
60-minute maintenance window on a randomly selected day of the week.
The 60-minute maintenance window is selected at random from an 8-hour block of time per region. The
following table lists the time blocks for each region from which the default maintenance windows are
assigned.
The maintenance window should fall at the time of lowest usage and thus might need modification from
time to time. You can specify a time range of up to 24 hours in duration during which any maintenance
activities you have requested should occur.
After you have created your DAX cluster, you will be able to access it from an Amazon EC2 instance
running in the same Amazon VPC. You will then be able to use your DAX cluster with an application
program. (For more information, see Using the DAX Client in an Application (p. 655).)
Topics
• Creating an IAM service role for DAX to access DynamoDB (p. 641)
• AWS CLI (p. 643)
• AWS Management Console (p. 646)
If you are using the AWS Management Console, the workflow for creating a cluster checks for the
presence of a pre-existing DAX service role; if none is found, the console creates a new service role
for you. For more information, see Step 2: Create a DAX Cluster (p. 647) in AWS Management
Console (p. 646).
If you are using the AWS CLI, you will need to specify a DAX service role that you have created previously.
Otherwise, you will need to create a new service role beforehand. For more information, see Step 1:
Create an IAM service role for DAX to access DynamoDB (p. 643) in AWS CLI (p. 643).
If you need to create the service role yourself, you will need to add the following permissions to your
IAM policy so that your IAM user can create the service role:
• iam:CreateRole
• iam:CreatePolicy
• iam:AttachRolePolicy
• iam:PassRole
You should attach these permissions to the user that is attempting to perform the action.
Note
The iam:CreateRole, iam:CreatePolicy, iam:AttachRolePolicy, and iam:PassRole
permissions are not included in the AWS-managed policies for DynamoDB. This is by design,
because these permissions provide the possibility of privilege escalation: A user could use these
permissions to create a new administrator policy, and then attach that policy to an existing
role. For this reason, you (the administrator of your DAX cluster) must explicitly add these
permissions to your policy.
Troubleshooting
If your user policy is missing the iam:CreateRole, iam:CreatePolicy, and iam:AttachRolePolicy
permissions, you will encounter error messages. The following table shows these messages, and how to
correct the problems.
For more information about IAM policies required for DAX cluster administration, see Identity and Access
Management in DAX (p. 745).
AWS CLI
Topics
• Step 1: Create an IAM service role for DAX to access DynamoDB (p. 643)
• Step 2: Create a Subnet Group (p. 644)
• Step 3: Create a DAX Cluster (p. 645)
• Step 4: Configure Security Group Inbound Rules (p. 646)
This section describes how to create a DAX cluster using the AWS Command Line Interface (AWS CLI). If
you have not already done so, you will need to install and configure the AWS CLI. To do this, go to the
AWS Command Line Interface User Guide and follow these instructions:
Important
To manage DAX clusters with the AWS CLI, please install or upgrade to version 1.11.110 or
higher.
Note
All of the AWS CLI examples use the us-west-2 region and fictitious account IDs.
In this step, you will create an IAM policy, and then attach that policy to an IAM role. This will enable you
to assign the role to a DAX cluster so that it can perform DynamoDB operations on your behalf.
1. Create a file (for example, service-trust-policy.json) containing the following trust policy, which
allows DAX to assume the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "dax.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
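2. Create the service role, referencing the trust policy file (a sketch; the role name matches the service
role ARN shown later in this section):

aws iam create-role \
    --role-name DAXServiceRoleForDynamoDBAccess \
    --assume-role-policy-document file://service-trust-policy.json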
3. Create a file (for example, service-role-policy.json) containing the following access policy for
DynamoDB:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dynamodb:*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:dynamodb:us-west-2:accountID:*"
]
}
]
}
Replace accountID with your AWS account ID. To find your AWS account ID, go to the upper right-
hand portion of the AWS Management Console and choose your login ID. Your AWS account ID
appears in the drop-down menu. (In the ARN, accountID must be a twelve-digit number. Do not
use hyphens or any other punctuation.)
4. Create an IAM policy for the service role:
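A sketch of the command, referencing the access policy file created in the previous step:

aws iam create-policy \
    --policy-name DAXServicePolicyForDynamoDBAccess \
    --policy-document file://service-role-policy.json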
In the output, take note of the ARN for the policy you created. For example:
arn:aws:iam::123456789012:policy/DAXServicePolicyForDynamoDBAccess
5. Attach the policy to the service role:
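A sketch of the command:

aws iam attach-role-policy \
    --role-name DAXServiceRoleForDynamoDBAccess \
    --policy-arn arn:aws:iam::123456789012:policy/DAXServicePolicyForDynamoDBAccess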
Replace the policy ARN with the actual policy ARN from the previous step.
DAX is designed to run within an Amazon Virtual Private Cloud environment (Amazon VPC). If you
created your AWS account after 2013-12-04, then you already have a default VPC in each AWS region.
For more information, see Your Default VPC and Subnets in the Amazon VPC User Guide.
As part of the creation process for a DAX cluster, you must specify a subnet group. A subnet group is a
collection of one or more subnets within your VPC. When you create your DAX cluster, the nodes will be
deployed to the subnets within the subnet group.
1. To determine the identifier for your default VPC, type the following command:
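For example (a sketch; the isDefault filter limits the output to the default VPC):

aws ec2 describe-vpcs --filters Name=isDefault,Values=true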
In the output, take note of the identifier for your default VPC. For example:
vpc-12345678
2. Determine the subnet IDs associated with your default VPC:
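For example (a sketch):

aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=vpcID \
    --query "Subnets[*].SubnetId"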
Replace vpcID with your actual VPC ID. For example: vpc-12345678
In the output, take note of the subnet identifiers. For example: subnet-11111111
3. Create the subnet group. Ensure that you specify at least one subnet ID in the --subnet-ids
parameter:
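A sketch of the command (the group name is illustrative; the subnet ID is from the previous step):

aws dax create-subnet-group \
    --subnet-group-name my-subnet-group \
    --subnet-ids subnet-11111111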
1. Get the Amazon Resource Name (ARN) for your service role:
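For example (a sketch):

aws iam get-role \
    --role-name DAXServiceRoleForDynamoDBAccess \
    --query "Role.Arn" --output text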
In the output, take note of the service role ARN. For example:
arn:aws:iam::123456789012:role/DAXServiceRoleForDynamoDBAccess
2. You are now ready to create your DAX cluster:
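A sketch of the command, assuming the --subnet-group-name flag and reusing the role and subnet
group from the previous steps (the cluster name is illustrative):

aws dax create-cluster \
    --cluster-name mydaxcluster \
    --node-type dax.r3.large \
    --replication-factor 3 \
    --iam-role-arn arn:aws:iam::123456789012:role/DAXServiceRoleForDynamoDBAccess \
    --subnet-group-name my-subnet-group \
    --region us-west-2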
All of the nodes in the cluster will be of type dax.r3.large (--node-type). There will be three nodes
(--replication-factor)—one primary node and two replicas.
Step 4: Configure Security Group Inbound Rules
1. To determine the default security group identifier, type the following command:
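For example (a sketch):

aws ec2 describe-security-groups \
    --filters Name=vpc-id,Values=vpcID Name=group-name,Values=default \
    --query "SecurityGroups[*].GroupId"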
Replace vpcID with your actual VPC ID (from Step 2: Create a Subnet Group (p. 644)).
In the output, take note of the security group identifier. For example: sg-01234567
2. Add an ingress rule to the default security group, allowing inbound TCP traffic on port 8111:
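A sketch of the command, using the security group from the previous step as both the target and the
source:

aws ec2 authorize-security-group-ingress \
    --group-id sg-01234567 \
    --protocol tcp \
    --port 8111 \
    --source-group sg-01234567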
Topics
• Step 1: Create a Subnet Group (p. 646)
• Step 2: Create a DAX Cluster (p. 647)
• Step 3: Configure Security Group Inbound Rules (p. 648)
DAX is designed to run within an Amazon Virtual Private Cloud environment (Amazon VPC). If you
created your AWS account after 2013-12-04, then you already have a default VPC in each AWS region.
For more information, see Your Default VPC and Subnets in the Amazon VPC User Guide.
As part of the creation process for a DAX cluster, you must specify a subnet group. A subnet group is a
collection of one or more subnets within your VPC. When you create your DAX cluster, the nodes will be
deployed to the subnets within the subnet group.
When the settings are as you want them, choose Create subnet group.
A DAX cluster can be deployed with one or two nodes for development or test
workloads. One- and two-node clusters are not fault tolerant, and we do not
recommend fewer than three nodes for production use. If a one- or two-node
cluster encounters software or hardware errors, the cluster can become
unavailable or lose cached data.
e. Encryption—choose Enable encryption for your DAX cluster to help protect data at rest. For
more information, see DAX Encryption at Rest (p. 712).
f. IAM service role for DynamoDB access—choose Create new, and enter the following
information:
• IAM role name—type a name for an IAM role. For example: DAXServiceRole. The console will
create a new IAM role, and your DAX cluster will assume this role at runtime.
• IAM policy name—type a name for an IAM policy. For example: DAXServicePolicy. The console
will create a new IAM policy, and attach the policy to the IAM role.
• IAM role policy—choose Read/Write. This will allow the DAX cluster to perform read and
write operations in DynamoDB.
• Target DynamoDB table—choose All tables.
g. Subnet group—choose the subnet group that you created in Step 1: Create a Subnet
Group (p. 646).
h. Security Groups—choose default.
i. A separate service role that allows DAX to access Amazon EC2 is also required. DAX will
automatically create this service role for you. See Using Service-Linked Roles for DAX.
5. When the settings are as you want them, choose Launch cluster.
On the Clusters screen, your DAX cluster will be listed with a status of Creating.
Note
Creating the cluster will take several minutes. When the cluster is ready, its status changes to
Available.
In the meantime, you can proceed to Step 3: Configure Security Group Inbound Rules (p. 648)
and follow the instructions there.
Source—type default, and then choose the identifier for your default security group.
In many use cases, the way that your application uses DAX will affect the consistency of data within the
DAX cluster, as well as the consistency of data between DAX and DynamoDB.
Topics
• Consistency Among DAX Cluster Nodes (p. 649)
• DAX Item Cache Behavior (p. 649)
• DAX Query Cache Behavior (p. 651)
• Strongly Consistent and Transactional Reads (p. 652)
• Negative Caching (p. 652)
• Strategies for Writes (p. 653)
When your DAX cluster is running, it will replicate the data among all of the nodes in the cluster
(assuming that you have provisioned more than one node). Consider an application that performs a
successful UpdateItem using DAX. This causes the item cache in the primary node to be modified with
the new value; that value will then be replicated to all of the other nodes in the cluster. This replication is
eventually consistent, and usually takes less than one second to complete.
In this scenario, it is possible for two clients to read the same key from the same DAX cluster but receive
different values, depending on the node that each client accessed. The nodes will all be consistent when
the update has been fully replicated throughout all of the nodes in the cluster. (Note that this behavior is
similar to the eventually consistent nature of DynamoDB.)
If you are building an application that uses DAX, that application should be designed in such a way that it
can tolerate eventually consistent data.
Consistency of Reads
With Amazon DynamoDB, the GetItem operation performs an eventually consistent read by default. If
you use UpdateItem with the DynamoDB client, and then attempt to read the same item immediately
afterward, you might see the data as it appeared prior to the update. This is due to propagation delay
across all of the DynamoDB storage locations. Consistency is usually reached within seconds, so if you
retry the read, you will likely see the updated item.
When you use GetItem with the DAX client, the operation (in this case, an eventually consistent read)
proceeds as follows:
1. The DAX client issues a GetItem request. DAX attempts to read the requested item from the item
cache. If the item is in the cache (cache hit), DAX returns it to the application.
2. If the item is not available (cache miss), DAX performs an eventually consistent GetItem operation
against DynamoDB.
3. DynamoDB returns the requested item, and DAX stores it in the item cache.
4. DAX returns the item to the application.
5. If the DAX cluster contains more than one node, the item is replicated to all of the other
nodes in the cluster.
The item will remain in the DAX item cache, subject to the TTL setting and LRU algorithm for the cache
(see Concepts (p. 635)). However, during this period, DAX will not re-read the item from DynamoDB.
If someone else updates the item using a DynamoDB client, bypassing DAX entirely, then a GetItem
request using the DAX client will yield different results from the same GetItem request using the
DynamoDB client. In this scenario, DAX and DynamoDB will hold inconsistent values for the same key
until the TTL for the DAX item has expired.
If an application modifies data in an underlying DynamoDB table, bypassing DAX, the application will
need to anticipate and tolerate data inconsistencies that might arise.
Note
In addition to GetItem, the DAX client also supports BatchGetItem requests. BatchGetItem
is essentially a wrapper around one or more GetItem requests, so DAX treats each of these as
an individual GetItem operation.
Consistency of Writes
DAX is a write-through cache, which simplifies the process of keeping the DAX item cache consistent with
the underlying DynamoDB tables.
The DAX client supports the same write API operations as DynamoDB (PutItem, UpdateItem,
DeleteItem, BatchWriteItem, and TransactWriteItems). When you use these operations with
the DAX client, the items are modified in both DAX and DynamoDB. DAX updates the items in its item
cache, regardless of the TTL value for these items.
For example, suppose you issue a GetItem request from the DAX client to read an item from the
ProductCatalog table. (The partition key is Id; there is no sort key.) You retrieve the item whose Id is 101;
the QuantityOnHand value for that item is 42. DAX stores the item in its item cache with a specific TTL;
for this example, let us assume that the TTL is ten minutes. Three minutes later, another application
uses the DAX client to update the same item, so that its QuantityOnHand value is now 41. Assuming that
the item is not updated again, any subsequent reads of the same item during the next ten minutes will
return the cached value for QuantityOnHand (41).
DAX supports the following write operations: PutItem, UpdateItem, DeleteItem, BatchWriteItem,
and TransactWriteItems.
When you send a PutItem, UpdateItem, DeleteItem, or BatchWriteItem request to DAX, it does
the following:
1. DAX sends the request to DynamoDB.
2. DynamoDB replies to DAX, confirming that the write succeeded.
3. DAX writes the item to its item cache.
4. DAX returns a successful response to the requester.
If a write to DynamoDB fails for any reason, including throttling, then the item will not be cached in DAX
and the exception for the failure will be returned to the requester. This ensures that data is not written to
the DAX cache unless it is first written successfully to DynamoDB.
Note
Every write to DAX alters the state of the item cache; however, writes to the item cache do not
affect the query cache. (The DAX item cache and query cache serve different purposes, and
operate independently from one another.)
Consistency of Query-Update-Query
Updates to the item cache, or to the underlying DynamoDB table, do not invalidate or modify the results
stored in the query cache.
To illustrate, consider the following scenario where an application is working with a table named
DocumentRevisions, which has DocId as its partition key and RevisionNumber as its sort key.
1. A client issues a Query for DocId 101, for all items with RevisionNumber greater than or equal to 5.
DAX stores the result set in the query cache, and returns the result set to the user.
2. The client issues a PutItem request for DocId 101 with a RevisionNumber value of 20.
3. The client issues the same Query as described in step 1 (DocId 101 and RevisionNumber >= 5).
In this scenario, the cached result set for the Query issued in step 3 will be identical to the result set that
was cached in step 1. The reason is that DAX does not invalidate Query or Scan result sets based upon
updates to individual items. The PutItem operation from step 2 will only be reflected in the DAX query
cache when the TTL for the Query expires.
Your application should consider the TTL value for the query cache, and how long your application is
able to tolerate inconsistent results between the query cache and the item cache.
DAX handles TransactGetItems requests the same way it handles strongly consistent reads. DAX
passes all TransactGetItems requests to DynamoDB. When it receives a response from DynamoDB,
DAX returns the results to the client, but it does not cache the results.
Negative Caching
DAX supports negative cache entries, in both the item cache and the query cache. A negative cache entry
occurs when DAX cannot find requested items in an underlying DynamoDB table. Instead of generating
an error, DAX caches an empty result and returns that result to the user.
For example, suppose that an application sends a GetItem request to a DAX cluster, and that there is
no matching item in the DAX item cache. This will cause DAX to read the corresponding item from the
underlying DynamoDB table. If the item does not exist in DynamoDB, then DAX will store an empty item
in its item cache, and then return the empty item to the application. Now suppose the application sends
another GetItem request for the same item. DAX will find the empty item in the item cache, and return
it to the application immediately. It does not consult DynamoDB at all.
A negative cache entry will remain in the DAX item cache until its item TTL has expired, until it is
evicted by the LRU algorithm, or until the item is modified using PutItem, UpdateItem, or DeleteItem.
The DAX query cache handles negative cache results in a similar way. If an application performs a Query
or Scan, and the DAX query cache does not contain a cached result, then DAX sends the request to
DynamoDB. If there are no matching items in the result set, then DAX stores an empty result set in the
query cache, and returns the empty result set to the application. Subsequent Query or Scan requests
will yield the same (empty) result set, until the TTL for that result set has expired.
For applications that are sensitive to latency, writing through DAX incurs an extra network hop, so a write
to DAX will be a little slower than a write directly to DynamoDB. If your application is sensitive to write
latency, you can reduce the latency by writing directly to DynamoDB instead. (For more information,
see Write-Around (p. 654).)
For write-intensive applications (such as those that perform bulk data loading), it might not be desirable
to write all of the data through DAX because only a very small percentage of that data is ever read by
the application. When you write large amounts of data through DAX, it must invoke its LRU algorithm to
make room in the cache for the new items. This diminishes the effectiveness of DAX as a read
cache.
When you write an item to DAX, the item cache state is altered to accommodate the new item. (For
example, DAX might need to evict older data from the item cache to make room for the new item.) The
new item remains in the item cache, subject to the cache's LRU algorithm and the TTL setting for the
cache. As long as the item persists in the item cache, DAX will not re-read the item from DynamoDB.
Write-Through
The DAX item cache implements a write-through policy (see How DAX Processes Writes (p. 651)).
When you write an item, DAX ensures that the cached item is synchronized with the item as it exists in
DynamoDB. This is helpful for applications that need to re-read an item immediately after writing it.
However, if other applications write directly to a DynamoDB table, the item in the DAX item cache will no
longer be in sync with DynamoDB.
To illustrate, consider two users (Alice and Bob) who are working with the ProductCatalog table. Alice
accesses the table using DAX, but Bob bypasses DAX and accesses the table directly in DynamoDB.
1. Alice updates an item in the ProductCatalog table. DAX forwards the request to DynamoDB, and the
update succeeds. DAX then writes the item to its item cache, and returns a successful response to
Alice. From that point on, until the item is ultimately evicted from the cache, any user who reads the
item from DAX will see the item with Alice's update.
2. A short time later, Bob updates the same ProductCatalog item that Alice wrote—however, Bob updates
the item directly in DynamoDB. DAX does not automatically refresh its item cache in response to
updates via DynamoDB; therefore, DAX users do not see Bob's update.
3. Alice reads the item from DAX again. The item is in the item cache, so DAX returns it to Alice without
accessing the DynamoDB table.
In this scenario, Alice and Bob will see different representations of the same ProductCatalog item. This
will be the case until DAX evicts the item from the item cache, or until another user updates the same
item again using DAX.
Write-Around
If your application needs to write large quantities of data (such as a bulk data load), it might make sense
to bypass DAX and write the data directly to DynamoDB. Such a write-around strategy will reduce write
latency; however, the item cache will not remain in sync with the data in DynamoDB.
If you decide to use a write-around strategy, remember that DAX will populate its item cache whenever
applications use the DAX client to read data. This can be advantageous in some cases, because it ensures
that only the most frequently read data is cached (as opposed to the most frequently written data).
For example, consider a user (Charlie) who wants to work with a different table, GameScores,
using DAX. The partition key for GameScores is UserId, so all of Charlie's scores would have the same
UserId.
1. Charlie wants to retrieve all of his scores, so he sends a Query to DAX. Assuming that this query has
not been issued before, DAX forwards the query to DynamoDB for processing, stores the results in the
DAX query cache, and then returns the results to Charlie. The result set will remain available in the
query cache until it is evicted.
2. Now suppose that Charlie plays the Meteor Blasters game and achieves a high score. Charlie sends an
UpdateItem request to DynamoDB, modifying an item in the GameScores table.
3. Finally, Charlie decides to rerun his earlier Query to retrieve all of his data from GameScores. Charlie
does not see his high score for Meteor Blasters in the results. This is because the query results come
from the query cache, not the item cache. (The two caches are independent from one another, so a
change in one cache does not affect the other cache.)
DAX does not refresh result sets in the query cache with the most current data from DynamoDB. Each
result set in the query cache is current as of the time that the Query or Scan operation was performed.
Thus, Charlie's Query results do not reflect his UpdateItem operation. This will be the case until DAX evicts
the result set from the query cache.
• http://dax-sdk.s3-website-us-west-2.amazonaws.com
This section demonstrates how to launch an Amazon EC2 instance in your default Amazon VPC, connect
to the instance, and run a sample application. It also provides information about how to modify your
existing application so that it can use your DAX cluster.
Topics
• Tutorial: Running a Sample Application Using DAX (p. 655)
• Modifying an Existing Application to Use DAX (p. 693)
Topics
• Step 1: Launch an Amazon EC2 Instance (p. 655)
• Step 2: Create an IAM User and Policy (p. 656)
• Step 3: Configure Your Amazon EC2 Instance (p. 658)
• Step 4: Run a Sample Application (p. 658)
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. Choose Launch Instance, and do the following:
• At the top of the list of AMIs, find the Amazon Linux AMI and choose Select.
• Choose Launch.
3. In the Select an existing key pair or create a new key pair window, do one of the following:
• If you do not have an Amazon EC2 key pair, choose Create a new key pair and follow the
instructions. You are asked to download a private key file (.pem file); you need this file later when
you log in to your Amazon EC2 instance.
• If you already have an existing Amazon EC2 key pair, go to Select a key pair and choose your key
pair from the list. You must already have the private key file (.pem file) available in order to log in
to your Amazon EC2 instance.
4. When you have configured your key pair, choose Launch Instances.
5. In the console navigation pane, choose EC2 Dashboard and then choose the instance that you
launched. In the lower pane, on the Description tab, find the Public DNS for your instance, for
example: ec2-11-22-33-44.us-west-2.compute.amazonaws.com. Make a note of this
public DNS name, because you need it for the next step (Step 3: Configure Your Amazon EC2
Instance (p. 658)).
Note
It takes a few minutes for your Amazon EC2 instance to become available. In the meantime,
you can proceed to Step 2: Create an IAM User and Policy (p. 656) and follow the instructions
there.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dax:*"
],
"Effect": "Allow",
"Resource": [
"*"
]
},
{
"Action": [
"dynamodb:*"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
]
}
You need to specify your private key file (.pem file) and the public DNS name of your instance. (See
Step 1: Launch an Amazon EC2 Instance (p. 655)).
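The connection command is elided in this excerpt. On an Amazon Linux instance it is typically of the following form (my-keypair.pem and the DNS name are placeholders for your own values):
ssh -i my-keypair.pem ec2-user@ec2-11-22-33-44.us-west-2.compute.amazonaws.com
After you are connected to the instance, configure your AWS credentials: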
aws configure
After launching and configuring your Amazon EC2 instance, you can test the functionality of DAX
using one of the available sample applications. For more information, see Step 4: Run a Sample
Application (p. 658).
Topics
• DAX SDK for Go (p. 658)
• Java and DAX (p. 661)
• .NET and DAX (p. 669)
• Node.js and DAX (p. 678)
• Python and DAX (p. 686)
Note
The preceding commands set the environment variables for your current session only.
To make these settings permanent, add the commands to the ~/.bash_profile file.
c. Test that Golang is installed and running correctly:
go version
go get github.com/aws/aws-dax-go
go get github.com/aws-samples/aws-dax-go-sample
The first program creates a DynamoDB table named TryDaxGoTable. The second program writes
data to the table.
5. Run the following Golang programs:
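The run commands are elided in this excerpt. The aws-dax-go-sample repository downloaded above appears to ship a single driver program, so the invocation is likely of the following form (the -service flag is an assumption about the sample's command line interface):
go run ~/go/src/github.com/aws-samples/aws-dax-go-sample/try_dax.go -service dynamodb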
Take note of the timing information—the number of milliseconds required for the GetItem, Query,
and Scan tests.
6. In the previous step, you ran the programs against the DynamoDB endpoint. Now, run the programs
again—but this time, the GetItem, Query, and Scan operations are processed by your DAX cluster.
To determine the endpoint for your DAX cluster, choose one of the following:
• Using the DynamoDB console — Choose your DAX cluster. The cluster endpoint is shown on the
console; for example:
mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
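• Using the AWS CLI — Type the following command (the --query filter shown here is an assumption; plain aws dax describe-clusters also returns the endpoint):
aws dax describe-clusters --query "Clusters[*].ClusterDiscoveryEndpoint"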
The cluster endpoint port and address are shown in the output; for example:
{
"Port": 8111,
"Address":"mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com"
}
Now run the programs again—but this time, specify the cluster endpoint as a command line
parameter:
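A likely form of the command, reusing the assumed -service flag and adding the cluster endpoint (replace the endpoint with your own):
go run ~/go/src/github.com/aws-samples/aws-dax-go-sample/try_dax.go -service dax \
    -endpoint mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111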
Look at the rest of the output, and take note of the timing information. The elapsed times for
GetItem, Query, and Scan should be significantly lower with DAX than with DynamoDB.
7. Run the following Golang program to delete TryDaxGoTable:
2. Download the AWS SDK for Java (.zip file), and then extract it:
wget http://sdk-for-java.amazonwebservices.com/latest/aws-java-sdk.zip
unzip aws-java-sdk.zip
3. Download the latest version of the DAX Java client (.jar file):
wget http://dax-sdk.s3-website-us-west-2.amazonaws.com/java/DaxJavaClient-latest.jar
Note
The client for the DAX SDK for Java is available on Apache Maven. For more information,
see Using client as Apache Maven dependency (p. 663), later in this topic.
4. Set your CLASSPATH variable:
export SDKVERSION=sdkVersion
export CLASSPATH=.:./DaxJavaClient-latest.jar:aws-java-sdk-$SDKVERSION/lib/aws-java-
sdk-$SDKVERSION.jar:aws-java-sdk-$SDKVERSION/third-party/lib/*
Replace sdkVersion with the actual version number of the AWS SDK for Java, for example:
1.11.112.
5. Download the sample program source code (.zip file):
wget http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/samples/TryDax.zip
unzip TryDax.zip
javac TryDax*.java
java TryDax
Take note of the timing information—the number of milliseconds required for the GetItem, Query,
and Scan tests.
8. In the previous step, you ran the program against the Amazon DynamoDB endpoint. Now run the
program again, but this time the GetItem, Query, and Scan operations are processed by your DAX
cluster.
To determine the endpoint for your DAX cluster, choose one of the following:
• Using the DynamoDB console — Choose your DAX cluster. The cluster endpoint is shown on the
console; for example:
mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
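• Using the AWS CLI — Type the following command (the --query filter shown here is an assumption; plain aws dax describe-clusters also returns the endpoint):
aws dax describe-clusters --query "Clusters[*].ClusterDiscoveryEndpoint"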
The cluster endpoint port and address are shown in the output; for example:
{
"Port": 8111,
"Address":"mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com"
}
Now run the program again—but this time, specify the cluster endpoint as a command line
parameter:
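Because TryDax.java treats its first command line argument as the DAX cluster endpoint (as described in TryDax.java later in this section), the command is of the following form (replace the endpoint with your own):
java TryDax mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111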
Look at the rest of the output, and take note of the timing information. The elapsed times for
GetItem, Query, and Scan should be significantly lower with DAX than with DynamoDB.
For more information about this program, see the following sections:
1. Download and install Apache Maven. For more information, see Downloading Apache Maven and
Installing Apache Maven.
2. Add the client Maven dependency to your application's Project Object Model (POM) file:
<!--Dependency:-->
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>amazon-dax-client</artifactId>
<version>x.x.x.x</version>
</dependency>
</dependencies>
Replace x.x.x.x with the actual version number of the client. For example: 1.0.200704.0.
TryDax.java
The TryDax.java file contains the main method. If you run the program with no command line
parameters, it creates an Amazon DynamoDB client and uses that client for all API operations. If you
specify a DAX cluster endpoint on the command line, the program also creates a DAX client and uses it
for GetItem, Query, and Scan operations.
• Use the DAX client instead of the Amazon DynamoDB client (see Java and DAX (p. 661)).
• Choose a different name for the test table.
• Modify the number of items written by changing the helper.writeData parameters. The second
parameter is the number of partition keys, and the third parameter is the number of sort keys. By
default, the program uses 1–10 for partition key values, and 1–10 for sort key values, for a total of 100
items written to the table. (For more information, see TryDaxHelper.java (p. 665).)
• Modify the number of GetItem, Query, and Scan tests, and modify their parameters.
• Comment out the lines containing helper.createTable and helper.deleteTable (if you don't
want to create and delete the table each time you run the program).
Note
To run this program, you can set up Maven to use the client for the DAX SDK for Java and the
AWS SDK for Java as dependencies. For more information, see Using client as Apache Maven
dependency (p. 663).
Alternatively, you can download and include both the DAX Java client and the AWS SDK for
Java in your classpath. See Java and DAX (p. 661) for an example of setting your CLASSPATH
variable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
System.out.println("Creating table...");
helper.createTable(tableName, ddbClient);
System.out.println("Populating table...");
helper.writeData(tableName, ddbClient, 10, 10);
// GetItem
tests.getItemTest(tableName, testClient, 1, 10, 5);
// Query
tests.queryTest(tableName, testClient, 5, 2, 9, 5);
// Scan
tests.scanTest(tableName, testClient, 5);
helper.deleteTable(tableName, ddbClient);
}
TryDaxHelper.java
The getDynamoDBClient and getDaxClient methods provide DynamoDB and DAX clients. For
control plane operations (CreateTable, DeleteTable) and write operations, the program uses the
DynamoDB client. If you specify a DAX cluster endpoint, the main program creates a DAX client for
performing read operations (GetItem, Query, Scan).
The other TryDaxHelper methods (createTable, writeData, deleteTable) are for setting up and
tearing down the DynamoDB table and its data.
Note
To run this program, you can set up Maven to use the client for the DAX SDK for Java and the
AWS SDK for Java as dependencies. For more information, see Using client as Apache Maven
dependency (p. 663).
Alternatively, you can download and include both the DAX Java client and the AWS SDK for
Java in your classpath. See Java and DAX (p. 661) for an example of setting your CLASSPATH
variable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
import java.util.Arrays;
import com.amazon.dax.client.dynamodbv2.AmazonDaxClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;
import com.amazonaws.util.EC2MetadataUtils;
DynamoDB getDynamoDBClient() {
System.out.println("Creating a DynamoDB client");
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
.withRegion(region)
.build();
return new DynamoDB(client);
}
table = client.createTable(tableName,
Arrays.asList(
new KeySchemaElement("pk", KeyType.HASH), // Partition key
new KeySchemaElement("sk", KeyType.RANGE)), // Sort key
Arrays.asList(
new AttributeDefinition("pk", ScalarAttributeType.N),
new AttributeDefinition("sk", ScalarAttributeType.N)),
new ProvisionedThroughput(10L, 10L));
table.waitForActive();
System.out.println("Successfully created table. Table status: " +
table.getDescription().getTableStatus());
} catch (Exception e) {
System.err.println("Unable to create table: ");
e.printStackTrace();
}
}
try {
for (Integer ipk = 1; ipk <= pkmax; ipk++) {
System.out.println(("Writing " + skmax + " items for partition key: " +
ipk));
for (Integer isk = 1; isk <= skmax; isk++) {
table.putItem(new Item()
.withPrimaryKey("pk", ipk, "sk", isk)
.withString("someData", someData));
}
}
} catch (Exception e) {
System.err.println("Unable to write item:");
e.printStackTrace();
}
}
} catch (Exception e) {
System.err.println("Unable to delete table: ");
e.printStackTrace();
}
}
TryDaxTests.java
The TryDaxTests.java file contains methods that perform read operations against a test table in
Amazon DynamoDB. These methods are not concerned with how they access the data (using either the
DynamoDB client or the DAX client), so there is no need to modify the application logic.
Note
To run this program, you can set up Maven to use the client for the DAX SDK for Java and the
AWS SDK for Java as dependencies. For more information, see Using client as Apache Maven
dependency (p. 663).
Alternatively, you can download and include both the DAX Java client and the AWS SDK for
Java in your classpath. See Java and DAX (p. 661) for an example of setting your CLASSPATH
variable.
/**
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.QueryOutcome;
import com.amazonaws.services.dynamodbv2.document.ScanOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.QuerySpec;
void getItemTest(String tableName, DynamoDB client, int pk, int sk, int iterations) {
long startTime, endTime;
System.out.println("GetItem test - partition key " + pk + " and sort keys 1-" +
sk);
Table table = client.getTable(tableName);
void queryTest(String tableName, DynamoDB client, int pk, int sk1, int sk2, int
iterations) {
long startTime, endTime;
System.out.println("Query test - partition key " + pk + " and sort keys between " +
sk1 + " and " + sk2);
Table table = client.getTable(tableName);
startTime = System.nanoTime();
ItemCollection<QueryOutcome> items = table.query(spec);
try {
Iterator<Item> iter = items.iterator();
while (iter.hasNext()) {
iter.next();
}
} catch (Exception e) {
System.err.println("Unable to query table:");
e.printStackTrace();
}
endTime = System.nanoTime();
printTime(startTime, endTime, iterations);
}
}
mkdir dotnet
tar zxvf dotnet-sdk-N.N.N-linux-x64.tar.gz -C dotnet
Replace N.N.N with the actual version number of the .NET Core SDK, for example: 2.1.4
3. Verify the installation:
alias dotnet=$HOME/dotnet/dotnet
dotnet --version
This should print the version number of the .NET Core SDK.
Note
Instead of the version number, you might receive the following error:
error: libunwind.so.8: cannot open shared object file: No such file
or directory
To resolve the error, install the libunwind package:
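On Amazon Linux, the installation command is likely the following:
sudo yum install -y libunwind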
After you do this, you should be able to run the dotnet --version command without
any errors.
4. Create a new .NET project:
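The command is elided here. With the .NET Core CLI, creating a console project named myApp (the project name used in the later steps) looks like this:
dotnet new console -o myApp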
This requires a few minutes to perform a one-time-only setup. When it completes, run the sample
project:
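The run command matches the one used in the later steps:
dotnet run --project myApp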
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="AWSSDK.DAX.Client" Version="1.0.0" />
</ItemGroup>
</Project>
wget http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/samples/TryDax.zip
unzip TryDax.zip
7. Now run the sample programs, one at a time. For each program, copy its contents into myApp/
Program.cs, and then run the myApp project.
cp 01-CreateTable.cs myApp/Program.cs
dotnet run --project myApp
cp 02-Write-Data.cs myApp/Program.cs
dotnet run --project myApp
The first program creates a DynamoDB table named TryDaxTable. The second program writes data
to the table.
8. Now run some programs to perform GetItem, Query, and Scan operations on your DAX cluster. To
determine the endpoint for your DAX cluster, choose one of the following:
• Using the DynamoDB console — Choose your DAX cluster. The cluster endpoint is shown on the
console; for example:
mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
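• Using the AWS CLI — Type the following command (the --query filter shown here is an assumption; plain aws dax describe-clusters also returns the endpoint):
aws dax describe-clusters --query "Clusters[*].ClusterDiscoveryEndpoint"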
The cluster endpoint port and address are shown in the output; for example:
{
"Port": 8111,
"Address":"mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com"
}
Now run the following programs, specifying your cluster endpoint as a command line parameter.
(Replace the sample endpoint with your actual DAX cluster endpoint.)
cp 03-GetItem-Test.cs myApp/Program.cs
dotnet run --project myApp mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
cp 04-Query-Test.cs myApp/Program.cs
dotnet run --project myApp mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
cp 05-Scan-Test.cs myApp/Program.cs
dotnet run --project myApp mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
Take note of the timing information—the number of milliseconds required for the GetItem, Query,
and Scan tests.
9. Run the following .NET program to delete TryDaxTable:
cp 06-DeleteTable.cs myApp/Program.cs
dotnet run --project myApp
For more information about these programs, see the following sections:
01-CreateTable.cs
The 01-CreateTable.cs program creates a table (TryDaxTable). The remaining .NET programs in
this section depend on this table.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using Amazon.DynamoDBv2.Model;
using System.Collections.Generic;
using System;
using Amazon.DynamoDBv2;
namespace ClientTest
{
class Program
{
static void Main(string[] args)
{
02-Write-Data.cs
The 02-Write-Data.cs program writes test data to TryDaxTable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using Amazon.DynamoDBv2.Model;
using System.Collections.Generic;
using System;
using Amazon.DynamoDBv2;
namespace ClientTest
{
class Program
{
static void Main(string[] args)
{
}
}
03-GetItem-Test.cs
The 03-GetItem-Test.cs program performs GetItem operations on TryDaxTable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using Amazon.Runtime;
using Amazon.DAX;
using Amazon.DynamoDBv2.Model;
using System.Collections.Generic;
using System;
using Amazon.DynamoDBv2;
using Amazon;
namespace ClientTest
{
class Program
{
static void Main(string[] args)
{
};
var client = new ClusterDaxClient(clientConfig);
var pk = 1;
var sk = 10;
var iterations = 5;
{
for (var isk = 1; isk <= sk; isk++)
{
var request = new GetItemRequest()
{
TableName = tableName,
Key = new Dictionary<string, AttributeValue>() {
{"pk", new AttributeValue {N = ipk.ToString()} },
{"sk", new AttributeValue {N = isk.ToString() } }
}
};
var response = client.GetItemAsync(request).Result;
Console.WriteLine("GetItem succeeded for pk: " + ipk + ", sk: " +
isk);
}
}
04-Query-Test.cs
The 04-Query-Test.cs program performs Query operations on TryDaxTable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using Amazon.Runtime;
using Amazon.DAX;
using Amazon.DynamoDBv2.Model;
using System.Collections.Generic;
using System;
using Amazon.DynamoDBv2;
using Amazon;
namespace ClientTest
{
class Program
{
static void Main(string[] args)
{
};
var client = new ClusterDaxClient(clientConfig);
var pk = 5;
var sk1 = 2;
var sk2 = 9;
var iterations = 5;
05-Scan-Test.cs
The 05-Scan-Test.cs program performs Scan operations on TryDaxTable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using Amazon.Runtime;
using Amazon.DAX;
using Amazon.DynamoDBv2.Model;
using System.Collections.Generic;
using System;
using Amazon.DynamoDBv2;
using Amazon;
namespace ClientTest
{
class Program
{
static void Main(string[] args)
{
};
var client = new ClusterDaxClient(clientConfig);
var iterations = 5;
06-DeleteTable.cs
The 06-DeleteTable.cs program deletes TryDaxTable. Run this program after you are done testing.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using Amazon.DynamoDBv2.Model;
using System;
using Amazon.DynamoDBv2;
namespace ClientTest
{
class Program
{
static void Main(string[] args)
{
b. Activate nvm:
. ~/.nvm/nvm.sh
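The Node.js installation step appears to be elided here; with nvm, a typical command (version pinning omitted) is:
nvm install node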
wget http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/samples/TryDax.zip
unzip TryDax.zip
node 01-create-table.js
node 02-write-data.js
The first program creates a DynamoDB table named TryDaxTable. The second program writes data to
the table.
5. Run the following Node.js programs:
node 03-getitem-test.js
node 04-query-test.js
node 05-scan-test.js
Take note of the timing information—the number of milliseconds required for the GetItem, Query
and Scan tests.
6. In the previous step, you ran the programs against the DynamoDB endpoint. You will now run the
programs again, but this time the GetItem, Query and Scan operations will be processed by your
DAX cluster.
To determine the endpoint for your DAX cluster, choose one of the following:
• Using the DynamoDB console—choose your DAX cluster. The cluster endpoint is shown in the
console. For example:
mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
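• Using the AWS CLI — Type the following command (the --query filter shown here is an assumption; plain aws dax describe-clusters also returns the endpoint):
aws dax describe-clusters --query "Clusters[*].ClusterDiscoveryEndpoint"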
The cluster endpoint port and address are shown in the output. For example:
{
"Port": 8111,
"Address":"mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com"
}
Now run the programs again—but this time, specify the cluster endpoint as a command line
parameter:
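Because each program reads the endpoint from its first command line argument (see the process.argv check in the listings below), the commands are of the following form (replace the endpoint with your own):
node 03-getitem-test.js mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
node 04-query-test.js mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
node 05-scan-test.js mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111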
Look at the rest of the output, and take note of the timing information. The elapsed times for
GetItem, Query and Scan should be significantly lower with DAX than with DynamoDB.
7. Run the following Node.js program to delete TryDaxTable:
node 06-delete-table.js
For more information about these programs, see the following sections:
01-create-table.js
The 01-create-table.js program creates a table (TryDaxTable). The remaining Node.js programs in
this section depend on this table.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
const AmazonDaxClient = require('amazon-dax-client');
var AWS = require("aws-sdk");
AWS.config.update({
region: region
});
var params = {
TableName : tableName,
KeySchema: [
{ AttributeName: "pk", KeyType: "HASH"}, //Partition key
{ AttributeName: "sk", KeyType: "RANGE" } //Sort key
],
AttributeDefinitions: [
{ AttributeName: "pk", AttributeType: "N" },
{ AttributeName: "sk", AttributeType: "N" }
],
ProvisionedThroughput: {
ReadCapacityUnits: 10,
WriteCapacityUnits: 10
}
};
02-write-data.js
The 02-write-data.js program writes test data to TryDaxTable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
const AmazonDaxClient = require('amazon-dax-client');
var AWS = require("aws-sdk");
AWS.config.update({
region: region
});
//
//put item
}
}
03-getitem-test.js
The 03-getitem-test.js program performs GetItem operations on TryDaxTable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
const AmazonDaxClient = require('amazon-dax-client');
var AWS = require("aws-sdk");
AWS.config.update({
region: region
});
if (process.argv.length > 2) {
var dax = new AmazonDaxClient({endpoints: [process.argv[2]], region: region})
daxClient = new AWS.DynamoDB.DocumentClient({service: dax });
}
var pk = 1;
var sk = 10;
var iterations = 5;
var params = {
TableName: tableName,
Key:{
"pk": ipk,
"sk": isk
}
};
04-query-test.js
The 04-query-test.js program performs Query operations on TryDaxTable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
AWS.config.update({
region: region
});
if (process.argv.length > 2) {
var dax = new AmazonDaxClient({endpoints: [process.argv[2]], region: region})
daxClient = new AWS.DynamoDB.DocumentClient({service: dax });
}
var pk = 5;
var sk1 = 2;
var sk2 = 9;
var iterations = 5;
var params = {
TableName: tableName,
KeyConditionExpression: "pk = :pkval and sk between :skval1 and :skval2",
ExpressionAttributeValues: {
":pkval":pk,
":skval1":sk1,
":skval2":sk2
}
};
05-scan-test.js
The 05-scan-test.js program performs Scan operations on TryDaxTable.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
const AmazonDaxClient = require('amazon-dax-client');
var AWS = require("aws-sdk");
AWS.config.update({
region: region
});
if (process.argv.length > 2) {
var dax = new AmazonDaxClient({endpoints: [process.argv[2]], region: region})
daxClient = new AWS.DynamoDB.DocumentClient({service: dax });
}
var iterations = 5;
var params = {
TableName: tableName
};
var startTime = new Date().getTime();
for (var i = 0; i < iterations; i++) {
06-delete-table.js
The 06-delete-table.js program deletes TryDaxTable. Run this program after you are done testing.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
AWS.config.update({
region: region
});
var params = {
TableName : tableName
};
wget http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/samples/TryDax.zip
unzip TryDax.zip
python 01-create-table.py
python 02-write-data.py
The first program creates a DynamoDB table named TryDaxTable. The second program writes data
to the table.
4. Run the following Python programs:
python 03-getitem-test.py
python 04-query-test.py
python 05-scan-test.py
Take note of the timing information—the number of milliseconds required for the GetItem, Query,
and Scan tests.
5. In the previous step, you ran the programs against the Amazon DynamoDB endpoint. Now run the
programs again, but this time the GetItem, Query, and Scan operations are processed by your DAX
cluster.
To determine the endpoint for your DAX cluster, choose one of the following:
• Using the DynamoDB console — Choose your DAX cluster. The cluster endpoint is shown on the
console; for example:
mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
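• Using the AWS CLI — Type the following command (the --query filter shown here is an assumption; plain aws dax describe-clusters also returns the endpoint):
aws dax describe-clusters --query "Clusters[*].ClusterDiscoveryEndpoint"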
The cluster endpoint port and address are shown in the output; for example:
{
"Port": 8111,
"Address":"mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com"
}
Now run the programs again—but this time, specify the cluster endpoint as a command line
parameter:
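Because each program reads the endpoint from sys.argv[1] (see the listings below), the commands are of the following form (replace the endpoint with your own):
python 03-getitem-test.py mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
python 04-query-test.py mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111
python 05-scan-test.py mycluster.frfx8h.clustercfg.dax.usw2.amazonaws.com:8111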
Look at the rest of the output, and take note of the timing information. The elapsed times for
GetItem, Query, and Scan should be significantly lower with DAX than with DynamoDB.
6. Run the following Python program to delete TryDaxTable:
python 06-delete-table.py
For more information about these programs, see the following sections:
01-create-table.py
The 01-create-table.py program creates a table (TryDaxTable). The remaining Python programs in
this section depend on this table.
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
#!/usr/bin/env python3
from __future__ import print_function
import os
import amazondax
import botocore.session
session = botocore.session.get_session()
dynamodb = session.create_client('dynamodb', region_name=region) # low-level client
table_name = "TryDaxTable"
params = {
'TableName' : table_name,
'KeySchema': [
{ 'AttributeName': "pk", 'KeyType': "HASH"}, # Partition key
{ 'AttributeName': "sk", 'KeyType': "RANGE" } # Sort key
],
'AttributeDefinitions': [
{ 'AttributeName': "pk", 'AttributeType': "N" },
{ 'AttributeName': "sk", 'AttributeType': "N" }
],
'ProvisionedThroughput': {
'ReadCapacityUnits': 10,
'WriteCapacityUnits': 10
}
}
02-write-data.py
The 02-write-data.py program writes test data to TryDaxTable.
import os
import boto3
table_name = 'TryDaxTable'
some_data = 'X' * 1000
pk_max = 10
sk_max = 10
dynamodb.put_item(**params)
print(f'PutItem ({ipk}, {isk}) succeeded')
03-getitem-test.py
The 03-getitem-test.py program performs GetItem operations on TryDaxTable.
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
#!/usr/bin/env python
from __future__ import print_function
session = botocore.session.get_session()
dynamodb = session.create_client('dynamodb', region_name=region) # low-level client
table_name = "TryDaxTable"
if len(sys.argv) > 1:
endpoint = sys.argv[1]
dax = amazondax.AmazonDaxClient(session, region_name=region, endpoints=[endpoint])
client = dax
else:
client = dynamodb
pk = 10
sk = 10
iterations = 50
start = time.time()
for i in range(iterations):
for ipk in range(1, pk+1):
for isk in range(1, sk+1):
params = {
'TableName': table_name,
'Key': {
"pk": {'N': str(ipk)},
"sk": {'N': str(isk)}
}
}
result = client.get_item(**params)
print('.', end='', file=sys.stdout); sys.stdout.flush()
print()
end = time.time()
print('Total time: {} sec - Avg time: {} sec'.format(end - start, (end-start)/iterations))
04-query-test.py
The 04-query-test.py program performs Query operations on TryDaxTable.
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
#!/usr/bin/env python
from __future__ import print_function
session = botocore.session.get_session()
dynamodb = session.create_client('dynamodb', region_name=region) # low-level client
table_name = "TryDaxTable"
if len(sys.argv) > 1:
endpoint = sys.argv[1]
dax = amazondax.AmazonDaxClient(session, region_name=region, endpoints=[endpoint])
client = dax
else:
client = dynamodb
pk = 5
sk1 = 2
sk2 = 9
iterations = 5
params = {
'TableName': table_name,
'KeyConditionExpression': 'pk = :pkval and sk between :skval1 and :skval2',
'ExpressionAttributeValues': {
":pkval": {'N': str(pk)},
":skval1": {'N': str(sk1)},
":skval2": {'N': str(sk2)}
}
}
start = time.time()
for i in range(iterations):
result = client.query(**params)
end = time.time()
print('Total time: {} sec - Avg time: {} sec'.format(end - start, (end-start)/iterations))
05-scan-test.py
The 05-scan-test.py program performs Scan operations on TryDaxTable.
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
#!/usr/bin/env python
from __future__ import print_function
session = botocore.session.get_session()
table_name = "TryDaxTable"
if len(sys.argv) > 1:
endpoint = sys.argv[1]
dax = amazondax.AmazonDaxClient(session, region_name=region, endpoints=[endpoint])
client = dax
else:
client = dynamodb
iterations = 5
params = {
'TableName': table_name
}
start = time.time()
for i in range(iterations):
result = client.scan(**params)
end = time.time()
print('Total time: {} sec - Avg time: {} sec'.format(end - start, (end-start)/iterations))
06-delete-table.py
The 06-delete-table.py program deletes TryDaxTable. Run this program after you are done
testing DAX functionality.
#
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
#!/usr/bin/env python3
from __future__ import print_function
import os
import amazondax
import botocore.session
session = botocore.session.get_session()
dynamodb = session.create_client('dynamodb', region_name=region) # low-level client
table_name = "TryDaxTable"
params = {
'TableName' : table_name
}
Suppose that you have a DynamoDB table named Music. The partition key for the table is Artist, and its
sort key is SongTitle. The following program reads an item directly from the Music table:
import java.util.HashMap;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;
try {
System.out.println("Attempting to read the item...");
GetItemResult result = client.getItem(request);
System.out.println("GetItem succeeded: " + result);
} catch (Exception e) {
System.err.println("Unable to read item");
System.err.println(e.getMessage());
}
}
}
To modify the program, you replace the DynamoDB client with a DAX client:
import java.util.HashMap;
import com.amazon.dax.client.dynamodbv2.AmazonDaxClient;
import com.amazon.dax.client.dynamodbv2.ClientConfig;
import com.amazon.dax.client.dynamodbv2.ClusterDaxClient;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;
.withEndpoints("mydaxcluster.2cmrwl.clustercfg.dax.use1.cache.amazonaws.com:8111");
AmazonDaxClient client = new ClusterDaxClient(daxConfig);
/*
** ...
** Remaining code omitted (it is identical)
** ...
*/
}
}
The document interface can also be used with the low-level DAX client, as shown below:
import com.amazon.dax.client.dynamodbv2.AmazonDaxClient;
import com.amazon.dax.client.dynamodbv2.ClientConfig;
import com.amazon.dax.client.dynamodbv2.ClusterDaxClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.GetItemOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
.withEndpoints("mydaxcluster.2cmrwl.clustercfg.dax.use1.cache.amazonaws.com:8111")
.withRegion("us-east-1");
AmazonDaxClient client = new ClusterDaxClient(daxConfig);
try {
System.out.println("Attempting to read the item...");
GetItemOutcome outcome = table.getItemOutcome(
"Artist", "No One You Know",
"SongTitle", "Scared of My Shadow");
System.out.println(outcome.getItem());
System.out.println("GetItem succeeded: " + outcome);
} catch (Exception e) {
System.err.println("Unable to read item");
System.err.println(e.getMessage());
}
}
}
The following program shows how to use ClusterDaxAsyncClient, along with Java Future, to
implement a non-blocking solution.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
import java.util.HashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import com.amazon.dax.client.dynamodbv2.ClientConfig;
import com.amazon.dax.client.dynamodbv2.ClusterDaxAsyncClient;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.handlers.AsyncHandler;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsync;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;

public class TryDaxAsync {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        // Configure an asynchronous DAX client
        ClientConfig daxConfig = new ClientConfig()
            .withCredentialsProvider(new ProfileCredentialsProvider())
            .withEndpoints("mydaxcluster.2cmrwl.clustercfg.dax.use1.cache.amazonaws.com:8111");
        AmazonDynamoDBAsync client = new ClusterDaxAsyncClient(daxConfig);

        // Construct the key for the item to read
        HashMap<String, AttributeValue> key = new HashMap<String, AttributeValue>();
        key.put("Artist", new AttributeValue("No One You Know"));
        key.put("SongTitle", new AttributeValue("Scared of My Shadow"));

        GetItemRequest request = new GetItemRequest()
            .withTableName("Music")
            .withKey(key);

        // Java Futures
        Future<GetItemResult> call = client.getItemAsync(request);
        while (!call.isDone()) {
            // Do other processing while you're waiting for the response
            System.out.println("Doing something else for a few seconds...");
            Thread.sleep(3000);
        }
        // The results should be ready by now
        call.get();

        // Async callbacks
        call = client.getItemAsync(request, new AsyncHandler<GetItemRequest, GetItemResult>() {
            @Override
            public void onSuccess(GetItemRequest request, GetItemResult getItemResult) {
                System.out.println("Result: " + getItemResult);
            }
            @Override
            public void onError(Exception e) {
                System.out.println("Unable to read item");
                System.err.println(e.getMessage());
                // Callers can also test whether the exception is an instance of
                // AmazonServiceException or AmazonClientException and cast
                // it to get additional information
            }
        });
        call.get();
    }
}
Topics
• IAM Permissions for Managing a DAX Cluster (p. 696)
• Customizing DAX Cluster Settings (p. 698)
• Configuring TTL Settings (p. 700)
• Tagging Support for DAX (p. 701)
• AWS CloudTrail Integration (p. 701)
• Deleting a DAX Cluster (p. 702)
The following discussion focuses on access control for the DAX management APIs (see Amazon
DynamoDB Accelerator in the Amazon DynamoDB API Reference).
Note
For more detailed information about managing IAM permissions, see the following:
• IAM and creating DAX clusters: Creating a DAX Cluster (p. 641).
• IAM and DAX data plane operations: Identity and Access Management in DAX (p. 745).
For the DAX management APIs, you cannot scope API actions to a specific resource. The Resource
element must be set to "*". (Note that this is different from DAX data plane API operations, such as
GetItem, Query, and Scan. Data plane operations are exposed through the DAX client, and those
operations can be scoped to specific resources.)
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dax:*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:dax:us-west-2:123456789012:cache/DAXCluster01"
]
}
]
}
Suppose that the intent of this policy is to allow DAX management API calls for the cluster
DAXCluster01, and only that cluster.
Now suppose that a user issues the following AWS CLI command:
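# A representative management API call (sketch); DescribeClusters is the operation discussed below
aws dax describe-clusters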
This command will fail with a "Not Authorized" exception, because the underlying DescribeClusters
API call cannot be scoped to a specific cluster. Even though the policy is syntactically valid, the command
fails because the Resource element must be set to "*". However, if the user runs a program that
sends DAX data plane calls (such as GetItem or Query) to DAXCluster01, those calls will succeed. This is
because DAX data plane APIs can be scoped to specific resources (in this case, DAXCluster01).
If you want to write a single comprehensive IAM policy, encompassing both DAX management APIs and
DAX data plane APIs, we suggest that you include two distinct statements in the policy document. One
of these statements should address the DAX data plane APIs, while the other statement addresses the
DAX management APIs.
The following example policy shows this approach. Note how the DAXDataAPIs statement is scoped to
the DAXCluster01 resource, but the resource for DAXManagementAPIs must be "*". (The actions shown
in each statement are for illustration only; you can customize them as needed for your application.)
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DAXDataAPIs",
"Action": [
"dax:GetItem",
"dax:BatchGetItem",
"dax:Query",
"dax:Scan",
"dax:PutItem",
"dax:UpdateItem",
"dax:DeleteItem",
"dax:BatchWriteItem",
"dax:DefineAttributeList",
"dax:DefineAttributeListId",
"dax:DefineKeySchema",
"dax:Endpoints"
],
"Effect": "Allow",
"Resource": [
"arn:aws:dax:us-west-2:123456789012:cache/DAXCluster01"
]
},
{
"Sid": "DAXManagementAPIs",
"Action": [
"dax:CreateParameterGroup",
"dax:CreateSubnetGroup",
"dax:DecreaseReplicationFactor",
"dax:DeleteCluster",
"dax:DeleteParameterGroup",
"dax:DeleteSubnetGroup",
"dax:DescribeClusters",
"dax:DescribeDefaultParameters",
"dax:DescribeEvents",
"dax:DescribeParameterGroups",
"dax:DescribeParameters",
"dax:DescribeSubnetGroups",
"dax:IncreaseReplicationFactor",
"dax:ListTags",
"dax:RebootNode",
"dax:TagResource",
"dax:UntagResource",
"dax:UpdateCluster",
"dax:UpdateParameterGroup",
"dax:UpdateSubnetGroup"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
]
}
You cannot change these settings on a DAX cluster that is currently running; however, for new clusters,
you can customize the settings at creation time. To do this in the AWS Management Console, clear Use
default settings, and then modify the following settings:
• Network and Security—allows you to run individual DAX cluster nodes in different Availability Zones
(AZs) within the current AWS region. If you choose No Preference, the nodes will be distributed among
AZs automatically.
• Parameter Group—a named set of parameters that are applied to every node in the cluster. You can
use a parameter group to specify cache time-to-live (TTL) behavior.
• Maintenance Window—a weekly time period during which software upgrades and patches are applied
to the nodes in the cluster. You can choose the start day, start time, and duration of the maintenance
window. If you choose No Preference, the maintenance window will be selected at random from an 8-
hour block of time per region. (For more information, see Maintenance Window (p. 640).)
When a maintenance event occurs, DAX can notify you using Amazon Simple Notification Service
(Amazon SNS). To configure notifications, choose an option from the Topic for SNS notification selector.
You can create a new Amazon SNS topic, or use an existing topic. (For more information on setting up
and subscribing to an Amazon SNS topic, see Getting Started with Amazon Simple Notification Service in
the Amazon Simple Notification Service Developer Guide.)
Read Scaling
With read scaling, you can improve throughput by adding more read replicas to the cluster. A single DAX
cluster supports up to 9 read replicas, and you can add or remove replicas while the cluster is running.
The following AWS CLI examples show how to increase or decrease the number of nodes. The --new-
replication-factor argument specifies the total number of nodes in the cluster—one of the nodes is
the primary node, and the other nodes are read replicas.
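The commands below are a sketch, assuming a cluster named MyDAXCluster:

# Scale out: 1 primary node plus 4 read replicas
aws dax increase-replication-factor \
    --cluster-name MyDAXCluster \
    --new-replication-factor 5

# Scale in: 1 primary node plus 2 read replicas
aws dax decrease-replication-factor \
    --cluster-name MyDAXCluster \
    --new-replication-factor 3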
Note
The cluster status changes to modifying when you modify the replication factor. When the
status changes to available, the cluster is ready for use again.
Node Types
If you have a large working set of data, your application might benefit from using larger node types.
Larger nodes enable the cluster to store more data in memory, reducing cache misses and improving
overall application performance. (Note that all of the nodes in a DAX cluster must be
of the same type.)
You cannot modify the node types on a running DAX cluster. Instead, you must create a new cluster with
the desired node type. For a list of supported node types, see Nodes (p. 638).
You can create a new DAX cluster using the AWS Management Console or the AWS CLI. (For the latter,
use the --node-type parameter to specify the node type.)
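A minimal sketch of such a call (the cluster name and service role ARN are illustrative):

aws dax create-cluster \
    --cluster-name MyDAXCluster \
    --node-type dax.r4.xlarge \
    --replication-factor 3 \
    --iam-role-arn arn:aws:iam::123456789012:role/DAXServiceRole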
For more information, see Item Cache (p. 636) and Query Cache (p. 637).
The default time to live (TTL) for each of these caches is 5 minutes. If you want to use different
TTL settings, you can launch a DAX cluster using a custom parameter group. To do this in the AWS
Management Console, choose DAX | Parameter groups in the navigation pane.
You can also perform these tasks using the AWS CLI. The following example shows how to launch a
new DAX cluster using a custom parameter group. In this example, the item cache TTL will be set to 10
minutes and the query cache TTL will be set to 3 minutes.
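A sketch of the commands (the parameter group name custom-ttl is illustrative; record-ttl-millis
and query-ttl-millis are the DAX parameter names for the item cache and query cache TTLs, in
milliseconds):

aws dax create-parameter-group \
    --parameter-group-name custom-ttl

aws dax update-parameter-group \
    --parameter-group-name custom-ttl \
    --parameter-name-values \
        "ParameterName=record-ttl-millis,ParameterValue=600000" \
        "ParameterName=query-ttl-millis,ParameterValue=180000"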
You can now launch a new DAX cluster with this parameter group:
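# Sketch: launch a cluster that uses the custom parameter group (names are illustrative)
aws dax create-cluster \
    --cluster-name MyDAXCluster \
    --node-type dax.r4.xlarge \
    --replication-factor 3 \
    --iam-role-arn arn:aws:iam::123456789012:role/DAXServiceRole \
    --parameter-group-name custom-ttl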
Note
You cannot modify a parameter group that is being used by a running DAX instance.
When the settings are as you want them, choose Apply Changes.
AWS CLI
When you use the AWS CLI to manage DAX cluster tags, you must first determine the Amazon Resource
Name (ARN) for the cluster. The following example shows how to determine the ARN for a cluster named
MyDAXCluster:
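# Sketch: look up the cluster ARN (the --query path assumes the DescribeClusters response shape)
aws dax describe-clusters \
    --cluster-names MyDAXCluster \
    --query "Clusters[0].ClusterArn"

With the ARN in hand, you can then tag the cluster or list its tags, for example (the tag key and
value are illustrative):

aws dax tag-resource \
    --resource-name arn:aws:dax:us-west-2:123456789012:cache/MyDAXCluster \
    --tags Key=Owner,Value=DbTeam

aws dax list-tags \
    --resource-name arn:aws:dax:us-west-2:123456789012:cache/MyDAXCluster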
DAX is integrated with AWS CloudTrail, which records management operations performed on
cluster components such as nodes, subnet groups, and parameter groups. For more information, see
Logging DynamoDB Operations by Using AWS CloudTrail (p. 774).
You can delete a DAX cluster from the AWS Management Console or from the AWS CLI. Here is an
example:
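# Sketch: delete a cluster by name
aws dax delete-cluster --cluster-name MyDAXCluster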
A service-linked role makes setting up DAX easier because you don’t have to manually add the necessary
permissions. DAX defines the permissions of its service-linked roles, and unless defined otherwise, only
DAX can assume its roles. The defined permissions include the trust policy and the permissions policy,
and that permissions policy cannot be attached to any other IAM entity.
You can delete the roles only after first deleting their related resources. This protects your DAX resources
because you can't inadvertently remove permission to access the resources.
For information about other services that support service-linked roles, see AWS Services That Work with
IAM and look for the services that have Yes in the Service-Linked Role column. Choose a Yes with a link
to view the service-linked role documentation for that service.
The AWSServiceRoleForDAX service-linked role trusts the following services to assume the role:
• dax.amazonaws.com
The role permissions policy allows DAX to complete the following actions on the specified resources:
• Actions on ec2:
• AuthorizeSecurityGroupIngress
• CreateNetworkInterface
• CreateSecurityGroup
• DeleteNetworkInterface
• DeleteSecurityGroup
• DescribeAvailabilityZones
• DescribeNetworkInterfaces
• DescribeSecurityGroups
• DescribeSubnets
• DescribeVpcs
• ModifyNetworkInterfaceAttribute
• RevokeSecurityGroupIngress
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or
delete a service-linked role. For more information, see Service-Linked Role Permissions in the IAM User
Guide.
Add the following policy statement to the permissions for that IAM entity:
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": "*",
"Condition": {"StringLike": {"iam:AWSServiceName": "dax.amazonaws.com"}}
}
If you delete this service-linked role, and then need to create it again, you can use the same process to
recreate the role in your account. When you create a new DAX cluster, DAX creates the service-linked
role for you again.
To check whether the service-linked role has an active session in the IAM console
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In the navigation pane of the IAM console, choose Roles. Then choose the name (not the check box)
of the AWSServiceRoleForDAX role.
3. On the Summary page for the selected role, choose the Access Advisor tab.
4. On the Access Advisor tab, review recent activity for the service-linked role.
Note
If you are unsure whether DAX is using the AWSServiceRoleForDAX role, you can try to
delete the role. If the service is using the role, then the deletion fails and you can view the
regions where the role is being used. If the role is being used, then you must delete your
DAX clusters before you can delete the role. You cannot revoke the session for a service-
linked role.
If you want to remove the AWSServiceRoleForDAX role, you must first delete all of your DAX clusters.
Use the IAM console, the IAM CLI, or the IAM API to delete the AWSServiceRoleForDAX service-linked
role. For more information, see Deleting a Service-Linked Role in the IAM User Guide.
Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services
in the AWS Cloud. AWS also provides you with services that you can use securely. The effectiveness
of our security is regularly tested and verified by third-party auditors as part of the AWS compliance
programs. To learn about the compliance programs that apply to DynamoDB, see AWS Services in
Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also
responsible for other factors including the sensitivity of your data, your organization’s requirements,
and applicable laws and regulations.
This documentation will help you understand how to apply the shared responsibility model when using
DynamoDB. The following topics show you how to configure DynamoDB to meet your security and
compliance objectives. You'll also learn how to use other AWS services that can help you to monitor and
secure your DynamoDB resources.
Topics
• Data Protection in DynamoDB (p. 705)
• Identity and Access Management (p. 714)
• Logging and Monitoring (p. 755)
• Compliance Validation for DynamoDB (p. 793)
• Resilience and Disaster Recovery in Amazon DynamoDB (p. 794)
• Infrastructure Security in Amazon DynamoDB (p. 794)
• Configuration and Vulnerability Analysis in Amazon DynamoDB (p. 802)
• Security Best Practices for Amazon DynamoDB (p. 802)
DynamoDB protects user data stored at rest and also data in transit between on-premises clients and
DynamoDB, and between DynamoDB and other AWS resources within the same AWS Region.
Topics
• DynamoDB Encryption at Rest (p. 706)
• DAX Encryption at Rest (p. 712)
DynamoDB encryption at rest provides an additional layer of data protection by securing your data
in the encrypted table, including its primary key, local and global secondary indexes, streams, global
tables, backups, and DynamoDB Accelerator (DAX) clusters whenever the data is stored in durable media.
Organizational policies, industry or government regulations, and compliance requirements often require
the use of encryption at rest to increase the data security of your applications.
Encryption at rest integrates with AWS KMS for managing the encryption key that is used to encrypt
your tables.
When creating a new table, you can choose one of the following customer master keys (CMK) to encrypt
your table:
• AWS owned CMK – Default encryption type. The key is owned by DynamoDB (no additional charge).
• AWS managed CMK – The key is stored in your account and is managed by AWS KMS (AWS KMS
charges apply).
When you access an encrypted table, DynamoDB decrypts the table data transparently. You can switch
between the AWS owned CMK and AWS managed CMK at any given time. You don't have to change
any code or applications to use or manage encrypted tables. DynamoDB continues to deliver the
same single-digit millisecond latency that you have come to expect, and all DynamoDB queries work
seamlessly on your encrypted data.
For more information, see Encryption at Rest: How It Works (p. 706) and the section called “Usage
Notes” (p. 707).
You can create an encrypted table or switch the encryption keys on an existing table using the AWS
Management Console, AWS Command Line Interface (AWS CLI), or the Amazon DynamoDB API. To learn
how, see Managing Encrypted Tables (p. 708).
Note
Encryption at rest is generally available in all commercial AWS Regions and AWS GovCloud (US),
but is not currently supported in the following AWS Regions:
• China (Beijing)
• China (Ningxia)
Encryption at rest using the AWS owned CMK is offered at no additional cost. However, AWS KMS
charges apply for an AWS managed CMK. For more information about pricing, see Amazon DynamoDB
Pricing.
Encryption at rest integrates with AWS Key Management Service (AWS KMS) for managing the
encryption key that is used to encrypt your tables.
When creating a new table or switching the encryption keys on an existing table, you can choose one of
the following customer master keys (CMK):
• AWS owned CMK – Default encryption type. The key is owned by DynamoDB (no additional charge).
• AWS managed CMK – The key is stored in your account and is managed by AWS KMS (AWS KMS
charges apply).
If you use the AWS managed CMK, the following additional capabilities apply:
• You can view the CMK and its key policy. (You cannot change the key policy.)
• You can audit the encryption and decryption of your DynamoDB table by examining the DynamoDB
API calls to AWS KMS using AWS CloudTrail.
DynamoDB doesn't call AWS KMS for every single DynamoDB operation. The key is refreshed once every
5 minutes per client connection with active traffic.
Ensure that you have configured the SDK to reuse connections. Otherwise, you will experience latencies
while DynamoDB re-establishes new AWS KMS cache entries for each DynamoDB operation, and you
might incur higher AWS KMS and CloudTrail costs. For example, to do this using the Node.js SDK, you
can create a new HTTPS agent with keepAlive turned on. For more information, see Configuring
maxSockets in Node.js in the AWS SDK for JavaScript Developer Guide.
For more information about managing permissions of the AWS managed CMK, see Authorizing Use of
the Service Default Key.
You can use AWS CloudTrail and Amazon CloudWatch Logs to track the requests that DynamoDB sends
to AWS KMS on your behalf. For more information, see Monitoring DynamoDB Interaction with AWS KMS
in the AWS Key Management Service Developer Guide.
• Server-side encryption at rest is enabled on all DynamoDB table data and cannot be disabled. You
cannot encrypt only a subset of items in a table.
• DynamoDB has encrypted all existing tables that were previously unencrypted by using the AWS
owned customer master key (CMK).
• On the AWS Management Console, the encryption type is KMS when you use the AWS managed CMK
to encrypt your data. The encryption type is DEFAULT when you use the AWS owned CMK. In the
Amazon DynamoDB API, the encryption type is KMS when you use the AWS managed CMK. In the
absence of encryption type, your data is encrypted using the AWS owned CMK.
• You can switch between the AWS owned CMK (the default encryption key) and the AWS managed
CMK at any given time. You can use the console, the AWS Command Line Interface (AWS CLI), or the
Amazon DynamoDB API to switch the encryption keys.
• Encryption at rest only encrypts data while it is static (at rest) on a persistent storage media. If data
security is a concern for data in transit or data in use, you need to take additional measures:
• Data in transit: All your data in DynamoDB is encrypted in transit (except the data in DAX). By
default, communications to and from DynamoDB use the HTTPS protocol, which protects network
traffic by using Secure Sockets Layer (SSL)/Transport Layer Security (TLS) encryption.
• Data in use: Protect your data before sending it to DynamoDB using client-side encryption. For more
information, see Client-Side and Server-Side Encryption in the Amazon DynamoDB Encryption Client
Developer Guide.
• You can use streams with encrypted tables. Encryption at rest encrypts the data in DynamoDB streams.
For more information, see Capturing Table Activity with DynamoDB Streams (p. 566).
• You can use global tables with encrypted tables. Encryption at rest encrypts the data in global tables.
For more information, see Global Tables (p. 612).
• You can use backup and restore features with encrypted tables. Your backups are encrypted, and the
table that is restored from this backup also has encryption enabled. For more information, see On-
Demand Backup and Restore for DynamoDB (p. 596).
• You can enable encryption at rest for your DynamoDB Accelerator (DAX) clusters. For more
information, see DAX Encryption at Rest (p. 712).
Topics
• Creating an Encrypted Table (p. 708)
• Updating an Encryption Key (p. 710)
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. In the navigation pane on the left side of the console, choose Tables.
3. Choose Create Table. For the Table name, enter Music. For the primary key, enter Artist, and for
the sort key, enter SongTitle, both as strings.
4. In Table settings, make sure that Use default settings is not selected.
Note
If Use default settings is selected, tables are encrypted at rest with the AWS owned
customer master key (CMK) at no additional cost.
5. Under Encryption at rest, choose an encryption type:
• Default – AWS owned CMK. The key is owned by DynamoDB (no additional charge).
• KMS – AWS managed CMK. The key is stored in your account and is managed by AWS Key
Management Service (AWS KMS) (AWS KMS charges apply).
6. Choose Create to create the encrypted table. To confirm the encryption type, check the table details
on the Overview tab.
Use the AWS CLI to create a table with the default AWS owned CMK or the AWS managed CMK for
DynamoDB (aws/dynamodb).
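A sketch of such a command follows, using the Music schema from the console procedure above
(omitting --sse-specification selects the default AWS owned CMK):

aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5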
Note
This table is now encrypted using the default AWS owned CMK in the DynamoDB service
account.
To create an encrypted table with the AWS managed CMK for DynamoDB
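A sketch of the same command with the AWS managed CMK (--sse-specification Enabled=true
selects the aws/dynamodb key):

aws dynamodb create-table \
    --table-name Music \
    --attribute-definitions \
        AttributeName=Artist,AttributeType=S \
        AttributeName=SongTitle,AttributeType=S \
    --key-schema \
        AttributeName=Artist,KeyType=HASH \
        AttributeName=SongTitle,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --sse-specification Enabled=true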
The SSEDescription status of the table description is set to ENABLED and the SSEType is KMS.
"SSEDescription": {
"SSEType": "KMS",
"Status": "ENABLED",
"KMSMasterKeyArn": "arn:aws:kms:us-east-1:123456789012:key/abcd1234-abcd-1234-a123-
ab1234a1b234"
},
}
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. In the navigation pane on the left side of the console, choose Tables.
3. Choose the table that you want to update, and then choose the Overview tab.
4. Choose Manage Encryption, and then choose an encryption type:
• Default – AWS owned CMK. The key is owned by DynamoDB (no additional charge).
• KMS – AWS managed CMK. The key is stored in your account and is managed by AWS KMS (AWS
KMS charges apply).
Then choose Save to update the encrypted table. To confirm the encryption type, check the table
details under the Overview tab.
The following examples show how to update an encrypted table using the AWS CLI.
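To switch a table to the default AWS owned CMK, a sketch of the command is:

aws dynamodb update-table \
    --table-name Music \
    --sse-specification Enabled=false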
Note
This table is now encrypted using the default AWS owned CMK in the DynamoDB service
account.
To update an encrypted table with the AWS managed CMK for DynamoDB
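A sketch of the command (Enabled=true selects the AWS managed CMK, aws/dynamodb):

aws dynamodb update-table \
    --table-name Music \
    --sse-specification Enabled=true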
The SSEDescription status of the table description is set to ENABLED and the SSEType is KMS:
"SSEDescription": {
"SSEType": "KMS",
"Status": "ENABLED",
"KMSMasterKeyArn": "arn:aws:kms:us-east-1:123456789012:key/abcd1234-abcd-1234-a123-
ab1234a1b234"
},
}
With encryption at rest, the data persisted by DAX on disk is encrypted using 256-bit Advanced
Encryption Standard, also known as AES-256 encryption. DAX writes data to disk as part of propagating
changes from the primary node to read replicas.
DAX encryption at rest automatically integrates with AWS Key Management Service (AWS KMS) for
managing the single service default key that is used to encrypt your clusters. If a service default key
doesn't exist when you create your encrypted DAX cluster, AWS KMS automatically creates a new key for
you. This key is used with encrypted clusters that are created in the future. AWS KMS combines secure,
highly available hardware and software to provide a key management system scaled for the cloud.
After your data is encrypted, DAX handles the decryption of your data transparently with minimal impact
on performance. You don't need to modify your applications to use encryption.
Note
DAX does not call AWS KMS for every single DAX operation. DAX uses the key only at cluster
launch. Even if access is revoked, DAX can still access the data until the cluster is shut down.
Customer-specified AWS KMS keys are not supported.
DAX encryption at rest is available for the following cluster node types.
dax.r4.xlarge
dax.r4.2xlarge
dax.r4.4xlarge
dax.r4.8xlarge
dax.r4.16xlarge
dax.t2.medium
Important
DAX encryption at rest is not supported for dax.r3.* node types.
You cannot enable or disable encryption at rest after a cluster has been created. You must re-create the
cluster to enable encryption at rest if it was not enabled at creation.
DAX encryption at rest is offered at no additional cost (AWS KMS encryption key usage charges apply).
For information about pricing, see Amazon DynamoDB Pricing.
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. In the navigation pane on the left side of the console under DAX, choose Clusters.
3. Choose Create cluster. For the Cluster name, enter a short name for your cluster. Choose the node
type for all of the nodes in the cluster, and for the cluster size, use 3 nodes. In Encryption, make
sure that Enable encryption is selected.
4. After choosing the IAM role, subnet group, security groups, and cluster settings, choose Launch
cluster.
To confirm that the cluster is encrypted, check the cluster details under the Clusters pane. Encryption
should be ENABLED.
• An AWS Site-to-Site VPN connection. For more information, see What is AWS Site-to-Site VPN? in the
AWS Site-to-Site VPN User Guide.
• An AWS Direct Connect connection. For more information, see What is AWS Direct Connect? in the
AWS Direct Connect User Guide.
Access to DynamoDB via the network is through AWS published APIs. Clients must support Transport
Layer Security (TLS) 1.0. We recommend TLS 1.2 or above. Clients must also support cipher suites
with Perfect Forward Secrecy (PFS), such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Diffie-
Hellman Ephemeral (ECDHE). Most modern systems such as Java 7 and later support these modes.
Additionally, you must sign requests using an access key ID and a secret access key that are associated
with an IAM principal, or you can use the AWS Security Token Service (STS) to generate temporary
security credentials to sign requests.
Topics
• Identity and Access Management in Amazon DynamoDB (p. 714)
• Identity and Access Management in DAX (p. 745)
Authentication
You can access AWS as any of the following types of identities:
• AWS account root user – When you first create an AWS account, you begin with a single sign-in
identity that has complete access to all AWS services and resources in the account. This identity is
called the AWS account root user and is accessed by signing in with the email address and password
that you used to create the account. We strongly recommend that you do not use the root user for
your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the
root user only to create your first IAM user. Then securely lock away the root user credentials and use
them to perform only a few account and service management tasks.
• IAM user – An IAM user is an identity within your AWS account that has specific custom permissions
(for example, permissions to create a table in DynamoDB). You can use an IAM user name and
password to sign in to secure AWS webpages like the AWS Management Console, AWS Discussion
Forums, or the AWS Support Center.
In addition to a user name and password, you can also generate access keys for each user. You can
use these keys when you access AWS services programmatically, either through one of the several
SDKs or by using the AWS Command Line Interface (CLI). The SDK and CLI tools use the access keys
to cryptographically sign your request. If you don’t use AWS tools, you must sign the request yourself.
DynamoDB supports Signature Version 4, a protocol for authenticating inbound API requests. For more
information about authenticating requests, see Signature Version 4 Signing Process in the AWS General
Reference.
• IAM role – An IAM role is an IAM identity that you can create in your account that has specific
permissions. An IAM role is similar to an IAM user in that it is an AWS identity with permissions policies
that determine what the identity can and cannot do in AWS. However, instead of being uniquely
associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role
does not have standard long-term credentials such as a password or access keys associated with it.
Instead, when you assume a role, it provides you with temporary security credentials for your role
session. IAM roles with temporary credentials are useful in the following situations:
• Federated user access – Instead of creating an IAM user, you can use existing identities from AWS
Directory Service, your enterprise user directory, or a web identity provider. These are known as
federated users. AWS assigns a role to a federated user when access is requested through an identity
provider. For more information about federated users, see Federated Users and Roles in the IAM User
Guide.
• AWS service access – A service role is an IAM role that a service assumes to perform actions in your
account on your behalf. When you set up some AWS service environments, you must define a role
for the service to assume. This service role must include all the permissions that are required for
the service to access the AWS resources that it needs. Service roles vary from service to service, but
many allow you to choose your permissions as long as you meet the documented requirements
for that service. Service roles provide access only within your account and cannot be used to grant
access to services in other accounts. You can create, modify, and delete a service role from within
IAM. For example, you can create a role that allows Amazon Redshift to access an Amazon S3 bucket
on your behalf and then load data from that bucket into an Amazon Redshift cluster. For more
information, see Creating a Role to Delegate Permissions to an AWS Service in the IAM User Guide.
• Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials
for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This
is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance
and make it available to all of its applications, you create an instance profile that is attached to
the instance. An instance profile contains the role and enables programs that are running on the
EC2 instance to get temporary credentials. For more information, see Using an IAM Role to Grant
Permissions to Applications Running on Amazon EC2 Instances in the IAM User Guide.
Access Control
You can have valid credentials to authenticate your requests, but unless you have permissions you cannot
create or access Amazon DynamoDB resources. For example, you must have permissions to create an
Amazon DynamoDB table.
The following sections describe how to manage permissions for Amazon DynamoDB. We recommend
that you read the overview first.
When granting permissions, you decide who is getting the permissions, the resources they get
permissions for, and the specific actions that you want to allow on those resources.
Topics
• Amazon DynamoDB Resources and Operations (p. 716)
• Understanding Resource Ownership (p. 717)
• Managing Access to Resources (p. 717)
• Specifying Policy Elements: Actions, Effects, and Principals (p. 718)
• Specifying Conditions in a Policy (p. 719)
These resources and subresources have unique Amazon Resource Names (ARNs) associated with them, as
shown in the following table.
Table     arn:aws:dynamodb:region:account-id:table/table-name
Index     arn:aws:dynamodb:region:account-id:table/table-name/index/index-name
Stream    arn:aws:dynamodb:region:account-id:table/table-name/stream/stream-label
DynamoDB provides a set of operations to work with DynamoDB resources. For a list of available
operations, see Amazon DynamoDB Actions.
• If you use your AWS account root user credentials to create a table, your AWS account is the owner of
the resource (in DynamoDB, the resource is a table).
• If you create an IAM user in your AWS account and grant permissions to create a table to that user,
the user can create a table. However, your AWS account, to which the user belongs, owns the table
resource.
• If you create an IAM role in your AWS account with permissions to create a table, anyone who can
assume the role can create a table. Your AWS account, to which the role belongs, owns the table
resource.
Policies attached to an IAM identity are referred to as identity-based policies (IAM policies). Policies
attached to a resource are referred to as resource-based policies. DynamoDB supports only identity-based
policies (IAM policies).
Topics
• Identity-Based Policies (IAM Policies) (p. 717)
• Resource-Based Policies (p. 718)
You can attach policies to IAM identities. For example, you can do the following:
• Attach a permissions policy to a user or a group in your account – To grant a user permissions to
create an Amazon DynamoDB resource, such as a table, you can attach a permissions policy to a user
or group that the user belongs to.
• Attach a permissions policy to a role (grant cross-account permissions) – You can attach an
identity-based permissions policy to an IAM role to grant cross-account permissions. For example,
the administrator in account A can create a role to grant cross-account permissions to another AWS
account (for example, account B) or an AWS service as follows:
1. Account A administrator creates an IAM role and attaches a permissions policy to the role that
grants permissions on resources in account A.
2. Account A administrator attaches a trust policy to the role identifying account B as the principal
who can assume the role.
3. Account B administrator can then delegate permissions to assume the role to any users in account B.
Doing this allows users in account B to create or access resources in account A. The principal in the
trust policy can also be an AWS service principal if you want to grant an AWS service permissions to
assume the role.
For more information about using IAM to delegate permissions, see Access Management in the IAM
User Guide.
The following is an example policy that grants permissions for one DynamoDB action
(dynamodb:ListTables). The wildcard character (*) in the Resource value means that you can use this
action to obtain the names of all the tables owned by the AWS account in the current AWS Region.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListTables",
"Effect": "Allow",
"Action": [
"dynamodb:ListTables"
],
"Resource": "*"
}
]
}
For more information about using identity-based policies with DynamoDB, see Using Identity-Based
Policies (IAM Policies) for Amazon DynamoDB (p. 719). For more information about users, groups, roles,
and permissions, see Identities (Users, Groups, and Roles) in the IAM User Guide.
Resource-Based Policies
Other services, such as Amazon S3, also support resource-based permissions policies. For example, you
can attach a policy to an S3 bucket to manage access permissions to that bucket. DynamoDB doesn't
support resource-based policies.
• Resource – You use an Amazon Resource Name (ARN) to identify the resource that the policy applies
to. For more information, see Amazon DynamoDB Resources and Operations (p. 716).
• Action – You use action keywords to identify resource operations that you want to allow or deny. For
example, dynamodb:Query allows the user permissions to perform the DynamoDB Query operation.
• Effect – You specify the effect, either allow or deny, when the user requests the specific action. If you
don't explicitly grant access to (allow) a resource, access is implicitly denied. You can also explicitly
deny access to a resource, which you might do to make sure that a user cannot access it, even if a
different policy grants access.
• Principal – In identity-based policies (IAM policies), the user that the policy is attached to is the
implicit principal. For resource-based policies, you specify the user, account, service, or other entity
that you want to receive permissions (applies to resource-based policies only). DynamoDB doesn't
support resource-based policies.
To learn more about IAM policy syntax and descriptions, see AWS IAM Policy Reference in the IAM User
Guide.
For a list showing all of the Amazon DynamoDB API operations and the resources that they apply to, see
DynamoDB API Permissions: Actions, Resources, and Conditions Reference (p. 726).
To express conditions, you use predefined condition keys. There are AWS-wide condition keys and
DynamoDB–specific keys that you can use as appropriate. For a complete list of AWS-wide keys, see
Available Keys for Conditions in the IAM User Guide. For a complete list of DynamoDB–specific keys, see
Using IAM Policy Conditions for Fine-Grained Access Control (p. 730).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DescribeQueryScanBooksTable",
"Effect": "Allow",
"Action": [
"dynamodb:DescribeTable",
"dynamodb:Query",
"dynamodb:Scan"
],
"Resource": "arn:aws:dynamodb:us-west-2:account-id:table/Books"
}
]
}
The policy has one statement that grants permissions for three DynamoDB actions
(dynamodb:DescribeTable, dynamodb:Query, and dynamodb:Scan) on a table in the us-west-2
Region, which is owned by the AWS account specified by account-id. The Amazon Resource Name
(ARN) in the Resource value specifies the table to which the permissions apply.
If you create an IAM policy that is more restrictive than the minimum required permissions, the console
won't function as intended for users with that IAM policy. To ensure that those users can still use the
DynamoDB console, also attach the AmazonDynamoDBReadOnlyAccess managed policy to the user, as
described in AWS Managed (Predefined) Policies for Amazon DynamoDB (p. 720).
You don't need to allow minimum console permissions for users that are making calls only to the AWS
CLI or the Amazon DynamoDB API.
The following AWS managed policies, which you can attach to users in your account, are specific to
DynamoDB and are grouped by use case scenario:
• AmazonDynamoDBReadOnlyAccess – Grants read-only access to DynamoDB resources.
• AmazonDynamoDBFullAccess – Grants full access to DynamoDB resources.
Note
You can review these permissions policies by signing in to the IAM console and searching for
specific policies there.
You can also create your own custom IAM policies to allow permissions for DynamoDB actions
and resources. You can attach these custom policies to the IAM users or groups that require those
permissions.
Examples
• Example 1: Allow a User to Perform Any DynamoDB Actions on a Table (p. 721)
• Example 2: Allow Read-Only Access on Items in a Table (p. 721)
• Example 3: Allow Put, Update, and Delete Operations on a Specific Table (p. 721)
• Example 4: Allow Access to a Specific Table and All of Its Indexes (p. 722)
• Example 5: Set Up Permissions Policies for Separate Test and Production Environments (p. 722)
• Example 6: Prevent a User from Purchasing Reserved Capacity Offerings (p. 724)
• Example 7: Allow Read Access for a DynamoDB Stream Only (Not for the Table) (p. 724)
• Example 8: Allow an AWS Lambda Function to Process DynamoDB Stream Records (p. 725)
The following permissions policy grants permissions for all DynamoDB actions on a table. The ARN value
specified in the Resource identifies a table in a specific Region.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllAPIActionsOnBooks",
"Effect": "Allow",
"Action": "dynamodb:*",
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
Note
If you replace the table name in the resource ARN (Books) with a wildcard character (*), you
allow any DynamoDB actions on all tables in the account. Carefully consider the security
implications if you decide to do this.
The following permissions policy grants permissions for the GetItem and BatchGetItem DynamoDB
actions only and thereby sets read-only access to a table.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadOnlyAPIActionsOnBooks",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
The following permissions policy grants permissions for the PutItem, UpdateItem, and DeleteItem
actions on a specific DynamoDB table.
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PutUpdateDeleteOnBooks",
"Effect": "Allow",
"Action": [
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
The following permissions policy grants permissions for all of the DynamoDB actions on a table (Books)
and all of the table's indexes. For more information about how indexes work, see Improving Data Access
with Secondary Indexes (p. 493).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AccessAllIndexesOnBooks",
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/Books",
"arn:aws:dynamodb:us-west-2:123456789012:table/Books/index/*"
]
}
]
}
Example 5: Set Up Permissions Policies for Separate Test and Production Environments
Suppose that you have separate test and production environments where each environment maintains
its own version of a table named ProductCatalog. If you create these ProductCatalog tables
from the same AWS account, testing work might affect the production environment because of the
way that permissions are set up. (For example, the limits on concurrent create and delete actions are
set at the AWS account level.) As a result, each action in the test environment reduces the number of
actions that are available in your production environment. There is also a risk that the code in your test
environment might accidentally access tables in the production environment. To prevent these issues,
consider creating separate AWS accounts for your production and test environments.
Suppose further that you have two developers, Bob and Alice, who are testing the ProductCatalog
table. Instead of creating a separate AWS account for every developer, your developers can share the
same test account. In this test account, you can create a copy of the same table for each developer to
work on, such as Alice_ProductCatalog and Bob_ProductCatalog. In this case, you can create IAM
users Alice and Bob in the AWS account that you created for the test environment. You can then grant
permissions to these users to perform DynamoDB actions on the tables that they own.
• Create a separate policy for each user and then attach each policy to its user separately. For example,
you can attach the following policy to user Alice to allow her access to all DynamoDB actions on the
Alice_ProductCatalog table:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllAPIActionsOnAliceTable",
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/
Alice_ProductCatalog"
}
]
}
Then, you can create a similar policy with a different resource (Bob_ProductCatalog table) for user
Bob.
• Instead of attaching policies to individual users, you can use IAM policy variables to write a
single policy and attach it to a group. You need to create a group and, for this example, add both
users, Alice and Bob, to the group. The following example grants permissions to perform
all DynamoDB actions on the ${aws:username}_ProductCatalog table. The policy variable
${aws:username} is replaced by the requester's user name when the policy is evaluated. For
example, if Alice sends a request to add an item, the action is allowed only if Alice is adding items to
the Alice_ProductCatalog table.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllAPIActionsOnUserSpecificTable",
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/
${aws:username}_ProductCatalog"
},
{
"Sid": "AdditionalPrivileges",
"Effect": "Allow",
"Action": [
"dynamodb:ListTables",
"dynamodb:DescribeTable",
"cloudwatch:*",
"sns:*"
],
"Resource": "*"
}
]
}
Note
When using IAM policy variables, you must explicitly specify the 2012-10-17 version of
the access policy language in the policy. The default version of the access policy language
(2008-10-17) does not support policy variables.
Instead of identifying a specific table as a resource, you can use a wildcard character (*) to grant
permissions on all tables where the name is prefixed with the name of the IAM user that is making the
request, as shown following.
"Resource":"arn:aws:dynamodb:us-west-2:123456789012:table/${aws:username}_*"
DynamoDB provides the following API operations for controlling access to reserved capacity
management:
• dynamodb:DescribeReservedCapacity
• dynamodb:DescribeReservedCapacityOfferings
• dynamodb:PurchaseReservedCapacityOfferings
The AWS Management Console uses these API operations to display reserved capacity information and
to make purchases. You cannot call these operations from an application program, because they can be
accessed only from the console. However, you can allow or deny access to these operations in an IAM
permissions policy.
The following policy allows users to view reserved capacity offerings and current purchases using the
AWS Management Console—but new purchases are denied.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowReservedCapacityDescriptions",
"Effect": "Allow",
"Action": [
"dynamodb:DescribeReservedCapacity",
"dynamodb:DescribeReservedCapacityOfferings"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:*"
},
{
"Sid": "DenyReservedCapacityPurchases",
"Effect": "Deny",
"Action": "dynamodb:PurchaseReservedCapacityOfferings",
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:*"
}
]
}
Example 7: Allow Read Access for a DynamoDB Stream Only (Not for the Table)
When you enable DynamoDB Streams on a table, it captures information about every modification
to data items in the table. For more information, see Capturing Table Activity with DynamoDB
Streams (p. 566).
In some cases, you might want to prevent an application from reading data from a DynamoDB table,
while still allowing access to that table's stream. For example, you can configure AWS Lambda to poll
the stream and invoke a Lambda function when item updates are detected, and then perform additional
processing.
The following actions are available for controlling access to DynamoDB Streams:
• dynamodb:DescribeStream
• dynamodb:GetRecords
• dynamodb:GetShardIterator
• dynamodb:ListStreams
The following example creates a policy that grants users permissions to access the streams on a table
named GameScores. The final wildcard character (*) in the ARN matches any stream ID associated with
that table.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AccessGameScoresStreamOnly",
"Effect": "Allow",
"Action": [
"dynamodb:DescribeStream",
"dynamodb:GetRecords",
"dynamodb:GetShardIterator",
"dynamodb:ListStreams"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/GameScores/stream/*"
}
]
}
Note that this policy permits access to the streams on the GameScores table, but not to the table itself.
To grant permissions to Lambda, you use the permissions policy that is associated with the Lambda
function's IAM role (execution role), which you specify when you create the Lambda function.
For example, you can associate the following permissions policy with the execution role to grant Lambda
permissions to perform the DynamoDB Streams actions listed.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowLambdaFunctionInvocation",
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": [
"*"
]
},
{
"Sid": "AllAPIAccessForDynamoDBStreams",
"Effect": "Allow",
"Action": [
"dynamodb:GetRecords",
"dynamodb:GetShardIterator",
"dynamodb:DescribeStream",
"dynamodb:ListStreams"
],
"Resource": "*"
}
]
}
For more information, see AWS Lambda Permission Model in the AWS Lambda Developer Guide.
You can use AWS-wide condition keys in your DynamoDB policies to express conditions. For a complete
list of AWS-wide keys, see Available Keys in the IAM User Guide.
In addition to the AWS-wide condition keys, DynamoDB has its own specific keys that you can
use in conditions. For more information, see Using IAM Policy Conditions for Fine-Grained Access
Control (p. 730).
Note
To specify an action, use the dynamodb: prefix followed by the API operation name (for
example, dynamodb:CreateTable).
BatchGetItem
Action(s): dynamodb:BatchGetItem
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
BatchWriteItem
Action(s): dynamodb:BatchWriteItem
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
ConditionCheckItem
Action(s): dynamodb:ConditionCheckItem
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
CreateTable
Action(s): dynamodb:CreateTable
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
DeleteItem
Action(s): dynamodb:DeleteItem
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
DeleteTable
Action(s): dynamodb:DeleteTable
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
DescribeReservedCapacity
Action(s): dynamodb:DescribeReservedCapacity
Resource:
arn:aws:dynamodb:region:account-id:*
DescribeReservedCapacityOfferings
Action(s): dynamodb:DescribeReservedCapacityOfferings
Resource:
arn:aws:dynamodb:region:account-id:*
DescribeStream
Action(s): dynamodb:DescribeStream
Resource:
arn:aws:dynamodb:region:account-id:table/table-name/stream/stream-label
or
arn:aws:dynamodb:region:account-id:table/table-name/stream/*
DescribeTable
Action(s): dynamodb:DescribeTable
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
EnclosingOperation
Action(s): dynamodb:EnclosingOperation
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
GetItem
Action(s): dynamodb:GetItem
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
GetRecords
Action(s): dynamodb:GetRecords
Resource:
arn:aws:dynamodb:region:account-id:table/table-name/stream/stream-label
or
arn:aws:dynamodb:region:account-id:table/table-name/stream/*
GetShardIterator
Action(s): dynamodb:GetShardIterator
Resource:
arn:aws:dynamodb:region:account-id:table/table-name/stream/stream-label
or
arn:aws:dynamodb:region:account-id:table/table-name/stream/*
ListStreams
Action(s): dynamodb:ListStreams
Resource:
arn:aws:dynamodb:region:account-id:table/table-name/stream/*
or
arn:aws:dynamodb:region:account-id:table/*/stream/*
ListTables
Action(s): dynamodb:ListTables
Resource:
*
PurchaseReservedCapacityOfferings
Action(s): dynamodb:PurchaseReservedCapacityOfferings
Resource:
arn:aws:dynamodb:region:account-id:*
PutItem
Action(s): dynamodb:PutItem
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
Query
Action(s): dynamodb:Query
Resource:
To query a table:
arn:aws:dynamodb:region:account-id:table/table-name
or:
arn:aws:dynamodb:region:account-id:table/*
To query an index:
arn:aws:dynamodb:region:account-id:table/table-name/index/index-name
or:
arn:aws:dynamodb:region:account-id:table/table-name/index/*
Scan
Action(s): dynamodb:Scan
Resource:
To scan a table:
arn:aws:dynamodb:region:account-id:table/table-name
or:
arn:aws:dynamodb:region:account-id:table/*
To scan an index:
arn:aws:dynamodb:region:account-id:table/table-name/index/index-name
or:
arn:aws:dynamodb:region:account-id:table/table-name/index/*
UpdateItem
Action(s): dynamodb:UpdateItem
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
UpdateTable
Action(s): dynamodb:UpdateTable
Resource:
arn:aws:dynamodb:region:account-id:table/table-name
or
arn:aws:dynamodb:region:account-id:table/*
Related Topics
• Access Control (p. 715)
• Using IAM Policy Conditions for Fine-Grained Access Control (p. 730)
Overview
In DynamoDB, you have the option to specify conditions when granting permissions using an IAM policy
(see Access Control (p. 715)). For example, you can:
• Grant permissions to allow users read-only access to certain items and attributes in a table or a
secondary index.
• Grant permissions to allow users write-only access to certain attributes in a table, based upon the
identity of that user.
In DynamoDB, you can specify conditions in an IAM policy using condition keys, as illustrated in the use
case in the following section.
Note
Some AWS services also support tag-based conditions; however, DynamoDB does not.
• Grant permissions on a table, but restrict access to specific items in that table based on certain primary
key values. An example might be a social networking app for games, where all users' saved game data
is stored in a single table, but no users can access data items that they do not own, as shown in the
following illustration:
• Hide information so that only a subset of attributes is visible to the user. An example might be an app
that displays flight data for nearby airports, based on the user's location. Airline names, arrival and
departure times, and flight numbers are all displayed. However, attributes such as pilot names or the
number of passengers are hidden, as shown in the following illustration:
To implement this kind of fine-grained access control, you write an IAM permissions policy that specifies
conditions for accessing security credentials and the associated permissions. You then apply the policy to
IAM users, groups, or roles that you create using the IAM console. Your IAM policy can restrict access to
individual items in a table, access to the attributes in those items, or both at the same time.
You can optionally use web identity federation to control access by users who are authenticated by Login
with Amazon, Facebook, or Google. For more information, see Using Web Identity Federation (p. 739).
You use the IAM Condition element to implement a fine-grained access control policy. By adding a
Condition element to a permissions policy, you can allow or deny access to items and attributes in
DynamoDB tables and indexes, based upon your particular business requirements.
As an example, consider a mobile gaming app that lets players select from and play a variety of different
games. The app uses a DynamoDB table named GameScores to keep track of high scores and other user
data. Each item in the table is uniquely identified by a user ID and the name of the game that the user
played. The GameScores table has a primary key consisting of a partition key (UserId) and sort key
(GameTitle). Users only have access to game data associated with their user ID. A user who wants to
play a game must belong to an IAM role named GameRole, which has a security policy attached to it.
To manage user permissions in this app, you could write a permissions policy such as the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccessToOnlyItemsMatchingUserID",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:BatchWriteItem"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${www.amazon.com:user_id}"
],
"dynamodb:Attributes": [
"UserId",
"GameTitle",
"Wins",
"Losses",
"TopScore",
"TopScoreDateTime"
]
},
"StringEqualsIfExists": {
"dynamodb:Select": "SPECIFIC_ATTRIBUTES"
}
}
}
]
}
In addition to granting permissions for specific DynamoDB actions (Action element) on the
GameScores table (Resource element), the Condition element uses the following condition keys
specific to DynamoDB that limit the permissions as follows:
• dynamodb:LeadingKeys – This condition key allows users to access only the items where
the partition key value matches their user ID. This ID, ${www.amazon.com:user_id}, is a
substitution variable. For more information about substitution variables, see Using Web Identity
Federation (p. 739).
• dynamodb:Attributes – This condition key limits access to the specified attributes so that only
the actions listed in the permissions policy can return values for these attributes. In addition, the
StringEqualsIfExists clause ensures that the app must always provide a list of specific attributes
to act upon and that the app can't request all attributes.
When an IAM policy is evaluated, the result is always either true (access is allowed) or false (access is
denied). If any part of the Condition element is false, the entire policy evaluates to false and access is
denied.
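To make this concrete, the following is a minimal Python (boto3) sketch of a Query request that this policy allows. The client setup, Region, and user ID value are illustrative assumptions; in a real app, the credentials would come from web identity federation, as described later in this section.
import boto3

# Assumes credentials were obtained through web identity federation.
dynamodb = boto3.client("dynamodb", region_name="us-west-2")

# Hypothetical Login with Amazon user ID.
user_id = "amzn1.account.AGJZDKHJKAUUSW6C44CHPEXAMPLE"

# Allowed: the partition key value matches the user's ID, and only
# attributes named in dynamodb:Attributes are requested.
response = dynamodb.query(
    TableName="GameScores",
    Select="SPECIFIC_ATTRIBUTES",
    ProjectionExpression="UserId, GameTitle, Wins, Losses, TopScore, TopScoreDateTime",
    KeyConditionExpression="UserId = :uid",
    ExpressionAttributeValues={":uid": {"S": user_id}},
)
for item in response["Items"]:
    print(item)
A request for a different user's partition key value, or for an attribute that is not listed in the policy, would be denied.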
Important
If you use dynamodb:Attributes, you must specify the names of all of the primary key
and index key attributes for the table and any secondary indexes that are listed in the policy.
Otherwise, DynamoDB can't use these key attributes to perform the requested action.
IAM policy documents can contain only the following Unicode characters: horizontal tab (U+0009),
linefeed (U+000A), carriage return (U+000D), and characters in the range U+0020 to U+00FF.
Note
Condition keys are case-sensitive.
The following describes the service-specific condition keys that apply to DynamoDB.
dynamodb:LeadingKeys Represents the first key attribute of a table—in other words, the
partition key. The key name LeadingKeys is plural, even if the key is used with
single-item actions. In addition, you must use the ForAllValues modifier
when using LeadingKeys in a condition.
dynamodb:Select Represents the Select parameter of a Query or Scan request. Select can
be any of the following values:
• ALL_ATTRIBUTES
• ALL_PROJECTED_ATTRIBUTES
• SPECIFIC_ATTRIBUTES
• COUNT
dynamodb:Attributes Represents a list of the attribute names in a request, or the attributes that
are returned from a request. Attributes values are named the same way
and have the same meaning as the parameters for certain DynamoDB API
actions, such as AttributesToGet.
dynamodb:ReturnValues Represents the ReturnValues parameter of a request. ReturnValues can
be any of the following values:
• ALL_OLD
• UPDATED_OLD
• ALL_NEW
• UPDATED_NEW
• NONE
dynamodb:ReturnConsumedCapacity Represents the ReturnConsumedCapacity parameter of a
request. ReturnConsumedCapacity can be any of the following values:
• TOTAL
• NONE
Example 1: Grant Permissions That Limit Access to Items with a Specific Partition Key Value
The following permissions policy grants permissions that allow a set of DynamoDB actions on the
GameScores table. It uses the dynamodb:LeadingKeys condition key to limit user actions to only the
items whose UserId partition key value matches the Login with Amazon unique user ID for this app.
Important
The list of actions does not include permissions for Scan because Scan returns all items
regardless of the leading keys.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "FullAccessToUserItems",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:BatchWriteItem"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${www.amazon.com:user_id}"
]
}
}
}
]
}
Note
When using policy variables, you must explicitly specify version 2012-10-17 in the policy. The
default version of the access policy language, 2008-10-17, does not support policy variables.
To implement read-only access, you can remove any actions that can modify the data. In the following
policy, only those actions that provide read-only access are included in the condition.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadOnlyAccessToUserItems",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${www.amazon.com:user_id}"
]
}
}
}
]
}
Important
If you use dynamodb:Attributes, you must specify the names of all of the primary key
and index key attributes for the table and any secondary indexes that are listed in the policy.
Otherwise, DynamoDB can't use these key attributes to perform the requested action.
The following permissions policy uses the dynamodb:Attributes condition key to allow access to only
two specific attributes in the GameScores table.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "LimitAccessToSpecificAttributes",
"Effect": "Allow",
"Action": [
"dynamodb:UpdateItem",
"dynamodb:GetItem",
"dynamodb:Query",
"dynamodb:BatchGetItem",
"dynamodb:Scan"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:Attributes": [
"UserId",
"TopScore"
]
},
"StringEqualsIfExists": {
"dynamodb:Select": "SPECIFIC_ATTRIBUTES",
"dynamodb:ReturnValues": [
"NONE",
"UPDATED_OLD",
"UPDATED_NEW"
]
}
}
}
]
}
Note
The policy takes an allow list (sometimes known as whitelist) approach, which allows access to a
named set of attributes. You can write an equivalent policy that denies access to other attributes
instead. We don't recommend this deny list (or blacklist) approach. Users can determine the
names of these denied attributes by repeatedly issuing requests for all possible attribute
names, eventually finding an attribute that they aren't allowed to access. To avoid this, follow
the principle of least privilege, as explained in Wikipedia at http://en.wikipedia.org/wiki/
Principle_of_least_privilege, and use an allow list approach to enumerate all of the allowed
values, rather than specifying the denied attributes.
This policy doesn't permit PutItem, DeleteItem, or BatchWriteItem. These actions always replace
the entire previous item, which would allow users to delete the previous values for attributes that they
are not allowed to access. Note the following about this policy:
• If the user specifies the Select parameter, then its value must be SPECIFIC_ATTRIBUTES. This
requirement prevents the API action from returning any attributes that aren't allowed, such as from an
index projection.
• If the user specifies the ReturnValues parameter, then its value must be NONE, UPDATED_OLD, or
UPDATED_NEW. This is required because the UpdateItem action also performs implicit read operations
to check whether an item exists before replacing it, and so that previous attribute values can be
returned if requested. Restricting ReturnValues in this way ensures that users can only read or write
the allowed attributes.
• The StringEqualsIfExists clause ensures that only one of these parameters (Select or
ReturnValues) can be used per request, in the context of the allowed actions.
• To allow only read actions, you can remove UpdateItem from the list of allowed actions. Because
none of the remaining actions accept ReturnValues, you can remove ReturnValues from the
condition. You can also change StringEqualsIfExists to StringEquals because the Select
parameter always has a value (ALL_ATTRIBUTES, unless otherwise specified).
• To allow only write actions, you can remove everything except UpdateItem from the list of allowed
actions. Because UpdateItem does not use the Select parameter, you can remove Select from
the condition. You must also change StringEqualsIfExists to StringEquals because the
ReturnValues parameter always has a value (NONE unless otherwise specified).
• To allow all attributes whose name matches a pattern, use StringLike instead of StringEquals,
and use a multi-character pattern match wildcard character (*).
The following permissions policy allows updates to an item's attributes, except for the specific
attributes identified by the dynamodb:Attributes condition key. The ForAllValues:StringNotLike
condition prevents an application from updating the attributes that are listed under
dynamodb:Attributes (FreeGamesAvailable and BossLevelUnlocked).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PreventUpdatesOnCertainAttributes",
"Effect": "Allow",
"Action": [
"dynamodb:UpdateItem"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/GameScores",
"Condition": {
"ForAllValues:StringNotLike": {
"dynamodb:Attributes": [
"FreeGamesAvailable",
"BossLevelUnlocked"
]
},
"StringEquals": {
"dynamodb:ReturnValues": [
"NONE",
"UPDATED_OLD",
"UPDATED_NEW"
]
}
}
}
]
}
Note the following about this policy:
• The UpdateItem action, like other write actions, requires read access to the items so that it can
return values before and after the update. In the policy, you limit the action to accessing only the
attributes that are allowed to be updated by specifying the dynamodb:ReturnValues condition
key. The condition key restricts ReturnValues in the request to specify only NONE, UPDATED_OLD, or
UPDATED_NEW and doesn't include ALL_OLD or ALL_NEW.
• The PutItem and DeleteItem actions replace an entire item, and thus allow applications to modify
any attributes. So when limiting an application to updating only specific attributes, you should not
grant permission for these actions.
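For example, the following Python (boto3) sketch issues an UpdateItem request that this policy allows: it modifies TopScore (not one of the protected attributes) and requests only the updated values back. The key values here are illustrative.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")

# Allowed: TopScore is not on the StringNotLike list, and ReturnValues
# is one of NONE, UPDATED_OLD, or UPDATED_NEW.
response = dynamodb.update_item(
    TableName="GameScores",
    Key={
        "UserId": {"S": "amzn1.account.AGJZDKHJKAUUSW6C44CHPEXAMPLE"},
        "GameTitle": {"S": "Galaxy Invaders"},
    },
    UpdateExpression="SET TopScore = :score",
    ExpressionAttributeValues={":score": {"N": "5842"}},
    ReturnValues="UPDATED_NEW",
)
print(response["Attributes"])
An otherwise identical request that sets FreeGamesAvailable or BossLevelUnlocked, or that specifies ReturnValues as ALL_OLD or ALL_NEW, would be denied.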
To require the application to specify a list of attributes in the query, the policy also specifies the
dynamodb:Select condition key to require that the Select parameter of the DynamoDB Query action
is SPECIFIC_ATTRIBUTES. The list of attributes is limited to a specific list that is provided using the
dynamodb:Attributes condition key.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "QueryOnlyProjectedIndexAttributes",
"Effect": "Allow",
"Action": [
"dynamodb:Query"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores/index/
TopScoreDateTimeIndex"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:Attributes": [
"TopScoreDateTime",
"GameTitle",
"Wins",
"Losses",
"Attempts"
]
},
"StringEquals": {
"dynamodb:Select": "SPECIFIC_ATTRIBUTES"
}
}
}
]
}
The following permissions policy is similar, but the query must request all of the attributes that have
been projected into the index.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "QueryAllIndexAttributes",
"Effect": "Allow",
"Action": [
"dynamodb:Query"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores/index/
TopScoreDateTimeIndex"
],
"Condition": {
"StringEquals": {
"dynamodb:Select": "ALL_PROJECTED_ATTRIBUTES"
}
}
}
]
}
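For example, a Query such as the following Python (boto3) sketch satisfies this policy. The key schema of TopScoreDateTimeIndex is not spelled out here, so the key condition below is an assumption; adjust it to match your index.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-west-2")

response = dynamodb.query(
    TableName="GameScores",
    IndexName="TopScoreDateTimeIndex",
    Select="ALL_PROJECTED_ATTRIBUTES",  # required by the policy condition
    KeyConditionExpression="GameTitle = :title",  # assumed index partition key
    ExpressionAttributeValues={":title": {"S": "Galaxy Invaders"}},
)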
Example 5: Grant Permissions to Limit Access to Certain Attributes and Partition Key Values
The following permissions policy allows specific DynamoDB actions (specified in the Action
element) on a table and a table index (specified in the Resource element). The policy uses the
dynamodb:LeadingKeys condition key to restrict permissions to only the items whose partition key
value matches the user’s Facebook ID.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "LimitAccessToCertainAttributesAndKeyValues",
"Effect": "Allow",
"Action": [
"dynamodb:UpdateItem",
"dynamodb:GetItem",
"dynamodb:Query",
"dynamodb:BatchGetItem"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores",
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores/index/
TopScoreDateTimeIndex"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${graph.facebook.com:id}"
],
"dynamodb:Attributes": [
"attribute-A",
"attribute-B"
]
},
"StringEqualsIfExists": {
"dynamodb:Select": "SPECIFIC_ATTRIBUTES",
"dynamodb:ReturnValues": [
"NONE",
"UPDATED_OLD",
"UPDATED_NEW"
]
}
}
}
]
}
Note the following about this policy:
• Write actions allowed by the policy (UpdateItem) can only modify attribute-A or attribute-B.
• Because the policy allows UpdateItem, an application can insert new items, and the hidden attributes
will be null in the new items. If these attributes are projected into TopScoreDateTimeIndex, the
policy has the added benefit of preventing queries that cause fetches from the table.
• Applications cannot read any attributes other than those listed in dynamodb:Attributes. With this
policy in place, an application must set the Select parameter to SPECIFIC_ATTRIBUTES in read
requests, and only whitelisted attributes can be requested. For write requests, the application cannot
set ReturnValues to ALL_OLD or ALL_NEW and it cannot perform conditional write operations based
on any other attributes.
Related Topics
• Access Control (p. 715)
• DynamoDB API Permissions: Actions, Resources, and Conditions Reference (p. 726)
Using Web Identity Federation
With web identity federation, users sign in to your app through a third-party identity provider. In
exchange for the provider's identity token, the app obtains temporary security credentials from AWS
Security Token Service (AWS STS). The app can then use these credentials to access AWS services.
• The Web Identity Federation Playground is an interactive website that lets you walk through the
process of authenticating via Login with Amazon, Facebook, or Google, getting temporary security
credentials, and using those credentials to make a request to AWS.
• The post Web Identity Federation using the AWS SDK for .NET on the AWS Developer blog walks
through how to use web identity federation with Facebook. It includes code snippets in C# that show
how to assume an IAM role with web identity and how to use temporary security credentials to access
an AWS resource.
• The AWS SDK for iOS and the AWS SDK for Android contain sample apps. They include code that
shows how to invoke the identity providers, and then how to use the information from these providers
to get and use temporary security credentials.
• The article Web Identity Federation with Mobile Applications discusses web identity federation and
shows an example of how to use web identity federation to access an AWS resource.
Table Name: GameScores
Primary Key Type: Composite
Partition Key Name and Type: UserId (String)
Sort Key Name and Type: GameTitle (String)
Now suppose that a mobile gaming app uses this table, and that app needs to support thousands, or
even millions, of users. At this scale, it becomes very difficult to manage individual app users, and to
guarantee that each user can only access their own data in the GameScores table. Fortunately, many
users already have accounts with a third-party identity provider, such as Facebook, Google, or Login with
Amazon. So it makes sense to use one of these providers for authentication tasks.
To do this using web identity federation, the app developer must register the app with an identity
provider (such as Login with Amazon) and obtain a unique app ID. Next, the developer needs to create
an IAM role. (For this example, this role is named GameRole.) The role must have an IAM policy document
attached to it, specifying the conditions under which the app can access the GameScores table.
When a user wants to play a game, they sign in to their Login with Amazon account from within the
gaming app. The app then calls AWS Security Token Service (AWS STS), providing the Login with Amazon
app ID and requesting membership in GameRole. AWS STS returns temporary AWS credentials to the app
and allows it to access the GameScores table, subject to the GameRole policy document.
1. The app calls a third-party identity provider to authenticate the user and the app. The identity
provider returns a web identity token to the app.
2. The app calls AWS STS and passes the web identity token as input. AWS STS authorizes the app and
gives it temporary AWS access credentials. The app is allowed to assume an IAM role (GameRole) and
access AWS resources in accordance with the role's security policy.
3. The app calls DynamoDB to access the GameScores table. Because it has assumed the GameRole, the
app is subject to the security policy associated with that role. The policy document prevents the app
from accessing data that does not belong to the user.
Once again, here is the security policy for GameRole that was shown in Using IAM Policy Conditions for
Fine-Grained Access Control (p. 730):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccessToOnlyItemsMatchingUserID",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:BatchWriteItem"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/GameScores"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${www.amazon.com:user_id}"
],
"dynamodb:Attributes": [
"UserId",
"GameTitle",
"Wins",
"Losses",
"TopScore",
"TopScoreDateTime"
]
},
"StringEqualsIfExists": {
"dynamodb:Select": "SPECIFIC_ATTRIBUTES"
}
}
}
]
}
The Condition clause determines which items in GameScores are visible to the app. It does this by
comparing the Login with Amazon ID to the UserId partition key values in GameScores. Only the items
belonging to the current user can be processed using one of the DynamoDB actions that are listed in this
policy. Other items in the table cannot be accessed. Furthermore, only the specific attributes listed in the
policy can be accessed.
1. Sign up as a developer with a third-party identity provider. The following external links provide
information about signing up with supported identity providers:
• Login with Amazon Developer Center
• Registration on the Facebook site
• Using OAuth 2.0 to Access Google APIs on the Google site
2. Register your app with the identity provider. When you do this, the provider gives you an ID that's
unique to your app. If you want your app to work with multiple identity providers, you need to obtain
an app ID from each provider.
3. Create one or more IAM roles. You need one role for each identity provider for each app. For
example, you might create a role that can be assumed by an app where the user signed in using Login
with Amazon, a second role for the same app where the user has signed in using Facebook, and a third
role for the app where users sign in using Google.
As part of the role creation process, you need to attach an IAM policy to the role. Your policy
document should define the DynamoDB resources required by your app, and the permissions for
accessing those resources.
For more information, see About Web Identity Federation in the IAM User Guide.
Note
As an alternative to AWS Security Token Service, you can use Amazon Cognito. Amazon
Cognito is the preferred service for managing temporary credentials for mobile apps. For more
information, see the Amazon Cognito documentation.
The DynamoDB console can help you create an IAM policy for use with web identity federation. To
do this, you choose a DynamoDB table and specify the identity provider, actions, and attributes to be
included in the policy. The DynamoDB console then generates a policy that you can attach to an IAM
role.
1. Sign in to the AWS Management Console and open the DynamoDB console at https://
console.aws.amazon.com/dynamodb/.
2. In the navigation pane, choose Tables.
3. In the list of tables, choose the table for which you want to create the IAM policy.
4. Choose the Access control tab.
5. Choose the identity provider, actions, and attributes for the policy.
When the settings are as you want them, choose Create policy. The generated policy appears.
6. Choose Attach policy instructions, and follow the steps required to attach the generated policy to an
IAM role.
At runtime, if your app uses web identity federation, it must follow these steps:
1. Authenticate with a third-party identity provider. Your app must call the identity provider using an
interface that they provide. The exact way in which you authenticate the user depends on the provider
and on what platform your app is running. Typically, if the user is not already signed in, the identity
provider takes care of displaying a sign-in page for that provider.
After the identity provider authenticates the user, the provider returns a web identity token to your
app. The format of this token depends on the provider, but is typically a very long string of characters.
2. Obtain temporary AWS security credentials. To do this, your app sends an
AssumeRoleWithWebIdentity request to AWS Security Token Service (AWS STS). This request
contains the following:
• The web identity token from the previous step
• The app ID from the identity provider
• The Amazon Resource Name (ARN) of the IAM role that you created for this identity provider for this
app
AWS STS returns a set of AWS security credentials that expire after a certain amount of time (3,600
seconds, by default). The following is an example AssumeRoleWithWebIdentity request:
GET / HTTP/1.1
Host: sts.amazonaws.com
Content-Type: application/json; charset=utf-8
URL: https://sts.amazonaws.com/?ProviderId=www.amazon.com
&DurationSeconds=900&Action=AssumeRoleWithWebIdentity
&Version=2011-06-15&RoleSessionName=web-identity-federation
&RoleArn=arn:aws:iam::123456789012:role/GameRole
&WebIdentityToken=Atza|IQEBLjAsAhQluyKqyBiYZ8-kclvGTYM81e...(remaining characters
omitted)
The response from AWS STS looks similar to the following:
<AssumeRoleWithWebIdentityResponse
xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
<AssumeRoleWithWebIdentityResult>
<SubjectFromWebIdentityToken>amzn1.account.AGJZDKHJKAUUSW6C44CHPEXAMPLE</
SubjectFromWebIdentityToken>
<Credentials>
<SessionToken>AQoDYXdzEMf//////////wEa8AP6nNDwcSLnf+cHupC...(remaining characters
omitted)</SessionToken>
<SecretAccessKey>8Jhi60+EWUUbbUShTEsjTxqQtM8UKvsM6XAjdA==</SecretAccessKey>
<Expiration>2013-10-01T22:14:35Z</Expiration>
<AccessKeyId>06198791C436IEXAMPLE</AccessKeyId>
</Credentials>
<AssumedRoleUser>
<Arn>arn:aws:sts::123456789012:assumed-role/GameRole/web-identity-federation</Arn>
<AssumedRoleId>AROAJU4SA2VW5SZRF2YMG:web-identity-federation</AssumedRoleId>
</AssumedRoleUser>
</AssumeRoleWithWebIdentityResult>
<ResponseMetadata>
<RequestId>c265ac8e-2ae4-11e3-8775-6969323a932d</RequestId>
</ResponseMetadata>
</AssumeRoleWithWebIdentityResponse>
3. Access AWS resources. The response from AWS STS contains information that your app requires in
order to access DynamoDB resources:
• The AccessKeyID, SecretAccessKey, and SessionToken fields contain security credentials that
are valid for this user and this app only.
• The Expiration field signifies the time limit for these credentials, after which they are no longer
valid.
• The AssumedRoleId field contains the name of a session-specific IAM role that has been assumed
by the app. The app honors the access controls in the IAM policy document for the duration of this
session.
• The SubjectFromWebIdentityToken field contains the unique ID that appears in an IAM
policy variable for this particular identity provider. The following are the IAM policy variables for
supported providers, and some example values for them:
${www.amazon.com:user_id} amzn1.account.AGJZDKHJKAUUSW6C44CHPEXAMPLE
${graph.facebook.com:id} 123456789
${accounts.google.com:sub} 123456789012345678901
For example IAM policies where these policy variables are used, see Example Policies: Using Conditions
for Fine-Grained Access Control (p. 734).
For more information about how AWS STS generates temporary access credentials, see Requesting
Temporary Security Credentials in the IAM User Guide.
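The following Python (boto3) sketch shows steps 2 and 3 end to end: it exchanges a web identity token for temporary credentials and then uses those credentials to call DynamoDB. The token value is a truncated placeholder, and the other parameter values follow the example request shown previously.
import boto3

# Step 2: exchange the web identity token for temporary AWS credentials.
sts = boto3.client("sts")
web_identity_token = "Atza|IQEBLjAsAhQ..."  # placeholder; returned by the identity provider

response = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/GameRole",
    RoleSessionName="web-identity-federation",
    WebIdentityToken=web_identity_token,
    ProviderId="www.amazon.com",
    DurationSeconds=900,
)
credentials = response["Credentials"]

# Step 3: use the temporary credentials to access DynamoDB.
dynamodb = boto3.client(
    "dynamodb",
    region_name="us-west-2",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)

# The subject ID corresponds to the ${www.amazon.com:user_id} policy variable.
user_id = response["SubjectFromWebIdentityToken"]
result = dynamodb.get_item(
    TableName="GameScores",
    Key={"UserId": {"S": user_id}, "GameTitle": {"S": "Galaxy Invaders"}},
)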
We highly recommend that you understand both security models, so that you can implement proper
security measures for your applications that use DAX.
This section describes the access control mechanisms provided by DAX and provides sample IAM policies
that you can tailor to your needs.
With DynamoDB, you can create IAM policies that limit the actions a user can perform on individual
DynamoDB resources. For example, you can create a user role that only allows the user to perform read-
only actions on a particular DynamoDB table. (For more information, see the section called “Identity and
Access Management in Amazon DynamoDB” (p. 714).) By comparison, the DAX security model focuses
on cluster security, and the ability of the cluster to perform DynamoDB API actions on your behalf.
Warning
If you are currently using IAM roles and policies to restrict access to DynamoDB tables and data, then
the use of DAX can subvert those policies. For example, a user could have access to a DynamoDB
table via DAX but not have explicit access to the same table when accessing DynamoDB directly.
For more information, see the section called “Identity and Access Management in Amazon
DynamoDB” (p. 714).
DAX does not enforce user-level separation on data in DynamoDB. Instead, users inherit the
permissions of the DAX cluster's IAM policy when they access that cluster. Thus, when accessing
DynamoDB tables via DAX, the only access controls that are in effect are the permissions in the
DAX cluster's IAM policy. No other permissions are recognized.
If you require isolation, we recommend that you create additional DAX clusters and scope the
IAM policy for each cluster accordingly. For example, you could create multiple DAX clusters and
allow each cluster to access only a single table.
Suppose that you wanted to create a new DAX cluster named DAXCluster01. You could create
a service role named DAXServiceRole, and associate the role with DAXCluster01. The policy
for DAXServiceRole would define the DynamoDB actions that DAXCluster01 could perform on behalf of
the users who interact with DAXCluster01.
When you create a service role, you must specify a trust relationship between DAXServiceRole and the
DAX service itself. A trust relationship determines which entities can assume a role and make use of its
permissions. The following is an example trust relationship document for DAXServiceRole:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "dax.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
This trust relationship allows a DAX cluster to assume DAXServiceRole and perform DynamoDB API calls
on your behalf.
The DynamoDB API actions that are allowed are described in an IAM policy document, which you attach
to DAXServiceRole. The following is an example policy document.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DaxAccessPolicy",
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/Books"
]
}
]
}
This policy allows DAX to perform all DynamoDB API actions on a DynamoDB table named Books. The
table is in the us-west-2 Region and is owned by AWS account ID 123456789012.
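As a sketch, you could create this service role and attach the policy programmatically. The following Python (boto3) example uses the trust relationship and policy documents shown previously; the inline policy name is an assumption.
import json

import boto3

iam = boto3.client("iam")

trust_relationship = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "dax.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Create the service role with the DAX trust relationship.
iam.create_role(
    RoleName="DAXServiceRole",
    AssumeRolePolicyDocument=json.dumps(trust_relationship),
)

access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DaxAccessPolicy",
            "Effect": "Allow",
            "Action": ["dynamodb:*"],
            "Resource": ["arn:aws:dynamodb:us-west-2:123456789012:table/Books"],
        }
    ],
}

# Attach the DynamoDB access policy to the role as an inline policy.
iam.put_role_policy(
    RoleName="DAXServiceRole",
    PolicyName="DAXDynamoDBAccess",  # assumed policy name
    PolicyDocument=json.dumps(access_policy),
)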
For example, suppose that you want to grant access to DAXCluster01 to an IAM user named Alice. You
would first create an IAM policy (AliceAccessPolicy) that defines the DAX clusters and DAX API actions
that the recipient can access. You would then confer access by attaching this policy to user Alice.
The following policy document gives the recipient full access on DAXCluster01.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dax:*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:dax:us-west-2:123456789012:cache/DAXCluster01"
]
}
]
}
The policy document allows access to the DAX cluster, but it does not grant any DynamoDB permissions.
(The DynamoDB permissions are conferred by the DAX service role.)
For user Alice, you would first create AliceAccessPolicy with the policy document shown previously.
You would then attach the policy to Alice.
Note
Instead of attaching the policy to an IAM user, you could attach it to an IAM role. That way, all of
the users who assume that role would have the permissions that you defined in the policy.
The user policy, together with the DAX service role, determines the DynamoDB resources and API actions
that the recipient can access via DAX.
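The following is a minimal Python (boto3) sketch of attaching the policy shown previously to Alice as an inline user policy; the policy name follows the example.
import json

import boto3

iam = boto3.client("iam")

alice_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["dax:*"],
            "Effect": "Allow",
            "Resource": ["arn:aws:dax:us-west-2:123456789012:cache/DAXCluster01"],
        }
    ],
}

# Attach the policy to the IAM user Alice.
iam.put_user_policy(
    UserName="Alice",
    PolicyName="AliceAccessPolicy",
    PolicyDocument=json.dumps(alice_access_policy),
)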
It is possible to allow direct access to a DynamoDB table while preventing indirect access through a
DAX cluster. For direct access to DynamoDB, the permissions for BobUserRole are determined by
BobAccessPolicy (which is attached to the role). The following version of BobAccessPolicy allows direct
read-only access to the Books table:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
DAX does not appear in this policy, so access via DAX is denied. For direct read/write access to the Books
table, BobAccessPolicy would instead look like the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:BatchWriteItem",
"dynamodb:ConditionCheckItem"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
Again, DAX does not appear in this policy, so access via DAX is denied.
To allow access to a DAX cluster, you must include DAX-specific actions in an IAM policy.
The following DAX-specific actions correspond to their similarly named counterparts in the DynamoDB
API:
• dax:GetItem
• dax:BatchGetItem
• dax:Query
• dax:Scan
• dax:PutItem
• dax:UpdateItem
• dax:DeleteItem
• dax:BatchWriteItem
• dax:ConditionCheckItem
In addition, there are four other DAX-specific actions that do not correspond to any DynamoDB APIs:
• dax:DefineAttributeList
• dax:DefineAttributeListId
• dax:DefineKeySchema
• dax:Endpoints
You must specify all four of these actions in any IAM policy that allows access to a DAX cluster. These
actions are specific to the low-level DAX data transport protocol. Your application does not need to
concern itself with these actions; they are only used in IAM policies. For example, the following policy
(BobAccessPolicy, attached to BobUserRole) allows Bob read-only access to the Books table, both via
DAXCluster01 and directly:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DAXAccessStmt",
"Effect": "Allow",
"Action": [
"dax:GetItem",
"dax:BatchGetItem",
"dax:Query",
"dax:Scan",
"dax:DefineAttributeList",
"dax:DefineAttributeListId",
"dax:DefineKeySchema",
"dax:Endpoints"
],
"Resource": "arn:aws:dax:us-west-2:123456789012:cache/DAXCluster01"
},
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
The policy has a statement for DAX access (DAXAccessStmt) and another statement
for DynamoDB access (DynamoDBAccessStmt). These statements allow Bob to
send GetItem, BatchGetItem, Query, and Scan requests to DAXCluster01, and to send the same
requests directly to the Books table.
However, the service role for DAXCluster01 would also require read-only access to the Books table in
DynamoDB. The following IAM policy, attached to DAXServiceRole, would fulfill this requirement.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
For Bob, the IAM policy for BobUserRole would need to allow DynamoDB read and write actions on
the Books table, while also supporting read-only actions via DAXCluster01.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DAXAccessStmt",
"Effect": "Allow",
"Action": [
"dax:GetItem",
"dax:BatchGetItem",
"dax:Query",
"dax:Scan",
"dax:DefineKeySchema",
"dax:DefineAttributeList",
"dax:DefineAttributeListId",
"dax:Endpoints"
],
"Resource": "arn:aws:dax:us-west-2:123456789012:cache/DAXCluster01"
},
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:BatchWriteItem",
"dynamodb:DescribeTable",
"dynamodb:ConditionCheckItem"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
In addition, DAXServiceRole would require an IAM policy that allows DAXCluster01 to perform read-
only actions on the Books table.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:DescribeTable"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
Now suppose that Bob requires read/write access to the Books table, both directly and via DAXCluster01.
In this case, BobAccessPolicy (attached to BobUserRole) would look like the following:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DAXAccessStmt",
"Effect": "Allow",
"Action": [
"dax:GetItem",
"dax:BatchGetItem",
"dax:Query",
"dax:Scan",
"dax:PutItem",
"dax:UpdateItem",
"dax:DeleteItem",
"dax:BatchWriteItem",
"dax:DefineKeySchema",
"dax:DefineAttributeList",
"dax:DefineAttributeListId",
"dax:Endpoints",
"dax:ConditionCheckItem"
],
"Resource": "arn:aws:dax:us-west-2:123456789012:cache/DAXCluster01"
},
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:BatchWriteItem",
"dynamodb:DescribeTable",
"dynamodb:ConditionCheckItem"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
In addition, DAXServiceRole would require an IAM policy that allows DAXCluster01 to perform read/
write actions on the Books table.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:BatchWriteItem",
"dynamodb:DescribeTable"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
When you configure access this way, remember that any user who is given access to the DAX cluster via
the user access policy gains access to the tables specified in that policy. In this case, a user with
BobAccessPolicy gains access to the tables specified in DAXAccessPolicy.
If you are currently using IAM roles and policies to restrict access to DynamoDB tables and data, using
DAX can subvert those policies. In the following policy, Bob has access to a DynamoDB table via DAX but
does not have explicit direct access to the same table in DynamoDB.
The following policy document (BobAccessPolicy), attached to BobUserRole, would confer this
access.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DAXAccessStmt",
"Effect": "Allow",
"Action": [
"dax:GetItem",
"dax:BatchGetItem",
"dax:Query",
"dax:Scan",
"dax:PutItem",
"dax:UpdateItem",
"dax:DeleteItem",
"dax:BatchWriteItem",
"dax:DefineKeySchema",
"dax:DefineAttributeList",
"dax:DefineAttributeListId",
"dax:Endpoints",
"dax:ConditionCheckItem"
],
"Resource": "arn:aws:dax:us-west-2:123456789012:cache/DAXCluster01"
}
]
}
With this policy, Bob can access the Books table only through DAX; he has no permissions to access the
table directly in DynamoDB. The DynamoDB actions that DAX can perform on his behalf are determined by
the cluster's service role policy. The following policy document, attached to DAXServiceRole, would allow
DAXCluster01 to perform read/write actions on the Books table:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DynamoDBAccessStmt",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:BatchWriteItem",
"dynamodb:DescribeTable",
"dynamodb:ConditionCheckItem"
],
"Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
}
]
}
As this example shows, when you configure access control for the user access policy and the DAX cluster
access policy, you must fully understand the end-to-end access to ensure that the principle of least
privilege is observed. Also ensure that giving a user access to a DAX cluster does not subvert previously
established access control policies.
Topics
• Logging and Monitoring in DynamoDB (p. 755)
• Logging and Monitoring in DAX (p. 781)
The next step is to establish a baseline for normal DynamoDB performance in your environment, by
measuring performance at various times and under different load conditions. As you monitor DynamoDB,
you should consider storing historical monitoring data. This stored data will give you a baseline from
which to compare current performance data, identify normal performance patterns and performance
anomalies, and devise methods to address issues. To establish a baseline, you should, at a minimum,
monitor the following items:
• The number of read or write capacity units consumed over the specified time period, so you can track
how much of your provisioned throughput is used.
• Requests that exceeded a table's provisioned write or read capacity during the specified time period, so
you can determine which requests exceed the provisioned throughput limits of a table.
• System errors, so you can determine if any requests resulted in an error.
Topics
• Monitoring Tools (p. 756)
• Monitoring with Amazon CloudWatch (p. 757)
• Logging DynamoDB Operations by Using AWS CloudTrail (p. 774)
Monitoring Tools
AWS provides tools that you can use to monitor DynamoDB. You can configure some of these tools
to do the monitoring for you; some require manual intervention. We recommend that you automate
monitoring tasks as much as possible.
• Amazon CloudWatch Alarms – Watch a single metric over a time period that you specify, and perform
one or more actions based on the value of the metric relative to a given threshold over a number of
time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS)
topic or Amazon EC2 Auto Scaling policy. CloudWatch alarms do not invoke actions simply because
they are in a particular state; the state must have changed and been maintained for a specified
number of periods. For more information, see Monitoring with Amazon CloudWatch (p. 757).
• Amazon CloudWatch Logs – Monitor, store, and access your log files from AWS CloudTrail or other
sources. For more information, see Monitoring Log Files in the Amazon CloudWatch User Guide.
• Amazon CloudWatch Events – Match events and route them to one or more target functions or
streams to make changes, capture state information, and take corrective action. For more information,
see What is Amazon CloudWatch Events in the Amazon CloudWatch User Guide.
• AWS CloudTrail Log Monitoring – Share log files between accounts, monitor CloudTrail log files in real
time by sending them to CloudWatch Logs, write log processing applications in Java, and validate that
your log files have not changed after delivery by CloudTrail. For more information, see Working with
CloudTrail Log Files in the AWS CloudTrail User Guide.
Topics
• DynamoDB Metrics and Dimensions (p. 757)
• How Do I Use DynamoDB Metrics? (p. 772)
• Creating CloudWatch Alarms to Monitor DynamoDB (p. 773)
The metrics and dimensions that DynamoDB sends to Amazon CloudWatch are listed here.
DynamoDB Metrics
Note
Amazon CloudWatch aggregates the following DynamoDB metrics at one-minute intervals:
• ConditionalCheckFailedRequests
• ConsumedReadCapacityUnits
• ConsumedWriteCapacityUnits
• ReadThrottleEvents
• ReturnedBytes
• ReturnedItemCount
• ReturnedRecordsCount
• SuccessfulRequestLatency
• SystemErrors
• TimeToLiveDeletedItemCount
• ThrottledRequests
• TransactionConflict
• UserErrors
• WriteThrottleEvents
For all other DynamoDB metrics, the aggregation granularity is five minutes.
Not all statistics, such as Average or Sum, are applicable for every metric. However, all of these values are
available through the Amazon DynamoDB console, or by using the CloudWatch console, AWS CLI, or AWS
SDKs for all metrics. In the following table, each metric has a list of valid statistics that are applicable to
that metric.
Metric Description
(Most rows of the metrics table did not survive conversion. Each row lists a metric name and
description, the metric's units (Count, Bytes, or Milliseconds), its dimensions (such as
TableName), and the valid statistics that apply to it: Minimum, Maximum, Average, SampleCount,
and Sum. The following row is the only one that survived intact.)
OnlineIndexThrottleEvents The number of write throttle events that occur when adding
a new global secondary index to a table. These events
indicate that the index creation will take longer to complete,
because incoming write activity is exceeding the provisioned
write throughput of the index.
You can adjust the write capacity of the index using the
UpdateTable operation, even while the index is still being
built.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
The metrics for DynamoDB are qualified by the values for the account, table name, global secondary
index name, or operation. You can use the CloudWatch console to retrieve DynamoDB data along any of
the dimensions in the following table.
Dimension Description
Operation This dimension limits the data to one of the following DynamoDB
operations:
• PutItem
• DeleteItem
• UpdateItem
• GetItem
• BatchGetItem
• Scan
• Query
• BatchWriteItem
Dimension Description
In addition, you can limit the data to the following Amazon
DynamoDB Streams operation:
• GetRecords
ReceivingRegion This dimension limits the data to a particular AWS region. It is used
with metrics originating from replica tables within a DynamoDB
global table.
TableName This dimension limits the data to a specific table. This value can be
any table name in the current region and the current AWS account.
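For example, the following Python (boto3) sketch retrieves the read capacity consumed by a single table over the past hour, using the TableName dimension. The table name is illustrative.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedReadCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "GameScores"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,  # five-minute data points
    Statistics=["Sum"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])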
How can I monitor the rate of TTL deletions on my table?
You can monitor TimeToLiveDeletedItemCount over the specified time period, to track the rate of
TTL deletions on your table. For an example of a serverless application using the
TimeToLiveDeletedItemCount metric, see Automatically archive items to S3 using DynamoDB Time to
Live (TTL) with AWS Lambda and Amazon Kinesis Firehose.
How can I determine if any system errors occurred?
You can monitor SystemErrors to determine if any requests resulted in an HTTP 500 (server error)
code. Typically, this metric should be equal to zero. If it isn't, then you might want to
investigate.
Note
You might encounter internal server errors while working
with items. These are expected during the lifetime of a
table. Any failed requests can be retried immediately.
Note
The alarm is activated whenever the consumed read capacity is at least 4 units per second (80%
of provisioned read capacity of 5) for 1 minute (60 seconds). So the threshold is 240 read
capacity units (4 units/sec * 60 seconds). Any time the read capacity is updated, you should
update the alarm calculations appropriately. You can avoid this process by creating alarms
through the DynamoDB console; that way, the alarms are automatically updated for you.
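As a sketch, the alarm that this note describes could be created with Python (boto3) as follows. The alarm name, table name, and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="GameScores-ConsumedReadCapacityUnits",  # placeholder name
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedReadCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "GameScores"}],
    Statistic="Sum",
    Threshold=240.0,  # 4 units/sec * 60 seconds, per the note above
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    Period=60,
    EvaluationPeriods=1,
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:capacity-alarm"],  # placeholder topic
)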
How can I be notified if any requests exceed the provisioned throughput limits of a table?
You can create an alarm on the ThrottledRequests metric by using the AWS CLI, as in the following
example (the alarm name is a placeholder):
aws cloudwatch put-metric-alarm \
--alarm-name requests-exceeding-throughput \
--namespace AWS/DynamoDB \
--metric-name ThrottledRequests \
--dimensions Name=TableName,Value=myTable \
--statistic Sum \
--threshold 0 \
--comparison-operator GreaterThanThreshold \
--period 300 \
--unit Count \
--evaluation-periods 1 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:requests-exceeding-throughput
You can view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can
determine the request that was made to DynamoDB, the IP address from which the request was made,
who made the request, when it was made, and additional details.
To learn more about CloudTrail, including how to configure and enable it, see the AWS CloudTrail User
Guide.
For an ongoing record of events in your AWS account, including events for DynamoDB, create a trail.
A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a
trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the
AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can
configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
The following API actions are logged as events in CloudTrail files:
Amazon DynamoDB
• CreateBackup
• CreateGlobalTable
• CreateTable
• DeleteBackup
• DeleteTable
• DescribeBackup
• DescribeContinuousBackups
• DescribeGlobalTable
• DescribeLimits
• DescribeTable
• DescribeTimeToLive
• ListBackups
• ListTables
• ListTagsOfResource
• ListGlobalTables
• RestoreTableFromBackup
• TagResource
• UntagResource
• UpdateGlobalTable
• UpdateTable
• UpdateTimeToLive
• DescribeReservedCapacity
• DescribeReservedCapacityOfferings
• DescribeScalableTargets
• RegisterScalableTarget
• PurchaseReservedCapacityOfferings
DynamoDB Streams
• DescribeStream
• ListStreams
DynamoDB Accelerator (DAX)
• CreateCluster
• CreateParameterGroup
• CreateSubnetGroup
• DecreaseReplicationFactor
• DeleteCluster
• DeleteParameterGroup
• DeleteSubnetGroup
• DescribeClusters
• DescribeDefaultParameters
• DescribeEvents
• DescribeParameterGroups
• DescribeParameters
• DescribeSubnetGroups
• IncreaseReplicationFactor
• ListTags
• RebootNode
• TagResource
• UntagResource
• UpdateCluster
• UpdateParameterGroup
• UpdateSubnetGroup
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or AWS Identity and Access Management (IAM) user
credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
The following example shows a CloudTrail log that demonstrates the CreateTable, DescribeTable,
UpdateTable, ListTables, and DeleteTable actions.
{"Records": [
{
"eventVersion": "1.03",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AKIAIOSFODNN7EXAMPLE:bob",
"arn": "arn:aws:sts::111122223333:assumed-role/users/bob",
"accountId": "111122223333",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2015-05-28T18:06:01Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AKIAI44QH8DHBEXAMPLE",
"arn": "arn:aws:iam::444455556666:role/admin-role",
"accountId": "444455556666",
"userName": "bob"
}
}
},
"eventTime": "2015-05-01T07:24:55Z",
"eventSource": "dynamodb.amazonaws.com",
"eventName": "CreateTable",
"awsRegion": "us-west-2",
"sourceIPAddress": "192.0.2.0",
"userAgent": "console.aws.amazon.com",
"requestParameters": {
"provisionedThroughput": {
"writeCapacityUnits": 10,
"readCapacityUnits": 10
},
"tableName": "Music",
"keySchema": [
{
"attributeName": "Artist",
"keyType": "HASH"
},
{
"attributeName": "SongTitle",
"keyType": "RANGE"
}
],
"attributeDefinitions": [
{
"attributeType": "S",
"attributeName": "Artist"
},
{
"attributeType": "S",
"attributeName": "SongTitle"
}
]
},
"responseElements": {"tableDescription": {
"tableName": "Music",
"attributeDefinitions": [
{
"attributeType": "S",
"attributeName": "Artist"
},
{
"attributeType": "S",
"attributeName": "SongTitle"
}
],
"itemCount": 0,
"provisionedThroughput": {
"writeCapacityUnits": 10,
"numberOfDecreasesToday": 0,
"readCapacityUnits": 10
},
"creationDateTime": "May 1, 2015 7:24:55 AM",
"keySchema": [
{
"attributeName": "Artist",
"keyType": "HASH"
},
{
"attributeName": "SongTitle",
"keyType": "RANGE"
}
],
"tableStatus": "CREATING",
"tableSizeBytes": 0
}},
"requestID": "KAVGJR1Q0I5VHF8FS8V809EV7FVV4KQNSO5AEMVJF66Q9ASUAAJG",
"eventID": "a8b5f864-480b-43bf-bc22-9b6d77910a29",
"eventType": "AwsApiCall",
"apiVersion": "2012-08-10",
"recipientAccountId": "111122223333"
},
{
"eventVersion": "1.03",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AKIAIOSFODNN7EXAMPLE:bob",
"arn": "arn:aws:sts::111122223333:assumed-role/users/bob",
"accountId": "444455556666",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2015-05-28T18:06:01Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AKIAI44QH8DHBEXAMPLE",
"arn": "arn:aws:iam::444455556666:role/admin-role",
"accountId": "444455556666",
"userName": "bob"
}
}
},
"eventTime": "2015-05-04T02:43:11Z",
"eventSource": "dynamodb.amazonaws.com",
"eventName": "DescribeTable",
"awsRegion": "us-west-2",
"sourceIPAddress": "192.0.2.0",
"userAgent": "console.aws.amazon.com",
"requestParameters": {"tableName": "Music"},
"responseElements": null,
"requestID": "DISTSH6DQRLCC74L48Q51LRBHFVV4KQNSO5AEMVJF66Q9ASUAAJG",
"eventID": "c07befa7-f402-4770-8c1b-1911601ed2af",
"eventType": "AwsApiCall",
"apiVersion": "2012-08-10",
"recipientAccountId": "111122223333"
},
{
"eventVersion": "1.03",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AKIAIOSFODNN7EXAMPLE:bob",
"arn": "arn:aws:sts::111122223333:assumed-role/users/bob",
"accountId": "111122223333",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2015-05-28T18:06:01Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AKIAI44QH8DHBEXAMPLE",
"arn": "arn:aws:iam::444455556666:role/admin-role",
"accountId": "444455556666",
"userName": "bob"
}
}
},
"eventTime": "2015-05-04T02:14:52Z",
"eventSource": "dynamodb.amazonaws.com",
"eventName": "UpdateTable",
"awsRegion": "us-west-2",
"sourceIPAddress": "192.0.2.0",
"userAgent": "console.aws.amazon.com",
"requestParameters": {"provisionedThroughput": {
"writeCapacityUnits": 25,
"readCapacityUnits": 25
}},
"responseElements": {"tableDescription": {
"tableName": "Music",
"attributeDefinitions": [
{
"attributeType": "S",
"attributeName": "Artist"
},
{
"attributeType": "S",
"attributeName": "SongTitle"
}
],
"itemCount": 0,
"provisionedThroughput": {
"writeCapacityUnits": 10,
"numberOfDecreasesToday": 0,
"readCapacityUnits": 10,
"lastIncreaseDateTime": "May 3, 2015 11:34:14 PM"
},
"creationDateTime": "May 3, 2015 11:34:14 PM",
"keySchema": [
{
"attributeName": "Artist",
"keyType": "HASH"
},
{
"attributeName": "SongTitle",
"keyType": "RANGE"
}
],
"tableStatus": "UPDATING",
"tableSizeBytes": 0
}},
"requestID": "AALNP0J2L244N5O15PKISJ1KUFVV4KQNSO5AEMVJF66Q9ASUAAJG",
"eventID": "eb834e01-f168-435f-92c0-c36278378b6e",
"eventType": "AwsApiCall",
"apiVersion": "2012-08-10",
"recipientAccountId": "111122223333"
},
{
"eventVersion": "1.03",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AKIAIOSFODNN7EXAMPLE:bob",
"arn": "arn:aws:sts::111122223333:assumed-role/users/bob",
"accountId": "111122223333",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2015-05-28T18:06:01Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AKIAI44QH8DHBEXAMPLE",
"arn": "arn:aws:iam::444455556666:role/admin-role",
"accountId": "444455556666",
"userName": "bob"
}
}
},
"eventTime": "2015-05-04T02:42:20Z",
"eventSource": "dynamodb.amazonaws.com",
"eventName": "ListTables",
"awsRegion": "us-west-2",
"sourceIPAddress": "192.0.2.0",
"userAgent": "console.aws.amazon.com",
"requestParameters": null,
"responseElements": null,
"requestID": "3BGHST5OVHLMTPUMAUTA1RF4M3VV4KQNSO5AEMVJF66Q9ASUAAJG",
"eventID": "bd5bf4b0-b8a5-4bec-9edf-83605bd5e54e",
"eventType": "AwsApiCall",
"apiVersion": "2012-08-10",
"recipientAccountId": "111122223333"
},
{
"eventVersion": "1.03",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AKIAIOSFODNN7EXAMPLE:bob",
"arn": "arn:aws:sts::111122223333:assumed-role/users/bob",
"accountId": "111122223333",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2015-05-28T18:06:01Z"
},
"sessionIssuer": {
"type": "Role",
"principalId": "AKIAI44QH8DHBEXAMPLE",
"arn": "arn:aws:iam::444455556666:role/admin-role",
"accountId": "444455556666",
"userName": "bob"
}
}
},
"eventTime": "2015-05-04T13:38:20Z",
"eventSource": "dynamodb.amazonaws.com",
"eventName": "DeleteTable",
"awsRegion": "us-west-2",
"sourceIPAddress": "192.0.2.0",
"userAgent": "console.aws.amazon.com",
"requestParameters": {"tableName": "Music"},
"responseElements": {"tableDescription": {
"tableName": "Music",
"itemCount": 0,
"provisionedThroughput": {
"writeCapacityUnits": 25,
"numberOfDecreasesToday": 0,
"readCapacityUnits": 25
},
"tableStatus": "DELETING",
"tableSizeBytes": 0
}},
"requestID": "4KBNVRGD25RG1KEO9UT4V3FQDJVV4KQNSO5AEMVJF66Q9ASUAAJG",
"eventID": "a954451c-c2fc-4561-8aea-7a30ba1fdf52",
"eventType": "AwsApiCall",
"apiVersion": "2012-08-10",
"recipientAccountId": "111122223333"
}
]}
Before you start monitoring DAX, you should create a monitoring plan that includes answers to the
following questions:
• What are your monitoring goals?
• What resources will you monitor?
• How often will you monitor these resources?
• What monitoring tools will you use?
• Who will perform the monitoring tasks?
• Who should be notified when something goes wrong?
The next step is to establish a baseline for normal DAX performance in your environment, by measuring
performance at various times and under different load conditions. As you monitor DAX, you should
consider storing historical monitoring data. This stored data gives you a baseline from which to compare
current performance data, identify normal performance patterns and performance anomalies, and
devise methods to address issues. To establish a baseline, you should, at a minimum, monitor the
following items:
• CPU utilization, so that you can determine if you might need to use a larger node type in your cluster.
• Estimated database size and evicted size, so that you can determine if the cluster’s node type has
sufficient memory to hold your working set.
• Client connections, so that you can monitor for any unexplained spikes in connections to the cluster.
• System errors, so that you can determine if any requests resulted in an error.
Topics
• Monitoring Tools (p. 782)
• Monitoring with Amazon CloudWatch (p. 783)
• Logging DAX Operations Using AWS CloudTrail (p. 793)
Monitoring Tools
AWS provides tools that you can use to monitor DynamoDB Accelerator (DAX). You can configure some
of these tools to do the monitoring for you, and some require manual intervention. We recommend that
you automate monitoring tasks as much as possible.
• Amazon CloudWatch Alarms – Watch a single metric over a time period that you specify, and perform
one or more actions based on the value of the metric relative to a given threshold over a number of
time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS)
topic or Amazon EC2 Auto Scaling policy. CloudWatch alarms do not invoke actions simply because
they are in a particular state; the state must have changed and been maintained for a specified
number of periods. For more information, see Monitoring with Amazon CloudWatch (p. 757).
• Amazon CloudWatch Logs – Monitor, store, and access your log files from AWS CloudTrail or other
sources. For more information, see Monitoring Log Files in the Amazon CloudWatch User Guide.
• Amazon CloudWatch Events – Match events and route them to one or more target functions or
streams to make changes, capture state information, and take corrective action. For more information,
see What Is Amazon CloudWatch Events in the Amazon CloudWatch User Guide.
• AWS CloudTrail Log Monitoring – Share log files between accounts, monitor CloudTrail log files in real
time by sending them to CloudWatch Logs, write log processing applications in Java, and validate that
your log files have not changed after delivery by CloudTrail. For more information, see Working with
CloudTrail Log Files in the AWS CloudTrail User Guide.
Topics
• How Do I Use DAX Metrics? (p. 783)
• Viewing DAX Metrics and Dimensions (p. 784)
• Creating CloudWatch Alarms to Monitor DAX (p. 792)
Determine if any cache misses occurred
Monitor ItemCacheMisses to determine the number of times an item was not found in the cache, and
QueryCacheMisses and ScanCacheMisses to determine the number of times a query or scan result was
not found in the cache.
Monitor cache hit rates
Use CloudWatch Metric Math to define a cache hit rate metric using math expressions. For example,
for the item cache, you can use the expression m1/SUM([m1, m2])*100, where m1 is the
ItemCacheHits metric and m2 is the ItemCacheMisses metric for your cluster. For the query and
scan caches, you can follow the same pattern using the corresponding query and scan cache metrics.
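The following Python (boto3) sketch evaluates the item cache hit rate expression with GetMetricData. The AWS/DAX namespace and ClusterId dimension are the ones used for DAX metrics; the cluster name follows the earlier example.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

def dax_metric(metric_name):
    # Build a MetricStat query for a metric on DAXCluster01.
    return {
        "Metric": {
            "Namespace": "AWS/DAX",
            "MetricName": metric_name,
            "Dimensions": [{"Name": "ClusterId", "Value": "DAXCluster01"}],
        },
        "Period": 60,
        "Stat": "Sum",
    }

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {"Id": "m1", "MetricStat": dax_metric("ItemCacheHits"), "ReturnData": False},
        {"Id": "m2", "MetricStat": dax_metric("ItemCacheMisses"), "ReturnData": False},
        {
            "Id": "hit_rate",
            "Expression": "m1/SUM([m1, m2])*100",
            "Label": "Item cache hit rate (%)",
        },
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
)
print(response["MetricDataResults"][0]["Values"])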
DAX Metrics
The following metrics are available from DAX. DAX sends metrics to CloudWatch only when they have a
non-zero value.
Note
CloudWatch aggregates the following DAX metrics at one-minute intervals:
• CPUUtilization
• NetworkPacketsIn
• NetworkPacketsOut
• GetItemRequestCount
• BatchGetItemRequestCount
• BatchWriteItemRequestCount
• DeleteItemRequestCount
• PutItemRequestCount
• UpdateItemRequestCount
• TransactWriteItemsCount
• TransactGetItemsCount
• ItemCacheHits
• ItemCacheMisses
• QueryCacheHits
• QueryCacheMisses
• ScanCacheHits
• ScanCacheMisses
• TotalRequestCount
• ErrorRequestCount
• FaultRequestCount
• FailedRequestCount
• QueryRequestCount
• ScanRequestCount
• ClientConnections
• EstimatedDbSize
• EvictedSize
Not all statistics, such as Average or Sum, are applicable for every metric. However, all of these values
are available through the DAX console, or by using the CloudWatch console, AWS CLI, or AWS SDKs for
all metrics. In the following table, each metric has a list of valid statistics that are applicable to that
metric.
Metric Description
CPUUtilization The percentage of CPU utilization of the node or cluster.
Units: Percent
Valid Statistics:
• Minimum
• Maximum
• Average
NetworkPacketsIn The number of packets received on all network interfaces by the
node or cluster. This metric identifies the volume of incoming
traffic in terms of the number of packets on a single node or
cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
NetworkPacketsOut The number of packets sent out on all network interfaces by the
node or cluster. This metric identifies the volume of outgoing
traffic in terms of the number of packets on a single node or
cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
(The rows for the request-count metrics, GetItemRequestCount through TransactGetItemsCount in
the list above, did not survive conversion. Each lists Units: Count with the valid statistics
Minimum, Maximum, Average, SampleCount, and Sum.)
ItemCacheHits The number of times an item was returned from the cache by the
node or cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
ItemCacheMisses The number of times an item was not in the node or cluster cache,
and had to be retrieved from DynamoDB.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
QueryCacheHits The number of times a query result was returned from the node or
cluster cache.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
QueryCacheMisses The number of times a query result was not in the node or cluster
cache, and had to be retrieved from DynamoDB.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
ScanCacheHits The number of times a scan result was returned from the node or
cluster cache.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
ScanCacheMisses The number of times a scan result was not in the node or cluster
cache, and had to be retrieved from DynamoDB.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
TotalRequestCount The total number of requests handled by the node or cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
ErrorRequestCount The total number of requests that resulted in a user error reported
by the node or cluster. Requests that are throttled by the node or
cluster are included.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
FaultRequestCount The total number of requests that resulted in an internal error
reported by the node or cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
FailedRequestCount The total number of requests that resulted in a user error or fault
reported by the node or cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
QueryRequestCount The number of Query requests handled by the node or cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
ScanRequestCount The number of Scan requests handled by the node or cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
ClientConnections The number of simultaneous connections made by clients to the
node or cluster.
Units: Count
Valid Statistics:
• Minimum
• Maximum
• Average
• SampleCount
• Sum
EstimatedDbSize An approximation of the amount of data cached in the item cache
and the query cache by the node or cluster.
Units: Bytes
Valid Statistics:
• Minimum
• Maximum
• Average
EvictedSize The amount of data that was evicted by the node or cluster to
make room for newly requested data. If the miss rate goes up,
and you see this metric also growing, it probably means that your
working set has increased. You should consider switching to a
cluster with a larger node type.
Units: Bytes
Valid Statistics:
• Minimum
• Maximum
• Average
• Sum
The metrics for DAX are qualified by the values for the account, cluster ID, or cluster ID and node ID
combination. You can use the CloudWatch console to retrieve DAX data along any of the dimensions in
the following table.
Dimension CloudWatch Metric Namespace Description
Account AWS/DAX Provides aggregated statistics across all DAX nodes in the account.
ClusterId AWS/DAX Limits the data to the specified DAX cluster.
ClusterId, NodeId AWS/DAX Limits the data to the specified node within a DAX cluster.
Creating CloudWatch Alarms to Monitor DAX
You can create a CloudWatch alarm that sends an Amazon SNS message when the alarm changes state.
For example, to be notified if the cache miss rate for your cluster becomes too high:
1. Set up an Amazon SNS topic for the alarm's notifications. For more information, see Set Up Amazon
Simple Notification Service in the Amazon CloudWatch User Guide.
2. Create the alarm.
Note
You can increase or decrease the threshold to one that makes sense for your application. You
can also use CloudWatch Metric Math to define a cache miss rate metric and set an alarm over
that metric.
How Can I Be Notified If Requests Cause Any Internal Error in the Cluster?
1. Set up an Amazon SNS topic for the alarm's notifications. For more information, see Set Up Amazon
Simple Notification Service in the Amazon CloudWatch User Guide.
2. Create the alarm.
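As a sketch of step 2, the following Boto3 call creates an alarm that fires whenever the cluster reports
any internal errors (a nonzero FaultRequestCount) in a one-minute period. The alarm name, cluster
name, and SNS topic ARN are placeholders for your own values.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="DAXInternalErrors",                 # placeholder name
    Namespace="AWS/DAX",
    MetricName="FaultRequestCount",
    Dimensions=[{"Name": "ClusterId", "Value": "my-dax-cluster"}],
    Statistic="Sum",
    Period=60,                                      # one-minute evaluation window
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",      # fire on any fault
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-dax-alerts"],
)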
To learn more about DAX and CloudTrail, see the DynamoDB Accelerator (DAX) section in Logging
DynamoDB Operations by Using AWS CloudTrail (p. 774).
AWS provides a frequently updated list of AWS services in scope of specific compliance programs at AWS
Services in Scope by Compliance Program.
Third-party audit reports are available for you to download using AWS Artifact. For more information,
see Downloading Reports in AWS Artifact.
For more information about AWS compliance programs, see AWS Compliance Programs.
Your compliance responsibility when using DynamoDB is determined by the sensitivity of your data, your
organization’s compliance objectives, and applicable laws and regulations. If your use of DynamoDB is
subject to compliance with standards like HIPAA, PCI, or FedRAMP, AWS provides resources to help:
• Security and Compliance Quick Start Guides – Deployment guides that discuss architectural
considerations and provide steps for deploying security- and compliance-focused baseline
environments on AWS.
• Architecting for HIPAA Security and Compliance Whitepaper – A whitepaper that describes how
companies can use AWS to create HIPAA-compliant applications.
• AWS Compliance Resources – A collection of workbooks and guides that might apply to your industry
and location.
• AWS Config – A service that assesses how well your resource configurations comply with internal
practices, industry guidelines, and regulations.
• AWS Security Hub – A comprehensive view of your security state within AWS that helps you check your
compliance with security industry standards and best practices.
If you need to replicate your data or applications over greater geographic distances, use AWS Local
Regions. An AWS Local Region is a single data center designed to complement an existing AWS Region.
Like all AWS Regions, AWS Local Regions are completely isolated from other AWS Regions.
For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.
In addition to the AWS global infrastructure, Amazon DynamoDB offers several features to help support
your data resiliency and backup needs.
On-demand backup and restore
DynamoDB provides on-demand backup capability. It allows you to create full backups of your tables
for long-term retention and archival. For more information, see On-Demand Backup and Restore for
DynamoDB.
Point-in-time recovery
Point-in-time recovery helps protect your DynamoDB tables from accidental write or delete
operations. With point in time recovery, you don't have to worry about creating, maintaining, or
scheduling on-demand backups. For more information, see Point-in-Time Recovery for DynamoDB.
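For example, assuming a table named Music, you could enable point-in-time recovery and create an
on-demand backup with the AWS SDK for Python (Boto3) as shown in the following sketch. The table
and backup names are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

# Enable point-in-time recovery for the table.
dynamodb.update_continuous_backups(
    TableName="Music",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Create an on-demand backup for long-term retention.
dynamodb.create_backup(TableName="Music", BackupName="Music-backup-001")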
You use AWS published API calls to access DynamoDB through the network. Clients must support TLS
(Transport Layer Security) 1.0. We recommend TLS 1.2 or later. Clients must also support cipher suites
with perfect forward secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Diffie-
Hellman Ephemeral (ECDHE). Most modern systems such as Java 7 and later support these modes.
Additionally, requests must be signed using an access key ID and a secret access key that is associated
with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary
security credentials to sign requests.
You can also use a virtual private cloud (VPC) endpoint for DynamoDB to enable Amazon EC2 instances
in your VPC to use their private IP addresses to access DynamoDB with no exposure to the public
internet. For more information, see Using Amazon VPC Endpoints to Access DynamoDB (p. 795).
To access the public internet, your VPC must have an internet gateway—a virtual router that connects
your VPC to the internet. This allows applications running on Amazon EC2 in your VPC to access internet
resources, such as Amazon DynamoDB.
By default, communications to and from DynamoDB use the HTTPS protocol, which protects network
traffic by using SSL/TLS encryption. The following diagram shows how an EC2 instance in a VPC accesses
DynamoDB:
Many customers have legitimate privacy and security concerns about sending and receiving data across
the public internet. You can address these concerns by using a virtual private network (VPN) to route all
DynamoDB network traffic through your own corporate network infrastructure. However, this approach
can introduce bandwidth and availability challenges.
VPC endpoints for DynamoDB can alleviate these challenges. A VPC endpoint for DynamoDB enables
Amazon EC2 instances in your VPC to use their private IP addresses to access DynamoDB with no
exposure to the public internet. Your EC2 instances do not require public IP addresses, and you don't
need an internet gateway, a NAT device, or a virtual private gateway in your VPC. You use endpoint
policies to control access to DynamoDB. Traffic between your VPC and the AWS service does not leave
the Amazon network.
When you create a VPC endpoint for DynamoDB, any requests to a DynamoDB endpoint within the
Region (for example, dynamodb.us-west-2.amazonaws.com) are routed to a private DynamoDB endpoint
within the Amazon network. You don't need to modify your applications running on EC2 instances in
your VPC. The endpoint name remains the same, but the route to DynamoDB stays entirely within the
Amazon network, and does not access the public internet.
The following diagram shows how an EC2 instance in a VPC can use a VPC endpoint to access
DynamoDB.
For more information, see the section called “Tutorial: Using a VPC Endpoint for DynamoDB” (p. 797).
Topics
• Step 1: Launch an Amazon EC2 Instance (p. 797)
• Step 2: Configure Your Amazon EC2 Instance (p. 799)
• Step 3: Create a VPC Endpoint for DynamoDB (p. 799)
• Step 4: (Optional) Clean Up (p. 801)
Step 1: Launch an Amazon EC2 Instance
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/ and choose Launch Instance.
2. Do the following:
• At the top of the list of AMIs, go to Amazon Linux AMI and choose Select.
• Choose Launch.
3. In the Select an existing key pair or create a new key pair window, do one of the following:
• If you do not have an Amazon EC2 key pair, choose Create a new key pair and follow the
instructions. You will be asked to download a private key file (.pem file); you will need this file later
when you log in to your Amazon EC2 instance.
• If you already have an existing Amazon EC2 key pair, go to Select a key pair and choose your key
pair from the list. You must already have the private key file ( .pem file) available in order to log in
to your Amazon EC2 instance.
4. When you have configured your key pair, choose Launch Instances.
5. Return to the Amazon EC2 console home page and choose the instance that you launched. In
the lower pane, on the Description tab, find the Public DNS for your instance. For example:
ec2-00-00-00-00.us-east-1.compute.amazonaws.com.
Make a note of this public DNS name, because you will need it in the next step in this tutorial (Step
2: Configure Your Amazon EC2 Instance (p. 799)).
Note
It will take a few minutes for your Amazon EC2 instance to become available. Before you go on
to the next step, ensure that the Instance State is running and that all of its Status Checks
have passed.
Step 2: Configure Your Amazon EC2 Instance
1. You will need to authorize inbound SSH traffic to your Amazon EC2 instance. To do this, you will
create a new EC2 security group, and then assign the security group to your EC2 instance.
In the Amazon EC2 console, create the security group with the following settings:
• Security group name—type a name for your security group. For example: my-ssh-access
• Description—type a short description for the security group.
• VPC—choose your default VPC.
• In the Security group rules section, choose Add Rule and do the following:
• Type—choose SSH.
• Source—choose My IP.
Use the ssh command to connect to your Amazon EC2 instance. You will need to specify your private
key file (.pem file) and the public DNS name of your instance. (See Step 1: Launch an Amazon EC2
Instance (p. 797)). For example:

ssh -i my-keypair.pem ec2-user@public-dns-name

After you log in, configure the AWS CLI with your credentials and default Region:

aws configure
Step 3: Create a VPC Endpoint for DynamoDB
1. Before you begin, verify that you can communicate with DynamoDB using its public endpoint:

aws dynamodb list-tables

The output shows a list of the DynamoDB tables that you currently own. (If you don't have any tables,
the list is empty.)
2. Verify that DynamoDB is an available service for creating VPC endpoints in the current AWS Region.
(The command is shown below, followed by example output.)

aws ec2 describe-vpc-endpoint-services
{
"ServiceNames": [
"com.amazonaws.us-east-1.s3",
"com.amazonaws.us-east-1.dynamodb"
]
}
In the example output, DynamoDB is one of the services available, so you can proceed with creating
a VPC endpoint for it.
3. Determine your VPC identifier:

aws ec2 describe-vpcs
{
"Vpcs": [
{
"VpcId": "vpc-0bbc736e",
"InstanceTenancy": "default",
"State": "available",
"DhcpOptionsId": "dopt-8454b7e1",
"CidrBlock": "172.31.0.0/16",
"IsDefault": true
}
]
}
4. Create the VPC endpoint, using the VPC ID and the DynamoDB service name from the previous steps:

aws ec2 create-vpc-endpoint --vpc-id vpc-0bbc736e --service-name com.amazonaws.us-east-1.dynamodb
{
"VpcEndpoint": {
"PolicyDocument": "{\"Version\":\"2008-10-17\",\"Statement\":[{\"Effect\":
\"Allow\",\"Principal\":\"*\",\"Action\":\"*\",\"Resource\":\"*\"}]}",
"VpcId": "vpc-0bbc736e",
"State": "available",
"ServiceName": "com.amazonaws.us-east-1.dynamodb",
"RouteTableIds": [],
"VpcEndpointId": "vpce-9b15e2f2",
"CreationTimestamp": "2017-07-26T22:00:14Z"
}
}
5. Verify that you can access DynamoDB through the VPC endpoint:

aws dynamodb list-tables
If you want, you can try some other AWS CLI commands for DynamoDB. For more information, see
the AWS CLI Command Reference.
Step 4: (Optional) Clean Up
If you want to delete the VPC endpoint, first determine its endpoint ID:

aws ec2 describe-vpc-endpoints
{
"VpcEndpoint": {
"PolicyDocument": "{\"Version\":\"2008-10-17\",\"Statement\":[{\"Effect\":
\"Allow\",\"Principal\":\"*\",\"Action\":\"*\",\"Resource\":\"*\"}]}",
"VpcId": "vpc-0bbc736e",
"State": "available",
"ServiceName": "com.amazonaws.us-east-1.dynamodb",
"RouteTableIds": [],
"VpcEndpointId": "vpce-9b15e2f2",
"CreationTimestamp": "2017-07-26T22:00:14Z"
}
}
Delete the VPC endpoint, specifying its endpoint ID:

aws ec2 delete-vpc-endpoints --vpc-endpoint-ids vpce-9b15e2f2

If the deletion succeeds, the Unsuccessful list in the output is empty:
{
"Unsuccessful": []
}
The following security best practices also address configuration and vulnerability analysis in Amazon
DynamoDB:
Topics
• DynamoDB Preventative Security Best Practices (p. 802)
• DynamoDB Detective Security Best Practices (p. 804)
Encryption at rest
DynamoDB encrypts all user data at rest that is stored in tables, indexes, streams, and backups, using
encryption keys stored in AWS Key Management Service (AWS KMS). This provides an additional layer
of data protection by securing your data from unauthorized access to the underlying storage.
You can specify whether DynamoDB should use an AWS owned CMK (default encryption type) or
an AWS managed CMK to encrypt the user data. For more information, see Amazon DynamoDB
Encryption at Rest.
Use IAM roles to authenticate access to DynamoDB
For users, applications, and other AWS services to access DynamoDB, they must include valid
AWS credentials in their AWS API requests. You should not store AWS credentials directly in the
application or EC2 instance. These are long-term credentials that are not automatically rotated, and
therefore could have significant business impact if they are compromised. An IAM role enables you
to obtain temporary access keys that can be used to access AWS services and resources.
Use IAM policies for DynamoDB base authorization
When granting permissions, you decide who is getting them, which DynamoDB APIs they are getting
permissions for, and the specific actions you want to allow on those resources. Implementing least
privilege is key in reducing security risk and the impact that can result from errors or malicious
intent.
Attach permissions policies to IAM identities (that is, users, groups, and roles) and thereby grant
permissions to perform operations on DynamoDB resources.
Use IAM policy conditions for fine-grained access control
When you grant permissions in DynamoDB, you can specify conditions that determine how a
permissions policy takes effect. Implementing least privilege is key in reducing security risk and the
impact that can result from errors or malicious intent.
You can specify conditions when granting permissions using an IAM policy. For example, you can do
the following:
• Grant permissions to allow users read-only access to certain items and attributes in a table or a
secondary index.
• Grant permissions to allow users write-only access to certain attributes in a table, based upon the
identity of that user.
For more information, see Using IAM Policy Conditions for Fine-Grained Access Control.
Use a VPC endpoint and policies to access DynamoDB
If you only require access to DynamoDB from within a virtual private cloud (VPC), you should use
a VPC endpoint to limit access from only the required VPC. Doing this prevents that traffic from
traversing the open internet and being subject to that environment.
Using a VPC endpoint for DynamoDB allows you to control and limit access using the following:
• VPC endpoint policies – These policies are applied on the DynamoDB VPC endpoint. They allow
you to control and limit API access to the DynamoDB table.
• IAM policies – By using the aws:sourceVpce condition on policies attached to IAM users, groups,
or roles, you can enforce that all access to the DynamoDB table is via the specified VPC endpoint.
Consider client-side encryption
If you store sensitive or confidential data in DynamoDB, you might want to encrypt that data as
close as possible to its origin so that your data is protected throughout its lifecycle. Encrypting your
sensitive data in transit and at rest helps ensure that your plaintext data isn’t available to any third
party.
The Amazon DynamoDB Encryption Client is a software library that helps you protect your table
data before you send it to DynamoDB.
At the core of the DynamoDB Encryption Client is an item encryptor that encrypts, signs, verifies,
and decrypts table items. It takes in information about your table items and instructions about
which items to encrypt and sign. It gets the encryption materials and instructions on how to use
them from a cryptographic material provider that you select and configure.
Use AWS CloudTrail to monitor AWS managed CMK usage
If you are using an AWS managed customer master key (CMK) for encryption at rest, usage of this
key is logged into AWS CloudTrail. CloudTrail provides visibility into user activity by recording actions
taken on your account. CloudTrail records important information about each action, including who
made the request, the services used, the actions performed, parameters for the actions, and the
response elements returned by the AWS service. This information helps you track changes made
to your AWS resources and troubleshoot operational issues. CloudTrail makes it easier to ensure
compliance with internal policies and regulatory standards.
You can use CloudTrail to audit key usage. CloudTrail creates log files that contain a history of AWS
API calls and related events for your account. These log files include all AWS KMS API requests made
using the AWS Management Console, AWS SDKs, and command line tools, in addition to those made
through integrated AWS services. You can use these log files to get information about when the
CMK was used, the operation that was requested, the identity of the requester, the IP address that
the request came from, and so on. For more information, see Logging AWS KMS API Calls with AWS
CloudTrail and the AWS CloudTrail User Guide.
Use CloudTrail to monitor DynamoDB control-plane operations
CloudTrail provides visibility into user activity by recording actions taken on your account. CloudTrail
records important information about each action, including who made the request, the services
used, the actions performed, parameters for the actions, and the response elements returned by
the AWS service. This information helps you to track changes made to your AWS resources and
to troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal
policies and regulatory standards.
Control-plane operations let you create and manage DynamoDB tables. They also let you work with
indexes, streams, and other objects that are dependent on tables.
When activity occurs in DynamoDB, that activity is recorded in a CloudTrail event along with other
AWS service events in the event history. For more information, see Logging DynamoDB Operations
by Using AWS CloudTrail. You can view, search, and download recent events in your AWS account.
For more information, see Viewing Events with CloudTrail Event History in the AWS CloudTrail User
Guide.
For an ongoing record of events in your AWS account, including events for DynamoDB, create a trail.
A trail enables CloudTrail to deliver log files to an Amazon Simple Storage Service (Amazon S3)
bucket. By default, when you create a trail on the console, the trail applies to all AWS Regions. The
trail logs events from all Regions in the AWS partition and delivers the log files to the S3 bucket that
you specify. Additionally, you can configure other AWS services to further analyze and act upon the
event data collected in CloudTrail logs.
Consider using DynamoDB Streams to monitor modify/update data-plane operations
AWS CloudTrail does not support logging of DynamoDB data-plane operations, such as GetItem
and PutItem. So you might want to consider using Amazon DynamoDB Streams as a source for
these events occurring in your environment.
DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that
automatically respond to events in DynamoDB Streams. With triggers, you can build applications
that react to data modifications in DynamoDB tables.
If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name
(ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a
new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda
function synchronously when it detects new stream records. The Lambda function can perform any
actions that you specify, such as sending a notification or initiating a workflow.
For an example, see Tutorial: Using AWS Lambda with Amazon DynamoDB Streams. This example
receives a DynamoDB event input, processes the messages that it contains, and writes some of the
incoming event data to Amazon CloudWatch Logs.
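The following is a minimal sketch of such a Lambda function in Python. The record fields shown
(eventName and dynamodb.Keys) are part of the standard DynamoDB Streams event format; the
response action itself is a placeholder for your own logic.

def lambda_handler(event, context):
    # Each invocation receives a batch of stream records.
    for record in event["Records"]:
        event_name = record["eventName"]   # INSERT, MODIFY, or REMOVE
        keys = record["dynamodb"]["Keys"]  # primary key of the modified item
        print(f"{event_name}: {keys}")
        # Placeholder: send a notification or start a workflow here.
    return "OK"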
Monitor DynamoDB configuration with AWS Config
Using AWS Config, you can continuously monitor and record configuration changes of your AWS
resources. AWS Config also enables you to inventory your AWS resources. When a change from a
previous state is detected, an Amazon Simple Notification Service (Amazon SNS) notification can be
delivered for you to review and take action. Follow the guidance in Setting Up AWS Config with the
Console, ensuring that DynamoDB resource types are included.
You can configure AWS Config to stream configuration changes and notifications to an Amazon SNS
topic. For example, when a resource is updated, you can get a notification sent to your email, so
that you can view the changes. You can also be notified when AWS Config evaluates your custom or
managed rules against your resources.
For an example, see Notifications that AWS Config Sends to an Amazon SNS topic in the AWS Config
Developer Guide.
Monitor DynamoDB compliance with AWS Config rules
AWS Config continuously tracks the configuration changes that occur among your resources. It
checks whether these changes violate any of the conditions in your rules. If a resource violates a rule,
AWS Config flags the resource and the rule as noncompliant.
By using AWS Config to evaluate your resource configurations, you can assess how well your resource
configurations comply with internal practices, industry guidelines, and regulations. AWS Config
provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to
evaluate whether your AWS resources comply with common best practices.
Tag your DynamoDB resources for identification and automation
You can assign metadata to your AWS resources in the form of tags. Each tag is a simple label
consisting of a customer-defined key and an optional value that can make it easier to manage,
search for, and filter resources.
Tagging allows for grouped controls to be implemented. Although there are no inherent types of
tags, they enable you to categorize resources by purpose, owner, environment, or other criteria. The
following are some examples:
• Security – Used to determine requirements such as encryption.
• Confidentiality – An identifier for the specific data-confidentiality level a resource supports.
• Environment – Used to distinguish between development, test, and production infrastructure.
For more information, see AWS Tagging Strategies and Tagging for DynamoDB.
Contents
• NoSQL Design for DynamoDB (p. 807)
• Differences Between Relational Data Design and NoSQL (p. 807)
• Two Key Concepts for NoSQL Design (p. 807)
• Approaching NoSQL Design (p. 808)
• Best Practices for Designing and Using Partition Keys Effectively (p. 809)
• Using Burst Capacity Effectively (p. 809)
• Understanding DynamoDB Adaptive Capacity (p. 809)
• Designing Partition Keys to Distribute Your Workload Evenly (p. 810)
• Using Write Sharding to Distribute Workloads Evenly (p. 811)
• Sharding Using Random Suffixes (p. 811)
• Sharding Using Calculated Suffixes (p. 811)
• Distributing Write Activity Efficiently During Data Upload (p. 812)
• Best Practices for Using Sort Keys to Organize Data (p. 813)
• Using Sort Keys for Version Control (p. 813)
• Best Practices for Using Secondary Indexes in DynamoDB (p. 814)
• General Guidelines for Secondary Indexes in DynamoDB (p. 815)
• Use Indexes Efficiently (p. 815)
• Choose Projections Carefully (p. 815)
• Optimize Frequent Queries to Avoid Fetches (p. 816)
• Be Aware of Item-Collection Size Limits When Creating Local Secondary
Indexes (p. 816)
• Take Advantage of Sparse Indexes (p. 816)
• Examples of Sparse Indexes in DynamoDB (p. 817)
• Using Global Secondary Indexes for Materialized Aggregation Queries (p. 818)
• Overloading Global Secondary Indexes (p. 819)
• Using Global Secondary Index Write Sharding for Selective Table Queries (p. 821)
• Using Global Secondary Indexes to Create an Eventually Consistent Replica (p. 822)
• Best Practices for Storing Large Items and Attributes (p. 822)
• Compressing Large Attribute Values (p. 822)
• Storing Large Attribute Values in Amazon S3 (p. 823)
• Best Practices for Handling Time-Series Data in DynamoDB (p. 823)
• Design Pattern for Time-Series Data (p. 823)
• Time-Series Table Examples (p. 824)
• Best Practices for Managing Many-to-Many Relationships (p. 825)
• Adjacency List Design Pattern (p. 826)
• Materialized Graph Pattern (p. 827)
• Best Practices for Implementing a Hybrid Database System (p. 833)
NoSQL Design for DynamoDB
Topics
• Differences Between Relational Data Design and NoSQL (p. 807)
• Two Key Concepts for NoSQL Design (p. 807)
• Approaching NoSQL Design (p. 808)
Relational database management systems (RDBMS) and NoSQL databases such as DynamoDB have
different strengths, and those differences start with how data is queried:
• In RDBMS, data can be queried flexibly, but queries are relatively expensive and don't scale well in
high-traffic situations (see First Steps for Modeling Relational Data in DynamoDB (p. 837)).
• In a NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways,
outside of which queries can be expensive and slow.
These differences make database design very different between the two systems:
• In RDBMS, you design for flexibility without worrying about implementation details or performance.
Query optimization generally doesn't affect schema design, but normalization is very important.
• In DynamoDB, you design your schema specifically to make the most common and important queries
as fast and as inexpensive as possible. Your data structures are tailored to the specific requirements of
your business use cases.
• For DynamoDB, by contrast, you shouldn't start designing your schema until you know the
questions it will need to answer. Understanding the business problems and the application use
cases up front is essential.
• You should maintain as few tables as possible in a DynamoDB application. Most well designed
applications require only one table.
It is important to understand three fundamental properties of your application's access patterns before
you begin the design:
• Data size: Knowing how much data will be stored and requested at one time will help determine the
most effective way to partition the data.
• Data shape: Instead of reshaping data when a query is processed (as an RDBMS system does), a NoSQL
database organizes data so that its shape in the database corresponds with what will be queried. This
is a key factor in increasing speed and scalability.
• Data velocity: DynamoDB scales by increasing the number of physical partitions that are available to
process queries, and by efficiently distributing data across those partitions. Knowing in advance what
the peak query loads might be will help determine how to partition data to best use I/O capacity.
After you identify specific query requirements, you can organize data according to general principles that
govern performance:
• Keep related data together. Research on routing-table optimization 20 years ago found that "locality
of reference" was the single most important factor in speeding up response time: keeping related
data together in one place. This is equally true in NoSQL systems today, where keeping related data
in close proximity has a major impact on cost and performance. Instead of distributing related data
items across multiple tables, you should keep related items in your NoSQL system as close together as
possible.
As a general rule, you should maintain as few tables as possible in a DynamoDB application. As
emphasized earlier, most well designed applications require only one table, unless there is a specific
reason for using multiple tables.
Exceptions are cases where high-volume time series data are involved, or datasets that have very
different access patterns—but these are exceptions. A single table with inverted indexes can usually
enable simple queries to create and retrieve the complex hierarchical data structures required by your
application.
• Use sort order. Related items can be grouped together and queried efficiently if their key design
causes them to sort together. This is an important NoSQL design strategy.
• Distribute queries. It is also important that a high volume of queries not be focused on one part of
the database, where they can exceed I/O capacity. Instead, you should design data keys to distribute
traffic evenly across partitions as much as possible, avoiding "hot spots."
• Use global secondary indexes. By creating specific global secondary indexes, you can enable
different queries than your main table can support, and that are still fast and relatively inexpensive.
These general principles translate into some common design patterns that you can use to model data
efficiently in DynamoDB.
Generally speaking, you should design your application for uniform activity across all logical partition
keys in the table and its secondary indexes. You can determine the access patterns that your application
requires, and estimate the total read capacity units and write capacity units that each table and
secondary index requires.
As traffic starts to flow, DynamoDB automatically supports your access patterns using the throughput
you have provisioned, as long as the traffic against a given partition key does not exceed 3000 read
capacity units or 1000 write capacity units.
Topics
• Using Burst Capacity Effectively (p. 809)
• Understanding DynamoDB Adaptive Capacity (p. 809)
• Designing Partition Keys to Distribute Your Workload Evenly (p. 810)
• Using Write Sharding to Distribute Workloads Evenly (p. 811)
• Distributing Write Activity Efficiently During Data Upload (p. 812)
DynamoDB currently retains up to five minutes (300 seconds) of unused read and write capacity. During
an occasional burst of read or write activity, these extra capacity units can be consumed quickly—even
faster than the per-second provisioned throughput capacity that you've defined for your table.
DynamoDB can also consume burst capacity for background maintenance and other tasks without prior
notice.
Note that these details of burst capacity might change in the future.
To better accommodate uneven access patterns, DynamoDB adaptive capacity enables your application
to continue reading and writing to hot partitions without being throttled, provided that traffic does not
exceed your table’s total provisioned capacity or the partition maximum capacity. Adaptive capacity
works by automatically and instantly increasing throughput capacity for partitions that receive more
traffic.
Adaptive capacity is enabled automatically for every DynamoDB table, so you don't need to explicitly
enable or disable it.
The following diagram illustrates how adaptive capacity works. The example table is provisioned with
400 write-capacity units (WCUs) evenly shared across four partitions, allowing each partition to sustain
up to 100 WCUs per second. Partitions 1, 2, and 3 each receive write traffic of 50 WCU/sec. Partition 4
receives 150 WCU/sec. This hot partition can accept write traffic while it still has unused burst capacity,
but eventually it will throttle traffic that exceeds 100 WCU/sec.
DynamoDB adaptive capacity responds by increasing partition 4's capacity so that it can sustain the
higher workload of 150 WCU/sec without being throttled.
The optimal usage of a table's provisioned throughput depends not only on the workload patterns
of individual items, but also on the partition-key design. This doesn't mean that you must access all
partition key values to achieve an efficient throughput level, or even that the percentage of accessed
partition key values must be high. It does mean that the more distinct partition key values that your
workload accesses, the more those requests will be spread across the partitioned space. In general, you
will use your provisioned throughput more efficiently as the ratio of partition key values accessed to the
total number of partition key values increases.
Here is a comparison of the provisioned throughput efficiency of some common partition key schemas:
• User ID, where the application has many users: good
• Status code, where there are only a few possible status codes: bad
• Item creation date, rounded to the nearest time period (for example, day, hour, or minute): bad
• Device ID, where each device accesses data at relatively similar intervals: good
• Device ID, where even if there are many devices being tracked, one is by far more popular than all
the others: bad
If a single table has only a small number of partition key values, consider distributing your write
operations across more distinct partition key values. In other words, structure the primary key elements
to avoid one "hot" (heavily requested) partition key value that slows overall performance.
For example, consider a table with a composite primary key. The partition key represents the item's
creation date, rounded to the nearest day. The sort key is an item identifier. On a given day, say
2014-07-09, all of the new items are written to that single partition-key value (and corresponding
physical partition).
If the table fits entirely into a single partition (considering growth of your data over time), and if your
application's read and write throughput requirements don't exceed the read and write capabilities of a
single partition, your application won't encounter any unexpected throttling as a result of partitioning.
However, if you anticipate your table scaling beyond a single partition, you should architect your
application so that it can use more of the table's full provisioned throughput.
For example, in the case of a partition key that represents today's date, you might choose a random
number between 1 and 200 and concatenate it as a suffix to the date. This yields partition key values like
2014-07-09.1, 2014-07-09.2, and so on, through 2014-07-09.200. Because you are randomizing
the partition key, the writes to the table on each day are spread evenly across multiple partitions. This
results in better parallelism and higher overall throughput.
However, to read all the items for a given day, you would have to query the items for all the suffixes and
then merge the results. For example, you would first issue a Query request for the partition key value
2014-07-09.1, then another Query for 2014-07-09.2, and so on, through 2014-07-09.200. Finally,
your application would have to merge the results from all those Query requests.
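A minimal sketch of this write pattern with the AWS SDK for Python (Boto3) follows. The table and
attribute names are hypothetical.

import random

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("DailyEvents")  # hypothetical table name

# Append a random suffix (1-200) so that writes for the same day are
# spread across many partition key values.
suffix = random.randint(1, 200)
table.put_item(
    Item={
        "Date": f"2014-07-09.{suffix}",  # partition key
        "ItemId": "item-0001",           # sort key
    }
)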
Consider the previous example, in which a table uses today's date in the partition key. Now suppose that
each item has an accessible OrderId attribute, and that you most often need to find items by order ID
in addition to date. Before your application writes the item to the table, it could calculate a hash suffix
based on the order ID and append it to the partition key date. The calculation might generate a number
between 1 and 200 that is fairly evenly distributed, similar to what the random strategy produces.
A simple calculation would likely suffice, such as the product of the UTF-8 code point values for the
characters in the order ID, modulo 200, plus 1. The partition key value would then be the date
concatenated with the calculation result.
With this strategy, the writes are spread evenly across the partition-key values, and thus across the
physical partitions. You can easily perform a GetItem operation for a particular item and date because
you can calculate the partition-key value for a specific OrderId value.
To read all the items for a given day, you still must Query each of the 2014-07-09.N keys (where N is 1–
200), and your application then has to merge all the results. The benefit is that you avoid having a single
"hot" partition key value taking all of the workload.
Note
For an even more efficient strategy specifically designed to handle high-volume time-series
data, see Time-Series Data (p. 823).
For example, suppose that you want to upload user messages to a DynamoDB table that uses a
composite primary key with UserID as the partition key and MessageID as the sort key.
When you upload the data, one approach you can take is to upload all message items for each user, one
user after another:
UserID MessageID
U1 1
U1 2
U1 ...
U1 ... up to 100
U2 1
U2 2
U2 ...
U2 ... up to 200
The problem in this case is that you are not distributing your write requests to DynamoDB across your
partition key values. You are taking one partition key value at a time and uploading all of its items before
going to the next partition key value and doing the same.
Behind the scenes, DynamoDB is partitioning the data in your table across multiple servers. To fully use
all the throughput capacity that is provisioned for the table, you must distribute your workload across
your partition key values. By directing an uneven amount of upload work toward items that all have the
same partition key value, you are not fully using all the resources that DynamoDB has provisioned for
your table.
You can distribute your upload work by using the sort key to load one item from each partition key value,
then another item from each partition key value, and so on:
UserID MessageID
U1 1
U2 1
U3 1
... ...
U1 2
U2 2
U3 2
... ...
Every upload in this sequence uses a different partition key value, keeping more DynamoDB servers busy
simultaneously and improving your throughput performance.
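As a sketch, the following Boto3 loop interleaves the uploads in the order shown in the second table.
The table name and per-user message counts are placeholders.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Messages")  # hypothetical table name

users = {"U1": 100, "U2": 200, "U3": 150}  # messages per user (placeholders)

# Write message 1 for every user, then message 2 for every user, and so
# on, so that consecutive writes go to different partition key values.
with table.batch_writer() as batch:
    for message_id in range(1, max(users.values()) + 1):
        for user_id, count in users.items():
            if message_id <= count:
                batch.put_item(Item={"UserID": user_id, "MessageID": message_id})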
Well-designed sort keys have two key benefits:
• They gather related information together in one place where it can be queried efficiently. Careful
design of the sort key lets you retrieve commonly needed groups of related items using range queries
with operators such as begins_with, between, >, <, and so on.
• Composite sort keys let you define hierarchical (one-to-many) relationships in your data that you can
query at any level of the hierarchy.
For example, in a table listing geographical locations, you might structure the sort key as follows:
[country]#[region]#[state]#[county]#[city]#[neighborhood]
This would let you make efficient range queries for a list of locations at any one of these levels of
aggregation, from country all the way down to a neighborhood, and everything in between.
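For example, with Boto3 you could retrieve every location in a particular state with a single Query. The
table, key, and value names in the following sketch are hypothetical.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Locations")  # hypothetical table name

# All locations within the state of WA, at any lower level of the
# hierarchy (county, city, neighborhood).
response = table.query(
    KeyConditionExpression=Key("PK").eq("Location")
    & Key("SK").begins_with("USA#PacificNW#WA#")
)
for item in response["Items"]:
    print(item)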
To implement version control, you can use a design pattern like the following:
• For each new item, create two copies of the item: One copy should have a version-number prefix of
zero (such as v0_) at the beginning of the sort key, and one should have a version-number prefix of
one (such as v001_).
• Every time the item is updated, use the next higher version-prefix in the sort key of the updated
version, and copy the updated contents into the item with version-prefix zero. This means that the
latest version of any item can be located easily using the zero prefix.
For example, a parts manufacturer might use a schema like the one illustrated below.
The Equipment_1 item goes through a sequence of audits by various auditors. The results of each new
audit are captured in a new item in the table, starting with version number one, and then incrementing
the number for each successive revision.
When each new revision is added, the application layer replaces the contents of the zero-version item
(having sort-key equal to v0_Audit) with the contents of the new revision.
Whenever the application needs to retrieve the most recent audit status, it can query for the sort-key
prefix of v0_.
If the application needs to retrieve the entire revision history, it can query all the items under the item's
partition key and filter out the v0_ item.
This design also works for audits across multiple parts of a piece of equipment, if you include the
individual part-IDs in the sort key after the sort key prefix.
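A sketch of the two read patterns in Boto3 follows, with hypothetical table and key names.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("EquipmentAudits")  # hypothetical table name

# Latest status: the single item whose sort key begins with v0_.
latest = table.query(
    KeyConditionExpression=Key("Equipment").eq("Equipment_1")
    & Key("AuditVersion").begins_with("v0_")
)["Items"]

# Full revision history: every item under the partition key except v0_.
all_items = table.query(
    KeyConditionExpression=Key("Equipment").eq("Equipment_1")
)["Items"]
history = [i for i in all_items if not i["AuditVersion"].startswith("v0_")]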
Contents
• General Guidelines for Secondary Indexes in DynamoDB (p. 815)
• Use Indexes Efficiently (p. 815)
• Choose Projections Carefully (p. 815)
• Optimize Frequent Queries to Avoid Fetches (p. 816)
• Be Aware of Item-Collection Size Limits When Creating Local Secondary Indexes (p. 816)
• Take Advantage of Sparse Indexes (p. 816)
• Examples of Sparse Indexes in DynamoDB (p. 817)
• Using Global Secondary Indexes for Materialized Aggregation Queries (p. 818)
• Overloading Global Secondary Indexes (p. 819)
• Using Global Secondary Index Write Sharding for Selective Table Queries (p. 821)
• Using Global Secondary Indexes to Create an Eventually Consistent Replica (p. 822)
• Global secondary index—An index with a partition key and a sort key that can be different from
those on the base table. A global secondary index is considered "global" because queries on the
index can span all of the data in the base table, across all partitions. A global secondary index has no
size limitations and has its own provisioned throughput settings for read and write activity that are
separate from those of the table.
• Local secondary index—An index that has the same partition key as the base table, but a different
sort key. A local secondary index is "local" in the sense that every partition of a local secondary index
is scoped to a base table partition that has the same partition key value. As a result, the total size of
indexed items for any one partition key value can't exceed 10 GB. Also, a local secondary index shares
provisioned throughput settings for read and write activity with the table it is indexing.
Each table in DynamoDB is limited to 20 global secondary indexes (default limit) and 5 local secondary
indexes.
For more information about the differences between global secondary indexes and local secondary
indexes, see Improving Data Access with Secondary Indexes (p. 493).
In general, you should use global secondary indexes rather than local secondary indexes. The exception is
when you need strong consistency in your query results, which a local secondary index can provide but a
global secondary index cannot (global secondary index queries only support eventual consistency).
The following are some general principles and design patterns to keep in mind when creating indexes in
DynamoDB:
Topics
• Use Indexes Efficiently (p. 815)
• Choose Projections Carefully (p. 815)
• Optimize Frequent Queries to Avoid Fetches (p. 816)
• Be Aware of Item-Collection Size Limits When Creating Local Secondary Indexes (p. 816)
If you expect a lot of write activity on a table compared to reads, follow these best practices:
• Consider projecting fewer attributes to minimize the size of items written to the index. However, this
only applies if the size of projected attributes would otherwise be larger than a single write capacity
unit (1 KB). For example, if the size of an index entry is only 200 bytes, DynamoDB rounds this up to
1 KB. In other words, as long as the index items are small, you can project more attributes at no extra
cost.
• Avoid projecting attributes that you know will rarely be needed in queries. Every time you update an
attribute that is projected in an index, you incur the extra cost of updating the index as well. You can
still retrieve non-projected attributes in a Query at a higher provisioned throughput cost, but the
query cost may be significantly lower than the cost of updating the index frequently.
• Specify ALL only if you want your queries to return the entire table item sorted by a different sort key.
Projecting all attributes eliminates the need for table fetches, but in most cases, it doubles your costs
for storage and write activity.
Balance the need to keep your indexes as small as possible against the need to keep fetches to a
minimum, as explained in the next section.
Keep in mind that "occasional" queries can often turn into "essential" queries. If there are attributes that
you don't intend to project because you anticipate querying them only occasionally, consider whether
circumstances might change and you might regret not projecting those attributes after all.
For more information about table fetches, see Provisioned Throughput Considerations for Local
Secondary Indexes (p. 537).
When you add or update a table item, DynamoDB updates all local secondary indexes that are affected.
If the indexed attributes are defined in the table, the local secondary indexes grow too.
When you create a local secondary index, think about how much data will be written to it, and how
many of those data items will have the same partition key value. If you expect that the sum of table and
index items for a particular partition key value might exceed 10 GB, consider whether you should avoid
creating the index.
If you can't avoid creating the local secondary index, you must anticipate the item collection size limit
and take action before you exceed it. For strategies on working within the limit and taking corrective
action, see Item Collection Size Limit (p. 541).
Sparse indexes are useful for queries over a small subsection of a table. For example, suppose that you
have a table where you store all your customer orders, with the following key attributes:
• Partition key: CustomerId
• Sort key: OrderId
To track open orders, you can insert a Boolean attribute named isOpen in order items that have not
already shipped. Then when the order ships, you can delete the attribute. If you then create an index on
CustomerId (partition key) and isOpen (sort key), only those orders with isOpen defined appear in it.
When you have thousands of orders of which only a small number are open, it's faster and less expensive
to query that index for open orders than to scan the entire table.
Instead of using a Boolean type of attribute like isOpen, you could use an attribute with a value that
results in a useful sort order in the index. For example, you could use an OrderOpenDate attribute set
to the date on which each order was placed, and then delete it after the order is fulfilled. That way, when
you query the sparse index, the items are returned sorted by the date on which each order was placed.
By designing a global secondary index to be sparse, you can provision it with lower write throughput
than that of the parent table, while still achieving excellent performance.
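A sketch of maintaining the isOpen attribute with Boto3 follows; the table and key names are
hypothetical.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("CustomerOrders")  # hypothetical table name

# While the order is open, it carries the indexed attribute and
# therefore appears in the sparse index.
table.put_item(
    Item={"CustomerId": "C1", "OrderId": "O-1001", "isOpen": "Y"}
)

# When the order ships, remove the attribute; the item drops out of
# the sparse index automatically.
table.update_item(
    Key={"CustomerId": "C1", "OrderId": "O-1001"},
    UpdateExpression="REMOVE isOpen",
)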
For example, a gaming application might track all scores of every user, but generally only needs to query
a few high scores. The following design handles this scenario efficiently:
Here, Rick has played three games and achieved Champ status in one of them. Padma has played four
games and achieved Champ status in two of them. Notice that the Award attribute is present only in
items where the user achieved an award. The associated global secondary index looks like the following:
The global secondary index contains only the high scores that are frequently queried, which are a small
subset of the items in the parent table.
The table in this example stores songs with the songID as the partition key. You can enable Amazon
DynamoDB Streams on this table and attach a Lambda function to the streams so that as each
song is downloaded, an entry is added to the table with Partition-Key=SongID and Sort-
Key=DownloadID. As these updates are made, they trigger a Lambda function in DynamoDB Streams.
The Lambda function can aggregate and group the downloads by songID and update the top-level item,
Partition-Key=songID, and Sort-Key=Month.
To read the updates in near real time, with single-digit millisecond latency, use the global secondary
index with query conditions Month=2018-01, ScanIndexForward=False, Limit=1.
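Expressed with Boto3, the read might look like the following sketch; the table name and global
secondary index name are hypothetical.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Songs")  # hypothetical table name

response = table.query(
    IndexName="Month-Downloads-index",  # hypothetical GSI name
    KeyConditionExpression=Key("Month").eq("2018-01"),
    ScanIndexForward=False,  # descending order on the index sort key
    Limit=1,                 # only the top entry
)
top_song = response["Items"][0] if response["Items"] else None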
Another key optimization used here is that the global secondary index is a sparse index and is available
only on the items that need to be queried to retrieve the data in real time. The global secondary index
can serve additional workflows that need information on the top 10 songs that were popular, or any
song downloaded in that month.
Consider the following example of a DynamoDB table layout that saves a variety of different kinds of
data:
The Data attribute, which is common to all the items, has different content depending on its parent
item. If you create a global secondary index for the table that uses the table's sort key as its partition key
and the Data attribute as its sort key, you can make a variety of different queries using that single global
secondary index. These queries might include the following:
• Look up an employee by name in the global secondary index, by searching on the Employee_Name
attribute value.
• Use the global secondary index to find all employees working in a particular warehouse by searching
on a warehouse ID (such as Warehouse_01).
• Get a list of recent hires, querying the global secondary index on HR_confidential as a partition-key
value and Data as the sort key value.
To enable selective queries across the entire key space, you can use write sharding by adding an attribute
containing a (0-N) value to every item that you will use for the global secondary index partition key.
Using this schema design, the event items are distributed across 0-N partitions on the GSI, allowing a
scatter read using a sort condition on the composite key to retrieve all items with a given state during a
specified time period.
This schema pattern delivers a highly selective result set at minimal cost, without requiring a table scan.
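The following Boto3 sketch issues the scatter read: one Query per shard, with the results merged on the
client. The table name, index name, attribute names, and shard count are hypothetical.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("EventStates")  # hypothetical table name

SHARD_COUNT = 10  # the 0-N range written into each item's ShardId attribute

items = []
for shard in range(SHARD_COUNT):
    response = table.query(
        IndexName="ShardId-StateDate-index",  # hypothetical GSI name
        KeyConditionExpression=Key("ShardId").eq(shard)
        & Key("StateDate").between("PENDING#2018-01-01", "PENDING#2018-01-31"),
    )
    items.extend(response["Items"])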
• Set different provisioned read capacity for different readers. For example, suppose that you have
two applications: One application handles high-priority queries and needs the highest levels of read
performance, whereas the other handles low-priority queries that can tolerate throttling of read
activity.
If both of these applications read from the same table, a heavy read load from the low-priority
application could consume all the available read capacity for the table. This would throttle the high-
priority application's read activity.
Instead, you can create a replica through a global secondary index whose read capacity you can set
separate from that of the table itself. You can then have your low-priority app query the replica
instead of the table.
• Eliminate reads from a table entirely. For example, you might have an application that captures a
high volume of clickstream activity from a website, and you don't want to risk having reads interfere
with that. You can isolate this table and prevent reads by other applications (see Using IAM Policy
Conditions for Fine-Grained Access Control (p. 730)), while letting other applications read a replica
created using a global secondary index.
To create a replica, set up a global secondary index that has the same key schema as the parent table,
with some or all of the non-key attributes projected into it. In applications, you can direct some or all
read activity to this global secondary index rather than to the parent table. You can then adjust the
provisioned read capacity of the global secondary index to handle those reads without changing the
parent table's provisioned read capacity.
There is always a short propagation delay between a write to the parent table and the time when the
written data appears in the index. In other words, your applications should take into account that the
global secondary index replica is only eventually consistent with the parent table.
You can create multiple global secondary index replicas to support different read patterns. When you
create the replicas, project only the attributes that each read pattern actually requires. An application
can then consume less provisioned read capacity to obtain only the data it needs rather than having to
read the item from the parent table. This optimization can result in significant cost savings over time.
For example, the Reply table in the Creating Tables and Loading Sample Data (p. 323) section stores
messages written by forum users. These user replies might consist of long strings of text, which makes
them excellent candidates for compression.
For sample code that demonstrates how to compress such messages in DynamoDB, see:
• Example: Handling Binary Type Attributes Using the AWS SDK for Java Document API (p. 429)
• Example: Handling Binary Type Attributes Using the AWS SDK for .NET Low-Level API (p. 452)
You can also use the object metadata support in Amazon S3 to provide a link back to the parent item in
DynamoDB. Store the primary key value of the item as Amazon S3 metadata of the object in Amazon S3.
Doing this often helps with maintenance of the Amazon S3 objects.
For example, consider the ProductCatalog table in the Creating Tables and Loading Sample
Data (p. 323) section. Items in this table store information about item price, description, book authors,
and dimensions for other products. If you wanted to store an image of each product that was too large
to fit in an item, you could store the images in Amazon S3 instead of in DynamoDB.
Keep the following in mind if you use Amazon S3 to store large attribute values:
• DynamoDB doesn't support transactions that cross Amazon S3 and DynamoDB. Therefore, your
application must deal with any failures, which could include cleaning up orphaned Amazon S3 objects.
• Amazon S3 limits the length of object identifiers. So you must organize your data in a way that doesn't
generate excessively long object identifiers or violate other Amazon S3 constraints.
For more information about how to use Amazon S3, see the Amazon Simple Storage Service Developer
Guide.
The following design pattern often handles this kind of scenario effectively:
• Create one table per period, provisioned with the required read and write capacity and the required
indexes.
• Before the end of each period, prebuild the table for the next period. Just as the current period ends,
direct event traffic to the new table. You can assign names to these tables that specify the periods they
have recorded.
• As soon as a table is no longer being written to, reduce its provisioned write capacity to a lower value
(for example, 1 WCU) and provision whatever read capacity is appropriate. Reduce the provisioned read
capacity of earlier tables as they age. You may choose to archive or delete the tables whose contents
will rarely or never be needed.
The idea is to allocate the required resources for the current period that will experience the highest
volume of traffic and scale down provisioning for older tables that are not used actively, therefore
saving costs. Depending on your business needs, you may need to consider write sharding to distribute
traffic evenly to the logical partition key. For more information, see Using Write Sharding to Distribute
Workloads Evenly (p. 811).
The advantages of this pattern include minimal data duplication and simplified query patterns to find all
entities (nodes) related to a target entity (having an edge to a target node).
A real-world example where this pattern has been useful is an invoicing system where invoices contain
multiple bills. One bill can belong in multiple invoices. The partition key in this example is either an
InvoiceID or a BillID. BillID partitions have all attributes specific to bills. InvoiceID partitions
have an item storing invoice-specific attributes, and an item for each BillID that rolls up to the invoice.
Using the preceding schema, you can see that all bills for an invoice can be queried using the primary key
on the table. To look up all invoices that contain a part of a bill, create a global secondary index on the
table's sort key. The projections for the global secondary index look like the following:
The preceding schema shows a graph data structure that is defined by a set of data partitions containing
the items that define the edges and nodes of the graph. Edge items contain a Target and a Type
attribute. These attributes are used as part of a composite key named "TypeTarget" to identify the item
in a partition in the primary table or in a second global secondary index.
The first global secondary index is built on the Data attribute. This attribute uses global secondary
index-overloading as described earlier to index several different attribute types, namely Dates, Names,
Places, and Skills. Here, one global secondary index is effectively indexing four different attributes.
As you insert items into the table, you can use an intelligent sharding strategy to distribute item sets
with large aggregations (birthdate, skill) across as many logical partitions on the global secondary
indexes as are needed to avoid hot read/write problems.
The result of this combination of design patterns is a solid datastore for highly efficient real-time
graph workflows. These workflows can provide high-performance neighbor entity state and edge
aggregation queries for recommendation engines, social-networking applications, node rankings, subtree
aggregations, and other common graph use cases.
If your use case isn't sensitive to real-time data consistency, you can use a scheduled Amazon EMR
process to populate edges with relevant graph summary aggregations for your workflows in a cost-
effective way. If your application doesn't need to know immediately when an edge is added to the graph,
you can use a scheduled process to aggregate results.
To maintain some level of consistency, the design could include Amazon DynamoDB Streams and AWS
Lambda to process edge updates. It could also use an Amazon EMR job to validate results on a regular
interval. This approach is illustrated by the following diagram. It is commonly used in social networking
applications, where the cost of a real-time query is high and the need to immediately know individual
user updates is low.
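As a sketch of the Streams-and-Lambda half of this design, the following Python handler rolls newly inserted edges up into an aggregate on the target node. It assumes the stream is configured to include new images, and the table name (GraphTable) and attribute names (Target, EdgeCount) are assumptions for illustration, not names from this guide.

import boto3

dynamodb = boto3.client("dynamodb")

def lambda_handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        new_image = record["dynamodb"]["NewImage"]  # requires NEW_IMAGE view
        target = new_image.get("Target")
        if target is None:
            continue  # not an edge item
        # Roll the new edge up into the target node's aggregate.
        dynamodb.update_item(
            TableName="GraphTable",
            Key={"PK": target, "SK": {"S": "NODE"}},
            UpdateExpression="ADD EdgeCount :one",
            ExpressionAttributeValues={":one": {"N": "1"}},
        )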
IT service-management (ITSM) and security applications generally need to respond in real time to entity
state changes composed of complex edge aggregations. Such applications need a system that can
support real-time multiple node aggregations of second- and third-level relationships, or complex edge
traversals. If your use case requires these types of real-time graph query workflows, we recommend that
you consider using Amazon Neptune to manage these workflows.
Some organizations also maintain a variety of legacy relational systems that they have acquired or
inherited over decades. Migrating data from these systems might be too risky and expensive to justify
the effort.
However, the same organizations may now find that their operations depend on high-traffic customer-
facing websites, where millisecond response is essential. Relational systems can't scale to meet this
requirement except at huge (and often unacceptable) expense.
In these situations, the answer might be to create a hybrid system, in which DynamoDB creates a
materialized view of data stored in one or more relational systems and handles high-traffic requests
against this view. This type of system can potentially reduce costs by eliminating server hardware,
maintenance, and RDBMS licenses that were previously needed to handle customer-facing traffic.
A system that integrates DynamoDB Streams and AWS Lambda can provide several advantages:
For this kind of integration to be implemented, essentially three kinds of interoperation must be
provided:
1. Fill the DynamoDB cache incrementally. When an item is queried, look for it first in DynamoDB. If it
is not there, look for it in the SQL system, and load it into DynamoDB.
2. Write through a DynamoDB cache. When a customer changes a value in DynamoDB, a Lambda
function is triggered to write the new data back to the SQL system.
3. Update DynamoDB from the SQL system. When internal processes such as inventory management or
pricing change a value in the SQL system, a stored procedure is triggered to propagate the change to
the DynamoDB materialized view.
These operations are straightforward, and not all of them are needed for every scenario.
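For example, the first operation (filling the DynamoDB cache incrementally) might look like the following Python (boto3) sketch. The table name is an assumption, and fetch_from_sql() is a hypothetical placeholder for a query against the relational system.

import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "MaterializedView"

def fetch_from_sql(item_id):
    raise NotImplementedError  # placeholder for a query against the RDBMS

def get_item_read_through(item_id):
    response = dynamodb.get_item(TableName=TABLE, Key={"PK": {"S": item_id}})
    if "Item" in response:
        return response["Item"]
    # Cache miss: read from the system of record and fill DynamoDB.
    item = fetch_from_sql(item_id)
    dynamodb.put_item(TableName=TABLE, Item=item)
    return item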
A hybrid solution can also be useful when you want to rely primarily on DynamoDB, but you also want to
maintain a small relational system for one-time queries, or for operations that need special security or
that are not time-critical.
RDBMS platforms use an ad hoc query language (generally a flavor of SQL) to generate or materialize
views of the normalized data to support application-layer access patterns.
For example, to generate a list of purchase order items sorted by the quantity in stock at all warehouses
that can ship each item, you could issue the following query against the preceding schema:
One-time queries of this kind provide a flexible API for accessing data, but they require a significant
amount of processing. You must often query the data from multiple locations, and the results must be
assembled for presentation. The preceding query initiates complex queries across a number of tables and
then sorts and integrates the resulting data.
Another factor that can slow down RDBMS systems is the need to support an ACID-compliant transaction
framework. The hierarchical data structures used by most online transaction processing (OLTP)
applications must be broken down and distributed across multiple logical tables when they are stored
in an RDBMS. Therefore, an ACID-compliant transaction framework is necessary to avoid race conditions
that could occur if an application tries to read an object that is in the process of being written. Such a
transaction framework necessarily adds significant overhead to the write process.
These two factors are the primary barriers to scale for traditional RDBMS platforms. It remains to be seen
whether the NewSQL community can be successful in delivering a distributed RDBMS solution. But it is
unlikely that even that would resolve the two limitations described earlier. No matter how the solution is
delivered, the processing costs of normalization and ACID transactions must remain significant.
For this reason, when your business requires low-latency response to high-traffic queries, taking
advantage of a NoSQL system generally makes technical and economic sense. Amazon DynamoDB helps
solve the problems that limit relational system scalability by avoiding them.
A relational database system does not scale well for the following reasons:
• It normalizes data and stores it on multiple tables that require multiple queries to write to disk.
• It generally incurs the performance costs of an ACID-compliant transaction system.
• It uses expensive joins to reassemble required views of query results.
By contrast, DynamoDB scales well for the following reasons:
• Schema flexibility lets DynamoDB store complex hierarchical data within a single item.
• Composite key design lets it store related items close together on the same table.
Queries against the data store become much simpler, often in the following form:

SELECT * FROM Table_X WHERE Attribute_Y = "somevalue"
DynamoDB does far less work to return the requested data compared to the RDBMS in the earlier
example.
To start designing a DynamoDB table that will scale efficiently, you must first identify the access
patterns that are required by the operations and business support systems (OSS/BSS) the table needs
to support:
• For new applications, review user stories about activities and objectives. Document the various use
cases you identify, and analyze the access patterns that they require.
• For existing applications, analyze query logs to find out how people are currently using the system and
what the key access patterns are.
After completing this process, you should end up with a list that might look something like the
following:
In a real application, your list might be much longer. But this collection represents the range of query
pattern complexity that you might find in a production environment.
A common approach to DynamoDB schema design is to identify application layer entities and use
denormalization and composite key aggregation to reduce query complexity.
In DynamoDB, this means using composite sort keys, overloaded global secondary indexes, partitioned
tables/indexes, and other design patterns. You can use these elements to structure the data so that
an application can retrieve whatever it needs for a given access pattern using a single query on a table
or index. The primary pattern that you can use to model the normalized schema shown in Relational
Modeling (p. 834) is the adjacency list pattern. Other patterns used in this design can include global
secondary index write sharding, global secondary index overloading, composite keys, and materialized
aggregations.
Important
In general, you should maintain as few tables as possible in a DynamoDB application. Most
well-designed applications require only one table. Exceptions include cases where high-
volume time series data are involved, or datasets that have very different access patterns. A
single table with inverted indexes can usually enable simple queries to create and retrieve
the complex hierarchical data structures required by your application.
The design pattern requires you to define a set of entity types that usually correlate to the various tables
in the relational schema. Entity items are then added to the table using a compound (partition and sort)
primary key. The partition key of these entity items is the attribute that uniquely identifies the item and
is referred to generically on all items as PK. The sort key attribute contains an attribute value that you
can use for an inverted index or global secondary index. It is generically referred to as SK.
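A minimal Python (boto3) sketch of this layout follows: an entity item and an edge item share the generic PK/SK key schema described above. The table name and item values are illustrative assumptions.

import boto3

dynamodb = boto3.client("dynamodb")

# Entity item: the partition that represents one order.
dynamodb.put_item(
    TableName="OrderEntry",
    Item={"PK": {"S": "ORDER-1"}, "SK": {"S": "ORDER-1"}, "Data": {"S": "2019-06-21"}},
)

# Edge item: added to the same partition to link the order to a product.
dynamodb.put_item(
    TableName="OrderEntry",
    Item={"PK": {"S": "ORDER-1"}, "SK": {"S": "PRODUCT-42"}, "Data": {"S": "qty 3"}},
)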
You define the following entities, which support the relational order entry schema:
After adding these entity items to the table, you can define the relationships between them by adding
edge items to the entity item partitions. The following table demonstrates this step:
In this example, the Employee, Order, and Product Entity partitions on the table have additional edge
items that contain pointers to other entity items on the table. Next, define a few global secondary
indexes (GSIs) to support all the access patterns defined previously. The entity items don't all use the
same type of value for the primary key or the sort key attribute. All that is required is to have the
primary key and sort key attributes present to be inserted on the table.
The fact that some of these entities use proper names and others use other entity IDs as sort key values
allows the same global secondary index to support multiple types of queries. This technique is called
GSI overloading. It effectively eliminates the default limit of 20 global secondary indexes for tables that
contain multiple item types. This is shown in the following diagram as GSI 1:
GSI 2 is designed to support a fairly common application access pattern, which is to get all the items on
the table that have a certain state. For a large table with an uneven distribution of items across available
states, this access pattern can result in a hot key, unless the items are distributed across more than one
logical partition that can be queried in parallel. This design pattern is called write sharding.
To accomplish this for GSI 2, the application adds the GSI 2 primary key attribute to every Order item. It
populates that with a random number in a range of 0–N, where N can generically be calculated using the
following formula, unless there is a specific reason to do otherwise:
PartitionMaxReadRate = 3K * ItemsPerRCU
N = MaxRequiredIO / PartitionMaxReadRate
For that table, the N factor calculation would look like the following:
PartitionMaxReadRate = 3K * 16 = 48K
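The second step divides the maximum required I/O by this figure. Assuming, for example, that the access pattern must support reading roughly 624K items per second:

N = 624K / 48K = 13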
In this case, you need to distribute all the orders across at least 13 logical partitions on GSI 2 to ensure
that a read of all Order items with an OPEN status doesn't cause a hot partition on the physical storage
layer. It is a good practice to pad this number to allow for anomalies in the dataset. So a model using N
= 15 is probably fine. As mentioned earlier, you do this by adding the random 0–N value to the GSI 2 PK
attribute of each Order and OrderItem record that is inserted on the table.
This breakdown assumes that the access pattern that requires gathering all OPEN invoices occurs
relatively infrequently, so that you can use burst capacity to fulfill the request. You can query the
following global secondary index using a State and Date Range sort key condition to produce a subset
of, or all, Orders in a given state as needed.
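As a sketch of this shard-and-scatter pattern in Python (boto3): writes receive a random 0–N shard key, and reads query every shard of the index. The index and attribute names (GSI2, GSI2PK, GSI2SK), the table name, and the sort key layout are assumptions for illustration.

import random
import boto3

dynamodb = boto3.client("dynamodb")
SHARDS = 15  # the padded N chosen above

def put_order(item):
    # Assign each order a random logical partition on GSI 2.
    item["GSI2PK"] = {"S": str(random.randint(0, SHARDS - 1))}
    dynamodb.put_item(TableName="OrderEntry", Item=item)

def orders_in_state(state_prefix):  # for example, "OPEN#"
    items = []
    for shard in range(SHARDS):
        response = dynamodb.query(
            TableName="OrderEntry",
            IndexName="GSI2",
            KeyConditionExpression="GSI2PK = :shard AND begins_with(GSI2SK, :state)",
            ExpressionAttributeValues={
                ":shard": {"S": str(shard)},
                ":state": {"S": state_prefix},
            },
        )
        items.extend(response["Items"])
    return items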
In this example, the items are randomly distributed across the 15 logical partitions. This structure works
because the access pattern requires a large number of items to be retrieved. Therefore, it's unlikely that
any of the 15 threads will return empty result sets that could potentially represent wasted capacity. A
query always uses 1 read capacity unit (RCU) or 1 write capacity unit (WCU), even if nothing is returned
or no data is written.
If the access pattern requires a high velocity query on this global secondary index that returns a sparse
result set, it's probably better to use a hash algorithm to distribute the items rather than a random
pattern. In this case, you might select an attribute that is known when the query is executed at run time
and hash that attribute into a 0–14 key space when the items are inserted. Then they can be efficiently
read from the global secondary index.
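A minimal sketch of the hash-based variant: a query-time attribute is hashed into the 0–14 key space, so the same function selects the shard at write time and at read time. The attribute value is an assumption.

import hashlib

def shard_for(attribute_value, shards=15):
    digest = hashlib.md5(attribute_value.encode("utf-8")).hexdigest()
    return int(digest, 16) % shards

# Use the same function when inserting an item and when querying for it.
print(shard_for("Customer-4711"))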
Finally, you can revisit the access patterns that were defined earlier. Following is the list of access
patterns and the query conditions that you will use with the new DynamoDB version of the application
to accommodate them:
If possible, you should avoid using a Scan operation on a large table or index with a filter that removes
many results. Also, as a table or index grows, the Scan operation slows. The Scan operation examines
every item for the requested values and can use up the provisioned throughput for a large table or
index in a single operation. For faster response times, design your tables and indexes so that your
applications can use Query instead of Scan. (For tables, you can also consider using the GetItem and
BatchGetItem APIs.)
Alternatively, design your application to use Scan operations in a way that minimizes the impact on your
request rate.
This represents a sudden spike in usage, compared to the configured read capacity for the table. This
usage of capacity units by a scan prevents other potentially more important requests for the same table
from using the available capacity units. As a result, you likely get a ProvisionedThroughputExceeded
exception for those requests.
The problem is not just the sudden increase in capacity units that the Scan uses. The scan is also likely
to consume all of its capacity units from the same partition because the scan requests read items that
are next to each other on the partition. This means that the request is hitting the same partition, causing
all of its capacity units to be consumed, and throttling other requests to that partition. If the request to
read data is spread across multiple partitions, the operation would not throttle a specific partition.
The following diagram illustrates a sudden spike of capacity unit usage by Query and Scan
operations, and its impact on your other requests against the same table.
As illustrated here, the usage spike can impact the table's provisioned throughput in several ways:
Instead of using a large Scan operation, you can use the following techniques to minimize the impact of
a scan on a table's provisioned throughput.
• Reduce page size
Because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the
scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you
can use to set the page size for your request. Each Query or Scan request that has a smaller page size
uses fewer read operations and creates a "pause" between each request. For example, suppose that
each item is 4 KB and you set the page size to 40 items. A Query request would then consume only
20 eventually consistent read operations or 40 strongly consistent read operations. A larger number
of smaller Query or Scan operations would allow your other critical requests to succeed without
throttling.
• Isolate scan operations
DynamoDB is designed for easy scalability. As a result, an application can create tables for distinct
purposes, possibly even duplicating content across several tables. You want to perform scans on a
table that is not taking "mission-critical" traffic. Some applications handle this load by rotating traffic
hourly between two tables—one for critical traffic, and one for bookkeeping. Other applications can
do this by performing every write on two tables: a "mission-critical" table, and a "shadow" table.
Configure your application to retry any request that receives a response code that indicates you have
exceeded your provisioned throughput. Or, increase the provisioned throughput for your table using the
UpdateTable operation. If you have temporary spikes in your workload that cause your throughput to
exceed, occasionally, beyond the provisioned level, retry the request with exponential backoff. For more
information about implementing exponential backoff, see Error Retries and Exponential Backoff (p. 224).
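The following Python (boto3) sketch combines the two techniques just described: a small page size (the Limit parameter) with exponential backoff retries. The table name is an assumption; the exception code is the one DynamoDB returns when provisioned throughput is exceeded.

import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def gentle_scan(table_name, page_size=40):
    kwargs = {"TableName": table_name, "Limit": page_size}
    backoff = 0.05
    while True:
        try:
            response = dynamodb.scan(**kwargs)
        except ClientError as error:
            if error.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            time.sleep(backoff)  # retry the same page after a pause
            backoff *= 2
            continue
        backoff = 0.05
        for item in response["Items"]:
            yield item
        last_key = response.get("LastEvaluatedKey")
        if last_key is None:
            return
        kwargs["ExclusiveStartKey"] = last_key
        time.sleep(0.1)  # pause between pages to leave capacity for others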
Although parallel scans can be beneficial, they can place a heavy demand on provisioned throughput.
With a parallel scan, your application has multiple workers that are all running Scan operations
concurrently. This can quickly consume all of your table's provisioned read capacity. In that case, other
applications that need to access the table might be throttled.
A parallel scan can be the right choice if the following conditions are met:
Choosing TotalSegments
The best setting for TotalSegments depends on your specific data, the table's provisioned throughput
settings, and your performance requirements. You might need to experiment to get it right. We
recommend that you begin with a simple ratio, such as one segment per 2 GB of data. For example, for
a 30 GB table, you could set TotalSegments to 15 (30 GB / 2 GB). Your application would then use 15
workers, with each worker scanning a different segment.
You can also choose a value for TotalSegments that is based on client resources. You can set
TotalSegments to any number from 1 to 1000000, and DynamoDB lets you scan that number of
segments. For example, if your client limits the number of threads that can run concurrently, you can
gradually increase TotalSegments until you get the best Scan performance with your application.
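A minimal Python (boto3) sketch of a parallel scan follows, with one worker per segment and the one-segment-per-2-GB starting ratio suggested above. The table name is an assumption.

from concurrent.futures import ThreadPoolExecutor
import boto3

TOTAL_SEGMENTS = 15  # for example, a 30 GB table at 2 GB per segment

def scan_segment(segment):
    dynamodb = boto3.client("dynamodb")  # one client per worker thread
    items = []
    kwargs = {
        "TableName": "Features",
        "Segment": segment,
        "TotalSegments": TOTAL_SEGMENTS,
    }
    while True:
        response = dynamodb.scan(**kwargs)
        items.extend(response["Items"])
        if "LastEvaluatedKey" not in response:
            return items
        kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as executor:
    all_items = [item
                 for segment_items in executor.map(scan_segment, range(TOTAL_SEGMENTS))
                 for item in segment_items]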
Monitor your parallel scans to optimize your provisioned throughput use, while also making sure that
your other applications aren't starved of resources. Increase the value for TotalSegments if you don't
consume all of your provisioned throughput but still experience throttling in your Scan requests. Reduce
the value for TotalSegments if the Scan requests consume more provisioned throughput than you
want to use.
Topics
• Configure AWS Credentials in Your Files Using Amazon Cognito (p. 843)
• Loading Data From DynamoDB Into Amazon Redshift (p. 844)
• Processing DynamoDB Data With Apache Hive on Amazon EMR (p. 845)
For example, to configure your JavaScript files to use an Amazon Cognito unauthenticated role to access
the DynamoDB web service:
1. Create an Amazon Cognito identity pool named DynamoPool that allows access to unauthenticated
identities, and make a note of the IdentityPoolId that is returned.
2. Copy the following policy into a file named myCognitoPolicy.json. Modify the identity pool ID
(us-west-2:12345678-1ab2-123a-1234-a12345ab12) with your own IdentityPoolId obtained in
the previous step:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "cognito-identity.amazonaws.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "cognito-identity.amazonaws.com:aud": "us-west-2:12345678-1ab2-123a-1234-a12345ab12"
                },
                "ForAnyValue:StringLike": {
                    "cognito-identity.amazonaws.com:amr": "unauthenticated"
                }
            }
        }
    ]
}
3. Create an IAM role that assumes the previous policy. In this way, Amazon Cognito becomes a trusted
entity that can assume the Cognito_DynamoPoolUnauth role.
4. Grant the Cognito_DynamoPoolUnauth role full access to the DynamoDB service by attaching a
managed policy (AmazonDynamoDBFullAccess).
Note
Alternatively, you can grant fine-grained access to DynamoDB. For more information, see
Using IAM Policy Conditions for Fine-Grained Access Control.
5. Obtain and copy the IAM role ARN:
6. Add the Cognito_DynamoPoolUnauth role to the DynamoPool identity pool. The format to
specify is KeyName=string, where KeyName is unauthenticated and the string is the role ARN
obtained in the previous step.
7. Specify the Amazon Cognito credentials in your files. Modify the IdentityPoolId and RoleArn
accordingly.
You can now run your JavaScript programs against the DynamoDB web service using Amazon Cognito
credentials. For more information, see Setting Credentials in a Web Browser in the AWS SDK for
JavaScript Getting Started Guide.
Loading Data From DynamoDB Into Amazon Redshift
After the data is loaded into an Amazon Redshift cluster, you can perform complex data analysis queries
on that data, including joins with other tables in your Amazon Redshift cluster.
In terms of provisioned throughput, a copy operation from a DynamoDB table counts against that table's
read capacity. After the data is copied, your SQL queries in Amazon Redshift do not affect DynamoDB
in any way. This is because your queries act upon a copy of the data from DynamoDB, rather than upon
DynamoDB itself.
Before you can load data from a DynamoDB table, you must first create an Amazon Redshift table to
serve as the destination for the data. Keep in mind that you are copying data from a NoSQL environment
into a SQL environment, and that there are certain rules in one environment that do not apply in the
other. Here are some of the differences to consider:
• DynamoDB table names can contain up to 255 characters, including '.' (dot) and '-' (dash) characters,
and are case-sensitive. Amazon Redshift table names are limited to 127 characters, cannot contain
dots or dashes and are not case-sensitive. In addition, table names cannot conflict with any Amazon
Redshift reserved words.
• DynamoDB does not support the SQL concept of NULL. You need to specify how Amazon Redshift
interprets empty or blank attribute values in DynamoDB, treating them either as NULLs or as empty
fields.
• DynamoDB data types do not correspond directly with those of Amazon Redshift. You need to ensure
that each column in the Amazon Redshift table is of the correct data type and size to accommodate
the data from DynamoDB.
copy favoritemovies from 'dynamodb://my-favorite-movies-table'
credentials 'aws_access_key_id=<Your-Access-Key-ID>;aws_secret_access_key=<Your-Secret-Access-Key>'
readratio 50;
In this example, the source table in DynamoDB is my-favorite-movies-table. The target table
in Amazon Redshift is favoritemovies. The readratio 50 clause regulates the percentage of
provisioned throughput that is consumed; in this case, the COPY command will use no more than 50
percent of the read capacity units provisioned for my-favorite-movies-table. We highly recommend
setting this ratio to a value less than the average unused provisioned throughput.
For detailed instructions on loading data from DynamoDB into Amazon Redshift, refer to the following
sections in the Amazon Redshift Database Developer Guide:
Processing DynamoDB Data With Apache Hive on Amazon EMR
Using Apache Hive on Amazon EMR, you can do the following:
• Copy data from a DynamoDB table into Hadoop Distributed File System (HDFS), and vice-versa.
• Perform join operations on DynamoDB tables.
Topics
• Overview (p. 846)
• Tutorial: Working with Amazon DynamoDB and Apache Hive (p. 846)
• Creating an External Table in Hive (p. 853)
• Processing HiveQL Statements (p. 855)
• Querying Data in DynamoDB (p. 856)
• Copying Data to and from Amazon DynamoDB (p. 858)
• Performance Tuning (p. 868)
Overview
Amazon EMR is a service that makes it easy to quickly and cost-effectively process vast amounts of data.
To use Amazon EMR, you launch a managed cluster of Amazon EC2 instances running the Hadoop open
source framework. Hadoop is a distributed application that implements the MapReduce algorithm, where
a task is mapped to multiple nodes in the cluster. Each node processes its designated work, in parallel
with the other nodes. Finally, the outputs are reduced on a single node, yielding the final result.
You can choose to launch your Amazon EMR cluster so that it is persistent or transient:
• A persistent cluster runs until you shut it down. Persistent clusters are ideal for data analysis, data
warehousing, or any other interactive use.
• A transient cluster runs long enough to process a job flow, and then shuts down automatically.
Transient clusters are ideal for periodic processing tasks, such as running scripts.
For information about Amazon EMR architecture and administration, see the Amazon EMR Management
Guide.
When you launch an Amazon EMR cluster, you specify the initial number and type of Amazon EC2
instances. You also specify other distributed applications (in addition to Hadoop itself) that you want to
run on the cluster. These applications include Hue, Mahout, Pig, Spark, and more.
For information about applications for Amazon EMR, see the Amazon EMR Release Guide.
Depending on the cluster configuration, you might have one or more of the following node types:
• Master node — Manages the cluster, coordinating the distribution of the MapReduce executable and
subsets of the raw data, to the core and task instance groups. It also tracks the status of each task
performed and monitors the health of the instance groups. There is only one master node in a cluster.
• Core nodes — Run MapReduce tasks and store data using the Hadoop Distributed File System
(HDFS).
• Task nodes (optional) — Run MapReduce tasks.
Hive is a data warehouse application for Hadoop that allows you to process and analyze data from
multiple sources. Hive provides a SQL-like language, HiveQL, that lets you work with data stored locally
in the Amazon EMR cluster or in an external data source (such as Amazon DynamoDB).
Topics
• Before You Begin (p. 847)
• Step 1: Create an Amazon EC2 Key Pair (p. 847)
• Step 2: Launch an Amazon EMR Cluster (p. 848)
• Step 3: Connect to the Master Node (p. 848)
• Step 4: Load Data into HDFS (p. 849)
• Step 5: Copy Data to DynamoDB (p. 851)
• Step 6: Query the Data in the DynamoDB Table (p. 852)
• Step 7: (Optional) Clean Up (p. 852)
Before You Begin
For this tutorial, you need the following:
• An AWS account. If you do not have one, see Signing Up for AWS (p. 48).
• An SSH client (Secure Shell). You use the SSH client to connect to the master node of the Amazon
EMR cluster and run interactive commands. SSH clients are available by default on most Linux, Unix,
and Mac OS X installations. Windows users can download and install the PuTTY client, which has SSH
support.
Next Step
Step 1: Create an Amazon EC2 Key Pair (p. 847)
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. Choose a region (for example, US West (Oregon)). This should be the same region in which your
DynamoDB table is located.
3. In the navigation pane, choose Key Pairs.
4. Choose Create Key Pair.
5. In Key pair name, type a name for your key pair (for example, mykeypair), and then choose Create.
6. Download the private key file. The file name will end with .pem (such as mykeypair.pem). Keep
this private key file in a safe place. You will need it to access any Amazon EMR cluster that you
launch with this key pair.
Important
If you lose the key pair, you cannot connect to the master node of your Amazon EMR
cluster.
For more information about key pairs, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for
Linux Instances.
Next Step
Step 2: Launch an Amazon EMR Cluster (p. 848)
a. In Cluster name, type a name for your cluster (for example: My EMR cluster).
b. In EC2 key pair, choose the key pair you created earlier.
It will take several minutes to launch your cluster. You can use the Cluster Details page in the Amazon
EMR console to monitor its progress.
When the status changes to Waiting, the cluster is ready for use.
If one does not already exist, the AWS Management Console creates an Amazon S3 bucket. The bucket
name is aws-logs-account-id-region, where account-id is your AWS account number and
region is the region in which you launched the cluster (for example, aws-logs-123456789012-us-
west-2).
Note
You can use the Amazon S3 console to view the log files. For more information, see View Log
Files in the Amazon EMR Management Guide.
You can use this bucket for purposes in addition to logging. For example, you can use the bucket as a
location for storing a Hive script or as a destination when exporting data from Amazon DynamoDB to
Amazon S3.
Next Step
Step 3: Connect to the Master Node (p. 848)
1. In the Amazon EMR console, choose your cluster's name to view its status.
2. On the Cluster Details page, find the Master public DNS field. This is the public DNS name for the
master node of your Amazon EMR cluster.
Depending on your operating system, choose the Windows tab or the Mac/Linux tab, and follow
the instructions for connecting to the master node.
After you connect to the master node using either SSH or PuTTY, you should see a command prompt
similar to the following:
[hadoop@ip-192-0-2-0 ~]$
Next Step
Step 4: Load Data into HDFS (p. 849)
wget https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/samples/features.zip
unzip features.zip
head features.txt
1535908|Big Run|Stream|WV|38.6370428|-80.8595469|794
875609|Constable Hook|Cape|NJ|40.657881|-74.0990309|7
1217998|Gooseberry Island|Island|RI|41.4534361|-71.3253284|10
26603|Boone Moore Spring|Spring|AZ|34.0895692|-111.410065|3681
1506738|Missouri Flat|Flat|WA|46.7634987|-117.0346113|2605
1181348|Minnow Run|Stream|PA|40.0820178|-79.3800349|1558
1288759|Hunting Creek|Stream|TN|36.343969|-83.8029682|1024
533060|Big Charles Bayou|Bay|LA|29.6046517|-91.9828654|0
829689|Greenwood Creek|Stream|NE|41.596086|-103.0499296|3671
541692|Button Willow Island|Island|LA|31.9579389|-93.0648847|98
The features.txt file contains a subset of data from the United States Board on Geographic
Names (http://geonames.usgs.gov/domestic/download_data.htm). The fields in each line represent
the following:
• Feature ID
• Name
• Class (stream, cape, island, and so on)
• State
• Latitude
• Longitude
• Elevation (in feet)
hive
5. Create a native Hive table named hive_features to hold the data. Because the data file is pipe-delimited,
the table uses '|' as its field terminator:

CREATE TABLE hive_features
(feature_id BIGINT,
feature_name STRING,
feature_class STRING,
state_alpha STRING,
prim_lat_dec DOUBLE,
prim_long_dec DOUBLE,
elev_in_ft BIGINT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n';
6. Enter the following HiveQL statement to load the table with data:
LOAD DATA
LOCAL
INPATH './features.txt'
OVERWRITE
INTO TABLE hive_features;
7. You now have a native Hive table populated with data from the features.txt file. To verify, enter
the following HiveQL statement:
SELECT state_alpha, COUNT(*)
FROM hive_features
GROUP BY state_alpha;
The output should show a list of states and the number of geographic features in each.
Next Step
Step 5: Copy Data to DynamoDB (p. 851)
Clear Use Default Settings. For Provisioned Capacity, type the following:
Choose Create.
4. At the Hive prompt, enter the following HiveQL statement:
"dynamodb.column.mapping"="feature_id:Id,feature_name:Name,feature_class:Class,state_alpha:State,p
);
You have now established a mapping between Hive and the Features table in DynamoDB.
5. Enter the following HiveQL statement to import data to DynamoDB:
INSERT OVERWRITE TABLE ddb_features
SELECT
feature_id,
feature_name,
feature_class,
state_alpha,
prim_lat_dec,
prim_long_dec,
elev_in_ft
FROM hive_features;
Hive will submit a MapReduce job, which will be processed by your Amazon EMR cluster. It will take
several minutes to complete the job.
6. Verify that the data has been loaded into DynamoDB:
Next Step
Step 6: Query the Data in the DynamoDB Table (p. 852)
3. States with at least three features higher than a mile (5,280 feet):
Next Step
Step 7: (Optional) Clean Up (p. 852)
If you don't need the cluster anymore, you should terminate it and remove any associated resources. This
will help you avoid being charged for resources you don't need.
You can think of an external table as a pointer to a data source that is managed and stored elsewhere.
In this case, the underlying data source is a DynamoDB table. (The table must already exist. You cannot
create, update, or delete a DynamoDB table from within Hive.) You use the CREATE EXTERNAL TABLE
statement to create the external table. After that, you can use HiveQL to work with data in DynamoDB,
as if that data were stored locally within Hive.
Note
You can use INSERT statements to insert data into an external table and SELECT statements to
select data from it. However, you cannot use UPDATE or DELETE statements to manipulate data
in the table.
If you no longer need the external table, you can remove it using the DROP TABLE statement. In this
case, DROP TABLE only removes the external table in Hive. It does not affect the underlying DynamoDB
table or any of its data.
Topics
• CREATE EXTERNAL TABLE Syntax (p. 853)
• Data Type Mappings (p. 854)
CREATE EXTERNAL TABLE hive_table
(hive_column1_name hive_column1_datatype, hive_column2_name hive_column2_datatype...)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
"dynamodb.table.name" = "dynamodb_table",
"dynamodb.column.mapping" = "hive_column1_name:dynamodb_attribute1_name,hive_column2_name:dynamodb_attribute2_name..."
);
Line 1 is the start of the CREATE EXTERNAL TABLE statement, where you provide the name of the Hive
table (hive_table) you want to create.
Line 2 specifies the columns and data types for hive_table. You need to define columns and data types
that correspond to the attributes in the DynamoDB table.
Line 3 is the STORED BY clause, where you specify a class that handles data management
between the Hive and the DynamoDB table. For DynamoDB, STORED BY should be set to
'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'.
Line 4 is the start of the TBLPROPERTIES clause, where you define the following parameters for
DynamoDBStorageHandler:
• The Hive table name does not have to be the same as the DynamoDB table name.
• The Hive table column names do not have to be the same as those in the DynamoDB table.
• The table specified by dynamodb.table.name must exist in DynamoDB.
• For dynamodb.column.mapping:
• You must map the key schema attributes for the DynamoDB table. This includes the partition key
and the sort key (if present).
• You do not have to map the non-key attributes of the DynamoDB table. However, you will not see
any data from those attributes when you query the Hive table.
• If the data types of a Hive table column and a DynamoDB attribute are incompatible, you will see
NULL in these columns when you query the Hive table.
Note
The CREATE EXTERNAL TABLE statement does not perform any validation on the
TBLPROPERTIES clause. The values you provide for dynamodb.table.name and
dynamodb.column.mapping are only evaluated by the DynamoDBStorageHandler class
when you attempt to access the table.
DynamoDB Data Type    Hive Data Type
String                STRING
Binary                BINARY
Number                BIGINT or DOUBLE
Note
The following DynamoDB data types are not supported by the DynamoDBStorageHandler
class, so they cannot be used with dynamodb.column.mapping:
• Map
• List
• Boolean
• Null
If you want to map a DynamoDB attribute of type Number, you must choose an appropriate Hive type:
• The Hive BIGINT type is for 8-byte signed integers. It is the same as the long data type in Java.
• The Hive DOUBLE type is for 8-byte double precision floating point numbers. It is the same as the
double type in Java.
If you have numeric data stored in DynamoDB that has a higher precision than the Hive data type you
choose, then accessing the DynamoDB data could cause a loss of precision.
If you export data of type Binary from DynamoDB to Amazon S3 or HDFS, the data is stored as a
Base64-encoded string. If you import data from Amazon S3 or HDFS into the DynamoDB Binary type,
you must ensure the data is encoded as a Base64 string.
For example, consider the ddb_features table (from Tutorial: Working with Amazon DynamoDB and
Apache Hive (p. 846)). The following Hive query prints state abbreviations and the number of summits
in each:
SELECT state_alpha, COUNT(*)
FROM ddb_features
WHERE feature_class = 'Summit'
GROUP BY state_alpha;
Hive does not return the results immediately. Instead, it submits a MapReduce job, which is processed
by the Hadoop framework. Hive will wait until the job is complete before it shows the results from the
query:
AK 2
AL 2
AR 2
AZ 3
CA 7
CO 2
CT 2
ID 1
KS 1
ME 2
MI 1
MT 3
NC 1
NE 1
NM 1
NY 2
OR 5
PA 1
TN 1
TX 1
UT 4
VA 1
VT 2
WA 2
WY 3
Time taken: 8.753 seconds, Fetched: 25 row(s)
If you need to cancel the job before it is complete, you can type Ctrl+C at any time.
These examples refer to the ddb_features table in the tutorial (Step 5: Copy Data to DynamoDB (p. 851)).
Topics
• Using Aggregate Functions (p. 856)
• Using the GROUP BY and HAVING Clauses (p. 856)
• Joining Two DynamoDB tables (p. 857)
• Joining Tables from Different Sources (p. 857)
Using Aggregate Functions
The following example returns the highest elevation among the features in the state of Colorado:
SELECT MAX(elev_in_ft)
FROM ddb_features
WHERE state_alpha = 'CO';
The following example returns a list of the highest elevations from states that have more than five
features in the ddb_features table.
Consider a DynamoDB table named EastCoastStates that contains the following data:
StateName StateAbbrev
Maine ME
New Hampshire NH
Massachusetts MA
Rhode Island RI
Connecticut CT
New York NY
New Jersey NJ
Delaware DE
Maryland MD
Virginia VA
North Carolina NC
South Carolina SC
Georgia GA
Florida FL
Let's assume the table is available as a Hive external table named east_coast_states:
The following join returns the states on the East Coast of the United States that have at least three
features:
Hive is an excellent solution for copying data among DynamoDB tables, Amazon S3 buckets, native Hive
tables, and Hadoop Distributed File System (HDFS). This section provides examples of these operations.
Topics
• Copying Data Between DynamoDB and a Native Hive Table (p. 858)
• Copying Data Between DynamoDB and Amazon S3 (p. 859)
• Copying Data Between DynamoDB and HDFS (p. 863)
• Using Data Compression (p. 867)
• Reading Non-Printable UTF-8 Character Data (p. 868)
You might decide to do this if you need to perform many HiveQL queries, but do not want to consume
provisioned throughput capacity from DynamoDB. Because the data in the native Hive table is a copy of
the data from DynamoDB, and not "live" data, you should not expect your queries to return up-to-date
results.
The examples in this section are written with the assumption you followed the steps in Tutorial: Working
with Amazon DynamoDB and Apache Hive (p. 846) and have an external table that is mastered in
DynamoDB (ddb_features).
You can create a native Hive table and populate it with data from ddb_features, like this:
In these examples, the subquery SELECT * FROM ddb_features will retrieve all of the data from
ddb_features. If you only want to copy a subset of the data, you can use a WHERE clause in the subquery.
The following example creates a native Hive table, containing only some of the attributes for lakes and
summits:
Use the following HiveQL statement to copy the data from the native Hive table to ddb_features:
You might do this if you want to create an archive of data in your DynamoDB table. For example,
suppose you have a test environment where you need to work with a baseline set of test data in
DynamoDB. You can copy the baseline data to an Amazon S3 bucket, and then run your tests. Afterward,
you can reset the test environment by restoring the baseline data from the Amazon S3 bucket to
DynamoDB.
If you worked through Tutorial: Working with Amazon DynamoDB and Apache Hive (p. 846), then you
already have an Amazon S3 bucket that contains your Amazon EMR logs. You can use this bucket for the
examples in this section, if you know the root path for the bucket:
s3://aws-logs-accountID-region
where accountID is your AWS account ID and region is the AWS region for the bucket.
Note
For these examples, we will use a subpath within the bucket, as in this example:
s3://aws-logs-123456789012-us-west-2/hive-test
The following procedures are written with the assumption you followed the steps in the tutorial and
have an external table that is mastered in DynamoDB (ddb_features).
Topics
• Copying Data Using the Hive Default Format (p. 860)
• Copying Data with a User-Specified Format (p. 860)
• Copying Data Without a Column Mapping (p. 861)
• Viewing the Data in Amazon S3 (p. 862)
Each field is separated by an SOH character (start of heading, 0x01). In the file, SOH appears as ^A.
1. Create a Hive external table that maps to Amazon S3. When you do this, ensure that the data types
are consistent with those of the DynamoDB external table.
With a single HiveQL statement, you can populate the DynamoDB table using the data from Amazon S3:
Name^C{"s":"Soldiers Farewell
Hill"}^BState^C{"s":"NM"}^BClass^C{"s":"Summit"}^BElevation^C{"n":"6135"}^BLatitude^C{"n":"32.3564729"
Name^C{"s":"Jones
Run"}^BState^C{"s":"PA"}^BClass^C{"s":"Stream"}^BElevation^C{"n":"1260"}^BLatitude^C{"n":"41.2120086"}
Name^C{"s":"Sentinel
Dome"}^BState^C{"s":"CA"}^BClass^C{"s":"Summit"}^BElevation^C{"n":"8133"}^BLatitude^C{"n":"37.7229821"
Name^C{"s":"Neversweet
Gulch"}^BState^C{"s":"CA"}^BClass^C{"s":"Valley"}^BElevation^C{"n":"2900"}^BLatitude^C{"n":"41.6565269
Name^C{"s":"Chacaloochee
Bay"}^BState^C{"s":"AL"}^BClass^C{"s":"Bay"}^BElevation^C{"n":"0"}^BLatitude^C{"n":"30.6979676"}^BId^C
Each field begins with an STX character (start of text, 0x02) and ends with an ETX character (end of text,
0x03). In the file, STX appears as ^B and ETX appears as ^C.
With a single HiveQL statement, you can populate the DynamoDB table using the data from Amazon S3:
The following steps are written with the assumption you have copied data from DynamoDB to Amazon
S3 using one of the procedures in this section.
1. If you are currently at the Hive command prompt, exit to the Linux command prompt.
hive> exit;
2. List the contents of the hive-test directory in your Amazon S3 bucket. (This is where Hive copied the
data from DynamoDB.)
aws s3 ls s3://aws-logs-123456789012-us-west-2/hive-test/
aws s3 cp s3://aws-logs-123456789012-us-west-2/hive-test/000000_0 .
download: s3://aws-logs-123456789012-us-west-2/hive-test/000000_0
to ./000000_0
Note
The local file system on the master node has limited capacity. Do not use this command
with files that are larger than the available space in the local file system.
You might do this if you are running a MapReduce job that requires data from DynamoDB. If you copy
the data from DynamoDB into HDFS, Hadoop can process it, using all of the available nodes in the
Amazon EMR cluster in parallel. When the MapReduce job is complete, you can then write the results
from HDFS to DynamoDB.
In the following examples, Hive will read from and write to the following HDFS directory: /user/
hadoop/hive-test
The examples in this section are written with the assumption you followed the steps in Tutorial: Working
with Amazon DynamoDB and Apache Hive (p. 846) and you have an external table that is mastered in
DynamoDB (ddb_features).
Topics
• Copying Data Using the Hive Default Format (p. 863)
• Copying Data with a User-Specified Format (p. 864)
• Copying Data Without a Column Mapping (p. 865)
• Accessing the Data in HDFS (p. 866)
Each field is separated by an SOH character (start of heading, 0x01). In the file, SOH appears as ^A.
1. Create a Hive external table that maps to HDFS. When you do this, ensure that the data types are
consistent with those of the DynamoDB external table.
CREATE EXTERNAL TABLE hdfs_features_csv
(feature_id BIGINT,
feature_name STRING,
feature_class STRING,
state_alpha STRING,
prim_lat_dec DOUBLE,
prim_long_dec DOUBLE,
elev_in_ft BIGINT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 'hdfs:///user/hadoop/hive-test';
With a single HiveQL statement, you can populate the DynamoDB table using the data from HDFS:
Name^C{"s":"Soldiers Farewell
Hill"}^BState^C{"s":"NM"}^BClass^C{"s":"Summit"}^BElevation^C{"n":"6135"}^BLatitude^C{"n":"32.3564729"
Name^C{"s":"Jones
Run"}^BState^C{"s":"PA"}^BClass^C{"s":"Stream"}^BElevation^C{"n":"1260"}^BLatitude^C{"n":"41.2120086"}
Name^C{"s":"Sentinel
Dome"}^BState^C{"s":"CA"}^BClass^C{"s":"Summit"}^BElevation^C{"n":"8133"}^BLatitude^C{"n":"37.7229821"
Name^C{"s":"Neversweet
Gulch"}^BState^C{"s":"CA"}^BClass^C{"s":"Valley"}^BElevation^C{"n":"2900"}^BLatitude^C{"n":"41.6565269
Name^C{"s":"Chacaloochee
Bay"}^BState^C{"s":"AL"}^BClass^C{"s":"Bay"}^BElevation^C{"n":"0"}^BLatitude^C{"n":"30.6979676"}^BId^C
Each field begins with an STX character (start of text, 0x02) and ends with an ETX character (end of text,
0x03). In the file, STX appears as ^B and ETX appears as ^C.
With a single HiveQL statement, you can populate the DynamoDB table using the data from HDFS:
HDFS is not the same thing as the local file system on the master node. You cannot work with files and
directories in HDFS using standard Linux commands (such as cat, cp, mv, or rm). Instead, you perform
these tasks using the hadoop fs command.
The following steps are written with the assumption you have copied data from DynamoDB to HDFS
using one of the procedures in this section.
1. If you are currently at the Hive command prompt, exit to the Linux command prompt.
hive> exit;
2. List the contents of the /user/hadoop/hive-test directory in HDFS. (This is where Hive copied the
data from DynamoDB.)
hadoop fs -ls /user/hadoop/hive-test
Found 1 items
-rw-r--r-- 1 hadoop hadoop 29504 2016-06-08 23:40 /user/hadoop/hive-test/000000_0
3. View the contents of the data file:

hadoop fs -cat /user/hadoop/hive-test/000000_0
Note
In this example, the file is relatively small (approximately 29 KB). Be careful when you use
this command with files that are very large or contain non-printable characters.
4. (Optional) You can copy the data file from HDFS to the local file system on the master node. After
you do this, you can use standard Linux command line utilities to work with the data in the file.
The following example compresses data using the Lempel-Ziv-Oberhumer (LZO) algorithm.
SET hive.exec.compress.output=true;
SET io.seqfile.compression.type=BLOCK;
SET mapred.output.compression.codec = com.hadoop.compression.lzo.LzopCodec;
The resulting file in Amazon S3 will have a system-generated name with .lzo at the end (for example,
8d436957-57ba-4af7-840c-96c2fc7bb6f5-000000.lzo).
The following compression codecs can be used with mapred.output.compression.codec:
• org.apache.hadoop.io.compress.GzipCodec
• org.apache.hadoop.io.compress.DefaultCodec
• com.hadoop.compression.lzo.LzoCodec
• com.hadoop.compression.lzo.LzopCodec
• org.apache.hadoop.io.compress.BZip2Codec
• org.apache.hadoop.io.compress.SnappyCodec
Performance Tuning
When you create a Hive external table that maps to a DynamoDB table, you do not consume any read or
write capacity from DynamoDB. However, read and write activity on the Hive table (such as INSERT or
SELECT) translates directly into read and write operations on the underlying DynamoDB table.
Apache Hive on Amazon EMR implements its own logic for balancing the I/O load on the DynamoDB
table and seeks to minimize the possibility of exceeding the table's provisioned throughput. At the
end of each Hive query, Amazon EMR returns runtime metrics, including the number of times your
provisioned throughput was exceeded. You can use this information, together with CloudWatch metrics
on your DynamoDB table, to improve performance in subsequent requests.
The Amazon EMR console provides basic monitoring tools for your cluster. For more information, see
View and Monitor a Cluster in the Amazon EMR Management Guide.
You can also monitor your cluster and Hadoop jobs using web-based tools, such as Hue, Ganglia, and the
Hadoop web interface. For more information, see View Web Interfaces Hosted on Amazon EMR Clusters
in the Amazon EMR Management Guide.
This section describes steps you can take to performance-tune Hive operations on external DynamoDB
tables.
Topics
• DynamoDB Provisioned Throughput (p. 869)
• Adjusting the Mappers (p. 870)
• Additional Topics (p. 871)
For example, suppose that you have provisioned 100 read capacity units for your DynamoDB table. This
will let you read 409,600 bytes per second (100 × 4 KB read capacity unit size). Now suppose that the
table contains 20 GB of data (21,474,836,480 bytes) and you want to use the SELECT statement to
select all of the data using HiveQL. You can estimate how long the query will take to run like this:
21,474,836,480 / 409,600 = 52,429 seconds = 14.56 hours
In this scenario, the DynamoDB table is a bottleneck. It won't help to add more Amazon EMR nodes,
because the Hive throughput is constrained to only 409,600 bytes per second. The only way to
decrease the time required for the SELECT statement is to increase the provisioned read capacity of the
DynamoDB table.
You can perform a similar calculation to estimate how long it would take to bulk-load data into a Hive
external table mapped to a DynamoDB table. Determine the total number of bytes in the data you want
to load, divide it by the size of one DynamoDB write capacity unit (1 KB), and then divide the result by
the number of write capacity units provisioned for the table. This will yield the number of seconds it will
take to load the table.
You should regularly monitor the CloudWatch metrics for your table. For a quick overview in the
DynamoDB console, choose your table and then choose the Metrics tab. From here, you can view read
and write capacity units consumed and read and write requests that have been throttled.
Read Capacity
Amazon EMR manages the request load against your DynamoDB table, according to
the table's provisioned throughput settings. However, if you notice a large number of
ProvisionedThroughputExceeded messages in the job output, you can adjust the default read rate.
To do this, you can modify the dynamodb.throughput.read.percent configuration variable. You can
use the SET command to set this variable at the Hive command prompt:
SET dynamodb.throughput.read.percent=1.0;
This variable persists for the current Hive session only. If you exit Hive and return to it later,
dynamodb.throughput.read.percent will return to its default value.
The value of dynamodb.throughput.read.percent can be between 0.1 and 1.5, inclusive. 0.5
represents the default read rate, meaning that Hive will attempt to consume half of the read capacity of
the table. If you increase the value above 0.5, Hive will increase the request rate; decreasing the value
below 0.5 decreases the read request rate. (The actual read rate will vary, depending on factors such as
whether there is a uniform key distribution in the DynamoDB table.)
If you notice that Hive is frequently depleting the provisioned read capacity of the table, or if your read
requests are being throttled too much, try reducing dynamodb.throughput.read.percent below
0.5. If you have sufficient read capacity in the table and want more responsive HiveQL operations, you
can set the value above 0.5.
Write Capacity
Amazon EMR manages the request load against your DynamoDB table, according to
the table's provisioned throughput settings. However, if you notice a large number of
ProvisionedThroughputExceeded messages in the job output, you can adjust the default write rate.
To do this, you can modify the dynamodb.throughput.write.percent configuration variable. You
can use the SET command to set this variable at the Hive command prompt:
SET dynamodb.throughput.write.percent=1.0;
This variable persists for the current Hive session only. If you exit Hive and return to it later,
dynamodb.throughput.write.percent will return to its default value.
The value of dynamodb.throughput.write.percent can be between 0.1 and 1.5, inclusive. 0.5
represents the default write rate, meaning that Hive will attempt to consume half of the write capacity
of the table. If you increase the value above 0.5, Hive will increase the request rate; decreasing the value
below 0.5 decreases the write request rate. (The actual write rate will vary, depending on factors such as
whether there is a uniform key distribution in the DynamoDB table.)
If you notice that Hive is frequently depleting the provisioned write capacity of the table, or if your write
requests are being throttled too much, try reducing dynamodb.throughput.write.percent below
0.5. If you have sufficient capacity in the table and want more responsive HiveQL operations, you can set
the value above 0.5.
When you write data to DynamoDB using Hive, ensure that the number of write capacity units is greater
than the number of mappers in the cluster. For example, consider an Amazon EMR cluster consisting of
10 m1.xlarge nodes. The m1.xlarge node type provides 8 mapper tasks, so the cluster would have a total
of 80 mappers (10 × 8). If your DynamoDB table has fewer than 80 write capacity units, then a Hive write
operation could consume all of the write throughput for that table.
To determine the number of mappers for Amazon EMR node types, see Task Configuration in the
Amazon EMR Developer Guide.
For more information on mappers, see Adjusting the Mappers (p. 870).
If your DynamoDB table has ample throughput capacity for reads, you can try increasing the number of
mappers by doing one of the following:
• Increase the size of the nodes in your cluster. For example, if your cluster is using m1.large nodes (three
mappers per node), you can try upgrading to m1.xlarge nodes (eight mappers per node).
• Increase the number of nodes in your cluster. For example, if you have a three-node cluster of
m1.xlarge nodes, you have a total of 24 mappers available. If you were to double the size of the cluster,
with the same type of node, you would have 48 mappers.
You can use the AWS Management Console to manage the size or the number of nodes in your cluster.
(You might need to restart the cluster for these changes to take effect.)
You set the value for mapred.tasktracker.map.tasks.maximum as a bootstrap action when you
first launch your Amazon EMR cluster. For more information, see (Optional) Create Bootstrap Actions to
Install Additional Software in the Amazon EMR Management Guide.
You can use the dynamodb.max.map.tasks parameter to set an upper limit for map tasks:
SET dynamodb.max.map.tasks=1
This value must be equal to or greater than 1. When Hive processes your query, the resulting Hadoop job
will use no more than dynamodb.max.map.tasks map tasks when reading from the DynamoDB table.
Additional Topics
The following are some more ways to tune applications that use Hive to access DynamoDB.
Retry Duration
By default, Hive will rerun a Hadoop job if it has not returned any results from DynamoDB within two
minutes. You can adjust this interval by modifying the dynamodb.retry.duration parameter:
SET dynamodb.retry.duration=2;
The value must be a nonzero integer, representing the number of minutes in the retry interval. The
default for dynamodb.retry.duration is 2 (minutes).
Process Duration
Data consistency in DynamoDB depends on the order of read and write operations on each node. While
a Hive query is in progress, another application might load new data into the DynamoDB table or modify
or delete existing data. In this case, the results of the Hive query might not reflect changes made to the
data while the query was running.
Request Time
Scheduling Hive queries that access a DynamoDB table when there is lower demand on the DynamoDB
table improves performance. For example, if most of your application's users live in San Francisco,
you might choose to export daily data at 4:00 A.M. PST when the majority of users are asleep and not
updating records in your DynamoDB database.
Limits in DynamoDB
This section describes current limits within Amazon DynamoDB (or no limit, in some cases). Each limit
listed below applies on a per-region basis unless otherwise specified.
Topics
• Read/Write Capacity Mode and Throughput (p. 873)
• Tables (p. 875)
• Secondary Indexes (p. 876)
• Partition Keys and Sort Keys (p. 876)
• Naming Rules (p. 877)
• Data Types (p. 877)
• Items (p. 878)
• Attributes (p. 878)
• Expression Parameters (p. 879)
• DynamoDB Transactions (p. 879)
• DynamoDB Streams (p. 880)
• DynamoDB Accelerator (DAX) (p. 880)
• API-Specific Limits (p. 881)
• DynamoDB Encryption at rest (p. 881)
One read capacity unit = one strongly consistent read per second, or two eventually consistent reads per
second, for items up to 4 KB in size.
One write capacity unit = one write per second, for items up to 1 KB in size.
Transactional read requests require two read capacity units to perform one read per second for items up
to 4 KB.
Transactional write requests require two write capacity units to perform one write per second for items
up to 1 KB.
Transactional read requests require two read request units to perform one read for items up to 4 KB.
Transactional write requests require two write request units to perform one write for items up to 1 KB.
• US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), South America (São
Paulo), EU (Frankfurt), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia
Pacific (Singapore), Asia Pacific (Sydney), China (Beijing) Regions:
Note
All the account's available throughput can be applied to a single table or across multiple
tables.
The provisioned throughput limit includes the sum of the capacity of the table together with the
capacity of all of its global secondary indexes.
In the AWS Management Console, you can use Amazon CloudWatch to see what your current read and
write throughput is in a given AWS Region by looking at the read capacity and write capacity graphs
on the Metrics tab. Make sure that you are not too close to the limits.
If you increased your provisioned throughput default limits, you can use the DescribeLimits operation to
see the current limit values.
You cannot exceed your per-account limits when you add provisioned capacity, and DynamoDB will not
permit you to increase provisioned capacity extremely rapidly. Aside from these restrictions, you can
increase the provisioned capacity for your tables as high as you need. For more information about per-
account limits, see the preceding section Throughput Default Limits (p. 874).
Example
A table with a GSI, in the first 4 hours of a day, can be modified as follows:
At the end of that same day the table and the GSI's throughput can potentially be decreased a total of
27 times each.
Tables
Table Size
There is no practical limit on a table's size. Tables are unconstrained in terms of the number of items or
the number of bytes.
Secondary Indexes
Secondary Indexes Per Table
You can define a maximum of 5 local secondary indexes.
There is an initial limit of 20 global secondary indexes per table. To request a service limit increase see
https://aws.amazon.com/support.
The limit of global secondary indexes per table is 5 for the following regions:
This limit does not apply for secondary indexes with a ProjectionType of KEYS_ONLY or ALL.
The limit of projected secondary index attributes per table is 20 for the following regions:
The exception is for tables with local secondary indexes. With a local secondary index, there is a limit on
item collection sizes: For every distinct partition key value, the total sizes of all table and index items
cannot exceed 10 GB. This might constrain the number of sort keys per partition key value. For more
information, see Item Collection Size Limit (p. 541).
Naming Rules
Table Names and Secondary Index Names
Names for tables and secondary indexes must be at least 3 characters long, but no greater than 255
characters long. Allowed characters are:
• A-Z
• a-z
• 0-9
• _ (underscore)
• - (hyphen)
• . (dot)
Attribute Names
In general, an attribute name must be at least 1 character long, but no greater than 64 KB long.
The exceptions are listed below. These attribute names must be no greater than 255 characters long:
• Secondary index partition key names.
• Secondary index sort key names.
• The names of any user-specified projected attributes (applicable only to local secondary indexes).
These attribute names must be encoded using UTF-8, and the total size of each name (after encoding) cannot exceed 255 bytes.
Data Types
String
The length of a String is constrained by the maximum item size of 400 KB.
Strings are Unicode with UTF-8 binary encoding. Because UTF-8 is a variable width encoding, DynamoDB
determines the length of a String using its UTF-8 bytes.
Number
A Number can have up to 38 digits of precision, and can be positive, negative, or zero.
DynamoDB uses JSON strings to represent Number data in requests and replies. For more information,
see DynamoDB Low-Level API (p. 216).
If number precision is important, you should pass numbers to DynamoDB using strings that you convert
from a number type.
Binary
The length of a Binary is constrained by the maximum item size of 400 KB.
Applications that work with Binary attributes must encode the data in Base64 format before sending it
to DynamoDB. Upon receipt of the data, DynamoDB decodes it into an unsigned byte array and uses that
as the length of the attribute.
Items
Item Size
The maximum item size in DynamoDB is 400 KB, which includes both attribute name binary length
(UTF-8 length) and attribute value lengths (again binary length). The attribute name counts towards the
size limit.
For example, consider an item with two attributes: one attribute named "shirt-color" with value "R" and
another attribute named "shirt-size" with value "M". The total size of that item is 23 bytes.
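The following sketch reproduces this arithmetic by summing the UTF-8 lengths of each attribute name and value:

attributes = {"shirt-color": "R", "shirt-size": "M"}
# Item size = attribute name lengths plus attribute value lengths, in UTF-8 bytes.
size = sum(len(k.encode("utf-8")) + len(v.encode("utf-8")) for k, v in attributes.items())
print(size)  # 23 (11 + 1 + 10 + 1)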
Attributes
Attribute Name-Value Pairs Per Item
The cumulative size of attributes per item must fit within the maximum DynamoDB item size (400 KB).
Attribute Values
An attribute value cannot be an empty String or empty Set (String Set, Number Set, or Binary Set).
However, empty Lists and Maps are allowed.
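As an illustrative sketch only (Boto3; the table name is hypothetical, and the rejection behavior reflects this version of the service), an empty List is accepted while an empty String value is not:

import boto3

table = boto3.resource("dynamodb").Table("ExampleTable")  # hypothetical table
table.put_item(Item={"Id": "1", "Tags": []})   # empty List: allowed
table.put_item(Item={"Id": "2", "Notes": ""})  # empty String: rejected with a ValidationException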
Expression Parameters
Expression parameters include ProjectionExpression, ConditionExpression,
UpdateExpression, and FilterExpression.
Lengths
The maximum length of any expression string is 4 KB. For example, the size of the
ConditionExpression a=b is three bytes.
The maximum length of any single expression attribute name or expression attribute value is 255 bytes.
For example, #name is five bytes; :val is four bytes.
The maximum length of all substitution variables in an expression is 2 MB. This is the sum of the lengths
of all ExpressionAttributeNames and ExpressionAttributeValues.
Reserved Words
DynamoDB does not prevent you from using names that conflict with reserved words. (For a complete
list, see Reserved Words in DynamoDB (p. 937).)
However, if you use a reserved word in an expression parameter, you must also specify
ExpressionAttributeNames. For more information, see Expression Attribute Names (p. 387).
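For example, Comment appears in the reserved words list. The following Boto3 sketch (the Thread table and key values come from the examples in this guide) uses an expression attribute name to work around it:

import boto3

table = boto3.resource("dynamodb").Table("Thread")
table.update_item(
    Key={"ForumName": "Amazon DynamoDB", "Subject": "DynamoDB Thread 1"},
    UpdateExpression="SET #c = :val",
    ExpressionAttributeNames={"#c": "Comment"},  # 'Comment' is a reserved word
    ExpressionAttributeValues={":val": "A new comment"},
)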
DynamoDB Transactions
DynamoDB transactional APIs have the following constraints:
DynamoDB Streams
Simultaneous Readers of a Shard in DynamoDB
Streams
Do not allow more than two processes to read from the same DynamoDB Streams shard at the same
time. Exceeding this limit can result in request throttling.
• US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), South America (São
Paulo), EU (Frankfurt), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore),
Asia Pacific (Sydney), China (Beijing) Regions:
• Per table – 40,000 write capacity units
• All Other Regions:
• Per table – 10,000 write capacity units
Note
The provisioned throughput limits also apply for DynamoDB tables with Streams enabled. For
more information, see Throughput Default Limits (p. 874).
Nodes
A DAX cluster consists of exactly 1 primary node, and between 0 and 9 read replica nodes.
The total number of nodes (per AWS account) cannot exceed 50 in a single AWS region.
Parameter Groups
You can create up to 20 DAX parameter groups per region.
Subnet Groups
You can create up to 50 DAX subnet groups per region.
API-Specific Limits
CreateTable/UpdateTable/DeleteTable
In general, you can have up to 50 CreateTable, UpdateTable, and DeleteTable requests running simultaneously (in any combination). The only exception is when you are creating a table with one or more secondary indexes. You can have up to 25 such requests running at a time; however, if the table or index specifications are complex, DynamoDB might temporarily reduce the number of concurrent operations.
BatchGetItem
A single BatchGetItem operation can retrieve a maximum of 100 items. The total size of all the
items retrieved cannot exceed 16 MB.
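Because of these limits, a caller should also retry any unprocessed keys. A minimal Boto3 sketch (the Forum table and key come from the examples in this guide):

import boto3

client = boto3.client("dynamodb")
request = {"Forum": {"Keys": [{"Name": {"S": "Amazon DynamoDB"}}]}}
items = []
while request:
    response = client.batch_get_item(RequestItems=request)
    items.extend(response["Responses"].get("Forum", []))
    request = response.get("UnprocessedKeys")  # empty when everything has been retrieved
print(items)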
BatchWriteItem
A single BatchWriteItem operation can contain up to 25 PutItem or DeleteItem requests. The total size of all the items written cannot exceed 16 MB.
DescribeLimits
DescribeLimits should only be called periodically. You can expect throttling errors if you call it
more than once in a minute.
Query
The result set from a Query is limited to 1 MB per call. You can use the LastEvaluatedKey from
the query response to retrieve more results.
Scan
The result set from a Scan is limited to 1 MB per call. You can use the LastEvaluatedKey from the
scan response to retrieve more results.
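For example, a paginated Scan in Boto3 might loop on LastEvaluatedKey as follows (a sketch; the ProductCatalog table comes from the examples in this guide):

import boto3

table = boto3.resource("dynamodb").Table("ProductCatalog")
items, kwargs = [], {}
while True:
    page = table.scan(**kwargs)
    items.extend(page["Items"])
    if "LastEvaluatedKey" not in page:
        break  # no more pages
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]  # resume after the last item read
print(len(items))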
DynamoDB Encryption at Rest
You can switch encryption keys to use an AWS owned CMK as often as necessary.
These are the limits unless you request a higher amount. To request a service limit increase, see https://aws.amazon.com/support.
API Reference
The following DynamoDB operations are supported. For complete documentation, see the Amazon
DynamoDB API Reference.
• BatchGetItem
• BatchWriteItem
• CreateBackup
• CreateGlobalTable
• CreateTable
• DeleteBackup
• DeleteItem
• DeleteTable
• DescribeBackup
• DescribeContinuousBackups
• DescribeGlobalTable
• DescribeGlobalTableSettings
• DescribeLimits
• DescribeTable
• DescribeTimeToLive
• GetItem
• ListBackups
• ListGlobalTables
• ListTables
• ListTagsOfResource
• PutItem
• Query
• RestoreTableFromBackup
• RestoreTableToPointInTime
• Scan
• TagResource
• TransactGetItems
• TransactWriteItems
• UntagResource
• UpdateContinuousBackups
• UpdateGlobalTable
• UpdateGlobalTableSettings
• UpdateItem
• UpdateTable
• UpdateTimeToLive
The following DynamoDB Streams operations are supported. For complete documentation, see the
Amazon DynamoDB Streams API Reference.
• DescribeStream
• GetShardIterator
• GetRecords
• ListStreams
The following DAX operations are supported. For complete documentation, see the Amazon DynamoDB
Accelerator API reference.
• CreateCluster
• CreateParameterGroup
• CreateSubnetGroup
• DecreaseReplicationFactor
• DeleteCluster
• DeleteParameterGroup
• DeleteSubnetGroup
• DescribeClusters
• DescribeDefaultParameters
• DescribeEvents
• DescribeParameterGroups
• DescribeParameters
• DescribeSubnetGroups
• IncreaseReplicationFactor
• ListTags
• RebootNode
• TagResource
• UntagResource
• UpdateCluster
• UpdateParameterGroup
• UpdateSubnetGroup
DynamoDB Appendix
Topics
• Troubleshooting SSL/TLS connection establishment issues (p. 884)
• Example Tables and Data (p. 886)
• Creating Example Tables and Uploading Data (p. 895)
• DynamoDB Example Application Using AWS SDK for Python (Boto): Tic-Tac-Toe (p. 911)
• Exporting and Importing DynamoDB Data Using AWS Data Pipeline (p. 928)
• Amazon DynamoDB Storage Backend for Titan (p. 936)
• Reserved Words in DynamoDB (p. 937)
• Legacy Conditional Parameters (p. 945)
• Previous Low-Level API Version (2011-12-05) (p. 962)
To validate access to DynamoDB endpoints, you will need to develop a test that accesses the DynamoDB API or DynamoDB Streams API in the EU-WEST-3 region and validates that the TLS handshake succeeds. The specific endpoints you will need to access in such a test are:
• DynamoDB: https://dynamodb.eu-west-3.amazonaws.com
• DynamoDB Streams: https://streams.dynamodb.eu-west-3.amazonaws.com
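One way to write such a test (a sketch using only the Python standard library; any TLS-capable client works) is to open a connection to the endpoint and confirm that the handshake completes:

import socket
import ssl

host = "dynamodb.eu-west-3.amazonaws.com"
context = ssl.create_default_context()  # uses the system trust store
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("TLS handshake succeeded:", tls.version())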
If your application does not support the Amazon Trust Services Certificate Authority, the TLS handshake will fail. Certificates used by DynamoDB chain up to one of the following root CAs:
• Amazon Root CA 1
• Starfield Services Root Certificate Authority - G2
• Starfield Class 2 Certification Authority
If your clients already trust ANY of the above three CAs, then they will trust the certificates used by DynamoDB, and no action is required. However, if your clients do not already trust any of the above CAs, the HTTPS connections to the DynamoDB or DynamoDB Streams APIs will fail. For more information, see this blog post: https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/.
If you access DynamoDB through a web browser, update the browser to a current version that trusts these CAs:
• Chrome: https://support.google.com/chrome/answer/95414?hl=en
• Firefox: https://support.mozilla.org/en-US/kb/update-firefox-latest-version
• Safari: https://support.apple.com/en-us/HT204416
• Internet Explorer: https://support.microsoft.com/en-us/help/17295/windows-internet-explorer-which-version#ie=other
The following operating systems and programming languages support Amazon Trust Services
certificates:
• Microsoft Windows versions that have January 2005 or later updates installed, Windows Vista,
Windows 7, Windows Server 2008, and newer versions.
• Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5, Mac OS X 10.5 and newer versions.
• Red Hat Enterprise Linux 5 (March 2007), 6, and 7, and CentOS 5, 6, and 7
• Ubuntu 8.10
• Debian 5.0
If you are still unable to connect, please consult your software documentation or OS vendor, or contact AWS Support (https://aws.amazon.com/support) for further assistance.
• ProductCatalog – primary key: Id (Number)
• Forum – primary key: Name (String)
• Thread – primary key: ForumName (String), Subject (String)
• Reply – primary key: Id (String), ReplyDateTime (String)
The Reply table has a global secondary index named PostedBy-Message-Index. This index will facilitate
queries on two non-key attributes of the Reply table.
• PostedBy (String)
• Message (String)
For more information about these tables, see Use Case 1: Product Catalog (p. 323) and Use Case 2:
Forum Application (p. 324).
The following sections show the sample data files that are used for loading the ProductCatalog, Forum,
Thread and Reply tables.
Each data file contains multiple PutRequest elements, each of which contains a single item. These PutRequest elements are used as input to the BatchWriteItem operation, using the AWS Command Line Interface (AWS CLI).
For more information, see Step 2: Load Data into Tables (p. 325) in Creating Tables and Loading Sample
Data (p. 323).
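For example, assuming the ProductCatalog requests are saved in a file named ProductCatalog.json, the load command would look like the following:
aws dynamodb batch-write-item --request-items file://ProductCatalog.json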
"L": [
{
"S": "Author1"
},
{
"S": "Author2"
}
]
},
"Price": {
"N": "20"
},
"Dimensions": {
"S": "8.5 x 11.0 x 0.8"
},
"PageCount": {
"N": "600"
},
"InPublication": {
"BOOL": true
},
"ProductCategory": {
"S": "Book"
}
}
}
},
{
"PutRequest": {
"Item": {
"Id": {
"N": "103"
},
"Title": {
"S": "Book 103 Title"
},
"ISBN": {
"S": "333-3333333333"
},
"Authors": {
"L": [
{
"S": "Author1"
},
{
"S": "Author2"
}
]
},
"Price": {
"N": "2000"
},
"Dimensions": {
"S": "8.5 x 11.0 x 1.5"
},
"PageCount": {
"N": "600"
},
"InPublication": {
"BOOL": false
},
"ProductCategory": {
"S": "Book"
}
}
}
},
{
"PutRequest": {
"Item": {
"Id": {
"N": "201"
},
"Title": {
"S": "18-Bike-201"
},
"Description": {
"S": "201 Description"
},
"BicycleType": {
"S": "Road"
},
"Brand": {
"S": "Mountain A"
},
"Price": {
"N": "100"
},
"Color": {
"L": [
{
"S": "Red"
},
{
"S": "Black"
}
]
},
"ProductCategory": {
"S": "Bicycle"
}
}
}
},
{
"PutRequest": {
"Item": {
"Id": {
"N": "202"
},
"Title": {
"S": "21-Bike-202"
},
"Description": {
"S": "202 Description"
},
"BicycleType": {
"S": "Road"
},
"Brand": {
"S": "Brand-Company A"
},
"Price": {
"N": "200"
},
"Color": {
"L": [
{
"S": "Green"
},
{
"S": "Black"
}
]
},
"ProductCategory": {
"S": "Bicycle"
}
}
}
},
{
"PutRequest": {
"Item": {
"Id": {
"N": "203"
},
"Title": {
"S": "19-Bike-203"
},
"Description": {
"S": "203 Description"
},
"BicycleType": {
"S": "Road"
},
"Brand": {
"S": "Brand-Company B"
},
"Price": {
"N": "300"
},
"Color": {
"L": [
{
"S": "Red"
},
{
"S": "Green"
},
{
"S": "Black"
}
]
},
"ProductCategory": {
"S": "Bicycle"
}
}
}
},
{
"PutRequest": {
"Item": {
"Id": {
"N": "204"
},
"Title": {
"S": "18-Bike-204"
},
"Description": {
"S": "204 Description"
},
"BicycleType": {
"S": "Mountain"
},
"Brand": {
"S": "Brand-Company B"
},
"Price": {
"N": "400"
},
"Color": {
"L": [
{
"S": "Red"
}
]
},
"ProductCategory": {
"S": "Bicycle"
}
}
}
},
{
"PutRequest": {
"Item": {
"Id": {
"N": "205"
},
"Title": {
"S": "18-Bike-204"
},
"Description": {
"S": "205 Description"
},
"BicycleType": {
"S": "Hybrid"
},
"Brand": {
"S": "Brand-Company C"
},
"Price": {
"N": "500"
},
"Color": {
"L": [
{
"S": "Red"
},
{
"S": "Black"
}
]
},
"ProductCategory": {
"S": "Bicycle"
}
}
}
}
]
}
}
},
{
"PutRequest": {
"Item": {
"ForumName": {
"S": "Amazon DynamoDB"
},
"Subject": {
"S": "DynamoDB Thread 2"
},
"Message": {
"S": "DynamoDB thread 2 message"
},
"LastPostedBy": {
"S": "User A"
},
"LastPostedDateTime": {
"S": "2015-09-15T19:58:22.514Z"
},
"Views": {
"N": "3"
},
"Replies": {
"N": "0"
},
"Answered": {
"N": "0"
},
"Tags": {
"L": [
{
"S": "items"
},
{
"S": "attributes"
},
{
"S": "throughput"
}
]
}
}
}
},
{
"PutRequest": {
"Item": {
"ForumName": {
"S": "Amazon S3"
},
"Subject": {
"S": "S3 Thread 1"
},
"Message": {
"S": "S3 thread 1 message"
},
"LastPostedBy": {
"S": "User A"
},
"LastPostedDateTime": {
"S": "2015-09-29T19:58:22.514Z"
},
"Views": {
"N": "0"
},
"Replies": {
"N": "0"
},
"Answered": {
"N": "0"
},
"Tags": {
"L": [
{
"S": "largeobjects"
},
{
"S": "multipart upload"
}
]
}
}
}
}
]
}
"Item": {
"Id": {
"S": "Amazon DynamoDB#DynamoDB Thread 2"
},
"ReplyDateTime": {
"S": "2015-09-29T19:58:22.947Z"
},
"Message": {
"S": "DynamoDB Thread 2 Reply 1 text"
},
"PostedBy": {
"S": "User A"
}
}
}
},
{
"PutRequest": {
"Item": {
"Id": {
"S": "Amazon DynamoDB#DynamoDB Thread 2"
},
"ReplyDateTime": {
"S": "2015-10-05T19:58:22.947Z"
},
"Message": {
"S": "DynamoDB Thread 2 Reply 2 text"
},
"PostedBy": {
"S": "User A"
}
}
}
}
]
}
In Creating Tables and Loading Sample Data (p. 323), you first create tables using the DynamoDB console
and then use the AWS CLI to add data to the tables. This appendix provides code to both create the
tables and add data programmatically.
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
package com.amazonaws.codesamples;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.HashSet;
import java.util.TimeZone;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.LocalSecondaryIndex;
import com.amazonaws.services.dynamodbv2.model.Projection;
import com.amazonaws.services.dynamodbv2.model.ProjectionType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
try {
deleteTable(productCatalogTableName);
deleteTable(forumTableName);
deleteTable(threadTableName);
deleteTable(replyTableName);
loadSampleProducts(productCatalogTableName);
loadSampleForums(forumTableName);
loadSampleThreads(threadTableName);
loadSampleReplies(replyTableName);
}
catch (Exception e) {
System.err.println("Program failed:");
System.err.println(e.getMessage());
}
System.out.println("Success.");
}
}
catch (Exception e) {
System.err.println("DeleteTable request failed for " + tableName);
System.err.println(e.getMessage());
}
}
try {
    if (sortKeyName != null) {
        keySchema.add(new KeySchemaElement().withAttributeName(sortKeyName).withKeyType(KeyType.RANGE)); // Sort key
        attributeDefinitions.add(new AttributeDefinition().withAttributeName(sortKeyName).withAttributeType(sortKeyType));
    }
    attributeDefinitions.add(new AttributeDefinition().withAttributeName("PostedBy").withAttributeType("S"));
        new KeySchemaElement().withAttributeName("PostedBy").withKeyType(KeyType.RANGE)) // Sort key
        .withProjection(new Projection().withProjectionType(ProjectionType.KEYS_ONLY)));
        request.setLocalSecondaryIndexes(localSecondaryIndexes);
    }
    request.setAttributeDefinitions(attributeDefinitions);
}
catch (Exception e) {
System.err.println("CreateTable request failed for " + tableName);
System.err.println(e.getMessage());
}
}
try {
table.putItem(item);
// Add bikes.
}
catch (Exception e) {
System.err.println("Failed to create item in " + tableName);
System.err.println(e.getMessage());
}
try {
}
catch (Exception e) {
System.err.println("Failed to create item in " + tableName);
System.err.println(e.getMessage());
}
}
dateFormatter.setTimeZone(TimeZone.getTimeZone("UTC"));
}
catch (Exception e) {
System.err.println("Failed to create item in " + tableName);
System.err.println(e.getMessage());
}
dateFormatter.setTimeZone(TimeZone.getTimeZone("UTC"));
// Add threads.
}
catch (Exception e) {
System.err.println("Failed to create item in " + tableName);
System.err.println(e.getMessage());
}
}
/**
* Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* This file is licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License. A copy of
* the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*/
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
using Amazon.SecurityToken;
namespace com.amazonaws.codesamples
{
class CreateTablesLoadData
{
private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();
// Create tables (using the AWS SDK for .NET low-level API).
CreateTableProductCatalog();
CreateTableForum();
CreateTableThread(); // ForumName, Subject
CreateTableReply();
},
new KeySchemaElement()
{
AttributeName = "ReplyDateTime",
KeyType = "RANGE"
}
},
LocalSecondaryIndexes = new List<LocalSecondaryIndex>()
{
new LocalSecondaryIndex()
{
IndexName = "PostedBy_index",
book2["Id"] = 102;
book2["Title"] = "Book 102 Title";
book2["ISBN"] = "222-2222222222";
book2["Authors"] = new List<string> { "Author 1", "Author 2" }; ;
book2["Price"] = 20;
book2["Dimensions"] = "8.5 x 11.0 x 0.8";
book2["PageCount"] = 600;
book2["InPublication"] = true;
book2["ProductCategory"] = "Book";
productCatalogTable.PutItem(book2);
{
Table forumTable = Table.LoadTable(client, "Forum");
forumTable.PutItem(forum1);
forumTable.PutItem(forum2);
}
// Thread 1.
var thread1 = new Document();
thread1["ForumName"] = "Amazon DynamoDB"; // Hash attribute.
thread1["Subject"] = "DynamoDB Thread 1"; // Range attribute.
thread1["Message"] = "DynamoDB thread 1 message text";
thread1["LastPostedBy"] = "User A";
thread1["LastPostedDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(14, 0, 0,
0));
thread1["Views"] = 0;
thread1["Replies"] = 0;
thread1["Answered"] = false;
thread1["Tags"] = new List<string> { "index", "primarykey", "table" };
threadTable.PutItem(thread1);
// Thread 2.
var thread2 = new Document();
thread2["ForumName"] = "Amazon DynamoDB"; // Hash attribute.
thread2["Subject"] = "DynamoDB Thread 2"; // Range attribute.
thread2["Message"] = "DynamoDB thread 2 message text";
thread2["LastPostedBy"] = "User A";
thread2["LastPostedDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(21, 0, 0,
0));
thread2["Views"] = 0;
thread2["Replies"] = 0;
thread2["Answered"] = false;
thread2["Tags"] = new List<string> { "index", "primarykey", "rangekey" };
threadTable.PutItem(thread2);
// Thread 3.
var thread3 = new Document();
thread3["ForumName"] = "Amazon S3"; // Hash attribute.
thread3["Subject"] = "S3 Thread 1"; // Range attribute.
thread3["Message"] = "S3 thread 3 message text";
thread3["LastPostedBy"] = "User A";
thread3["LastPostedDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(7, 0, 0,
0));
thread3["Views"] = 0;
thread3["Replies"] = 0;
thread3["Answered"] = false;
thread3["Tags"] = new List<string> { "largeobjects", "multipart upload" };
threadTable.PutItem(thread3);
}
// Reply 1 - thread 1.
var thread1Reply1 = new Document();
thread1Reply1["Id"] = "Amazon DynamoDB#DynamoDB Thread 1"; // Hash attribute.
thread1Reply1["ReplyDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(21, 0,
0, 0)); // Range attribute.
thread1Reply1["Message"] = "DynamoDB Thread 1 Reply 1 text";
thread1Reply1["PostedBy"] = "User A";
replyTable.PutItem(thread1Reply1);
// Reply 2 - thread 1.
var thread1reply2 = new Document();
thread1reply2["Id"] = "Amazon DynamoDB#DynamoDB Thread 1"; // Hash attribute.
thread1reply2["ReplyDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(14, 0,
0, 0)); // Range attribute.
thread1reply2["Message"] = "DynamoDB Thread 1 Reply 2 text";
thread1reply2["PostedBy"] = "User B";
replyTable.PutItem(thread1reply2);
// Reply 3 - thread 1.
var thread1Reply3 = new Document();
thread1Reply3["Id"] = "Amazon DynamoDB#DynamoDB Thread 1"; // Hash attribute.
thread1Reply3["ReplyDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(7, 0, 0,
0)); // Range attribute.
thread1Reply3["Message"] = "DynamoDB Thread 1 Reply 3 text";
thread1Reply3["PostedBy"] = "User B";
replyTable.PutItem(thread1Reply3);
// Reply 1 - thread 2.
var thread2Reply1 = new Document();
thread2Reply1["Id"] = "Amazon DynamoDB#DynamoDB Thread 2"; // Hash attribute.
thread2Reply1["ReplyDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(7, 0, 0,
0)); // Range attribute.
thread2Reply1["Message"] = "DynamoDB Thread 2 Reply 1 text";
thread2Reply1["PostedBy"] = "User A";
replyTable.PutItem(thread2Reply1);
// Reply 2 - thread 2.
var thread2Reply2 = new Document();
thread2Reply2["Id"] = "Amazon DynamoDB#DynamoDB Thread 2"; // Hash attribute.
thread2Reply2["ReplyDateTime"] = DateTime.UtcNow.Subtract(new TimeSpan(1, 0, 0,
0)); // Range attribute.
thread2Reply2["Message"] = "DynamoDB Thread 2 Reply 2 text";
thread2Reply2["PostedBy"] = "User A";
replyTable.PutItem(thread2Reply2);
}
}
}
The Tic-Tac-Toe game is an example web application built on Amazon DynamoDB. The application uses the AWS SDK for Python (Boto) to make the necessary DynamoDB calls to store game data in a DynamoDB table, and the Python web framework Flask to illustrate end-to-end application development in DynamoDB, including how to model data. It also demonstrates best practices for DynamoDB data modeling, including the table you create for the game application, the primary key you define, the additional indexes you need based on your query requirements, and the use of concatenated value attributes.
Until another user accepts your invitation, the game status remains as PENDING. After an opponent
accepts the invite, the game status changes to IN_PROGRESS.
3. The game begins after the opponent logs in and accepts the invite.
4. The application stores all game moves and status information in a DynamoDB table.
5. The game ends with a win or a draw, which sets the game status to FINISHED.
• Step 1: Deploy and Test Locally (p. 911) – In this section, you download, deploy, and test the
application on your local computer. You will create the required tables in the downloadable version of
DynamoDB.
• Step 2: Examine the Data Model and Implementation Details (p. 915) – This section first describes
in detail the data model, including the indexes and the use of the concatenated value attribute. Then
the section explains how the application works.
• Step 3: Deploy in Production Using the DynamoDB Service (p. 921) – This section focuses on
deployment considerations in production. In this step, you create a table using the Amazon DynamoDB
service and deploy the application using AWS Elastic Beanstalk. When you have the application in
production, you also grant appropriate permissions so the application can access the DynamoDB table.
The instructions in this section walk you through the end-to-end production deployment.
• Step 4: Clean Up Resources (p. 927) – This section highlights areas that are not covered by this
example. The section also provides steps for you to remove the AWS resources you created in the
preceding steps so that you avoid incurring any charges.
In this step you download, deploy, and test the Tic-Tac-Toe game application on your local computer.
Instead of using the Amazon DynamoDB web service, you will download DynamoDB to your computer,
and create the required table there.
• Python
• Flask (a microframework for Python)
• AWS SDK for Python (Boto)
• DynamoDB running on your computer
• Git
The Tic-Tac-Toe application has been tested using Python version 2.7.
2. Install Flask and AWS SDK for Python (Boto) using the Python Package Installer (PIP):
• Install PIP.
For instructions, go to Install PIP. On the installation page, choose the get-pip.py link, and then
save the file. Then open a command terminal as an administrator, and type the following at the
command prompt:
python.exe get-pip.py
On Linux, you don't specify the .exe extension. You only specify python get-pip.py.
• Using PIP, install the Flask and Boto packages using the following code:
pip install Flask
pip install boto
3. Download DynamoDB to your computer. For instructions on how to run it, see Setting Up DynamoDB
Local (Downloadable Version) (p. 43).
4. Download the Tic-Tac-Toe application:
a. Install Git. For instructions, go to git Downloads.
b. Execute the following code to download the application:
1. Start DynamoDB.
2. Start the web server for the Tic-Tac-Toe application. To do so, open a command terminal, navigate to the folder where you downloaded the Tic-Tac-Toe application, and run the application locally.
3. Open your browser and type the following:
http://localhost:5000/
Doing this creates the game by adding an item in the Games table. It sets the game status to
PENDING.
8. Open another browser window, and type the following:
http://localhost:5000/
The browser passes information through cookies, so you should use incognito mode or private
browsing so that your cookies don't carry over.
9. Log in as user2.
The game page appears with an empty tic-tac-toe grid. The page also shows relevant game
information such as the game ID, whose turn it is, and game status.
11. Play the game.
For each user move, the web service sends a request to DynamoDB to conditionally update the game
item in the Games table. For example, the conditions ensure the move is valid, the square the user chose
is available, and it was the turn of the user who made the move. For a valid move, the update operation
adds a new attribute corresponding to the selection on the board. The update operation also sets the
value of the existing attribute to the user who can make the next move.
On the game page, the application makes asynchronous JavaScript calls every second, for up to five
minutes, to check if the game state in DynamoDB has changed. If it has, the application updates the
page with new information. After five minutes, the application stops making the requests and you need
to refresh the page to get updated information.
• Table – In DynamoDB, a table is a collection of items (that is, records), and each item is a collection of
name-value pairs called attributes.
In this Tic-Tac-Toe example, the application stores all game data in a table, Games. The application
creates one item in the table per game and stores all game data as attributes. A tic-tac-toe game
can have up to nine moves. Because DynamoDB tables are schemaless, with only the primary key attributes required, the application can store a varying number of attributes per game item.
The Games table has a simple primary key made of one attribute, GameId, of string type. The
application assigns a unique ID to each game. For more information on DynamoDB primary keys, see
Primary Key (p. 5).
When a user initiates a tic-tac-toe game by inviting another user to play, the application creates a new
item in the Games table with attributes storing game metadata, such as the following:
• HostId, the user who initiated the game.
• Opponent, the user who was invited to play.
• The user whose turn it is to play (the user who initiated the game plays first).
• The user who uses the O symbol on the board (the user who initiated the game uses O).
In addition, the application creates a StatusDate concatenated attribute, marking the initial game
state as PENDING. The following screenshot shows an example item as it appears in the DynamoDB
console:
As the game progresses, the application adds one attribute to the game item for each move. The attribute name is the board position, for example TopLeft or BottomRight, and the attribute value is either O or X, depending on which user made the move. For example, a game item might have a TopLeft attribute with the value O, a TopRight attribute with the value O, and a BottomRight attribute with the value X. Consider the following board:
The application then uses the StatusDate attribute in creating secondary indexes by specifying
StatusDate as a sort key for the index. The benefit of using the StatusDate concatenated value
attribute is further illustrated in the indexes discussed next.
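For example, composing the concatenated value is a one-line operation. In the following sketch, the underscore separator matches the queries shown later in this section, and the exact timestamp format is an assumption for illustration:

from datetime import datetime

status = "PENDING"
# Status prefix plus ISO-format UTC timestamp, joined by an underscore.
status_date = status + "_" + datetime.utcnow().isoformat()
print(status_date)  # e.g. PENDING_2015-09-15T19:58:22.514140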
• Global secondary indexes – You can use the table's primary key, GameId, to efficiently query the
table to find a game item. To query the table on attributes other than the primary key attributes,
DynamoDB supports the creation of secondary indexes. In this example application, you build the
following two secondary indexes:
• HostId-StatusDate-index. This index has HostId as a partition key and StatusDate as a sort key.
You can use this index to query on HostId, for example to find games hosted by a particular user.
• OpponentId-StatusDate-index. This index has OpponentId as a partition key and StatusDate
as a sort key. You can use this index to query on Opponent, for example to find games where a
particular user is the opponent.
These indexes are called global secondary indexes because the partition key in these indexes is not the same as the partition key (GameId) used in the primary key of the table.
Note that both the indexes specify StatusDate as a sort key. Doing this enables the following:
• You can query using the BEGINS_WITH comparison operator. For example, you can find all games
with the IN_PROGRESS attribute hosted by a particular user. In this case, the BEGINS_WITH
operator checks for the StatusDate value that begins with IN_PROGRESS.
• DynamoDB stores the items in the index in sorted order, by sort key value. So if all status prefixes are the same (for example, IN_PROGRESS), the ISO format used for the date part sorts items from the oldest to the newest. This approach enables certain queries to be performed efficiently, for example the following (a sketch of the first query appears after this list):
• Retrieve up to 10 of the most recent IN_PROGRESS games hosted by the user who is logged in. For this query, you specify the HostId-StatusDate-index index.
• Retrieve up to 10 of the most recent IN_PROGRESS games where the logged-in user is the opponent. For this query, you specify the OpponentId-StatusDate-index index.
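The application code shown later uses the older Boto interface. As a sketch only, here is how the first of these queries might look with the current AWS SDK for Python (Boto3); the Games table and index names come from this section, and the user ID is hypothetical:

import boto3
from boto3.dynamodb.conditions import Key

games = boto3.resource("dynamodb").Table("Games")
response = games.query(
    IndexName="HostId-StatusDate-index",
    KeyConditionExpression=Key("HostId").eq("user1")
        & Key("StatusDate").begins_with("IN_PROGRESS"),
    ScanIndexForward=False,  # sort descending by StatusDate: newest first
    Limit=10,
)
print(response["Items"])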
For more information about secondary indexes, see Improving Data Access with Secondary
Indexes (p. 493).
• Home page – This page provides the user a simple login, a CREATE button to create a new tic-tac-toe
game, a list of games in progress, game history, and any active pending game invitations.
The home page is not refreshed automatically; you must refresh the page to see the updated lists.
• Game page – This page shows the tic-tac-toe grid where users play.
The application updates the game page automatically every second. The JavaScript in your browser calls the Python web server every second to check whether the game items in the Games table have changed. If they have, the JavaScript triggers a page refresh so that the user sees the updated board.
Home Page
After the user logs in, the application displays the following three lists of information:
• Invitations – This list shows up to the 10 most recent invitations from others that are pending
acceptance by the user who is logged in. In the preceding screenshot, user1 has invitations from user5
and user2 pending.
• Games In-Progress – This list shows up to the 10 most recent games that are in progress. These are
games that the user is actively playing, which have the status IN_PROGRESS. In the screenshot, user1
is actively playing a tic-tac-toe game with user3 and user4.
• Recent History – This list shows up to the 10 most recent games that the user finished, which have the status FINISHED. In the games shown in the screenshot, user1 previously played with user2. For each completed game, the list shows the game result.
In the code, the index function (in application.py) makes the following three calls to retrieve game
status information:
inviteGames = controller.getGameInvites(session["username"])
inProgressGames = controller.getGamesWithStatus(session["username"], "IN_PROGRESS")
finishedGames = controller.getGamesWithStatus(session["username"], "FINISHED")
Each of these calls returns a list of items from DynamoDB, wrapped in Game objects. It is easy to extract data from these objects in the view. The index function passes these object lists to the view to render the HTML.
return render_template("index.html",
user=session["username"],
invites=inviteGames,
inprogress=inProgressGames,
finished=finishedGames)
The Tic-Tac-Toe application defines the Game class primarily to store game data retrieved from
DynamoDB. These functions return lists of Game objects that enable you to isolate the rest of the
application from code related to Amazon DynamoDB items. Thus, these functions help you decouple
your application code from the details of the data store layer.
The architectural pattern described here is also referred to as the model-view-controller (MVC) UI pattern. In this case, the Game object instances (representing data) are the model, and the HTML page is the view. The controller is divided into two files. The application.py file has the controller logic for the Flask framework, and the business logic is isolated in the gameController.py file. That is, the application stores everything that has to do with the DynamoDB SDK in its own separate file in the dynamodb folder.
Let us review the three functions and how they query the Games table using global secondary indexes to
retrieve relevant data.
The getGameInvites function retrieves the list of the 10 most recent pending invitations. These games have been created by users, but the opponents have not accepted the game invitations. For these games, the status remains PENDING until the opponent accepts the invite. If the opponent declines the invite, the application removes the corresponding item from the table.
• It specifies the OpponentId-StatusDate-index index to use with the following index key values
and comparison operators:
• The partition key is OpponentId and takes the index key user ID.
• The sort key is StatusDate and takes the comparison operator and index key value
beginswith="PENDING_".
You use the OpponentId-StatusDate-index index to retrieve games to which the logged-in user is
invited—that is, where the logged-in user is the opponent.
• The query limits the result to 10 items.
gameInvitesIndex = self.cm.getGamesTable().query(
Opponent__eq=user,
StatusDate__beginswith="PENDING_",
index="OpponentId-StatusDate-index",
limit=10)
In the index, for each OpponentId (the partition key) DynamoDB keeps items sorted by StatusDate
(the sort key). Therefore, the games that the query returns will be the 10 most recent games.
After an opponent accepts a game invitation, the game status changes to IN_PROGRESS. After the game
completes, the status changes to FINISHED.
Queries to find games that are either in progress or finished are the same except for the different status
value. Therefore, the application defines the getGamesWithStatus function, which takes the status
value as a parameter.
The following section discusses in-progress games, but the same description also applies to finished
games.
A list of in-progress games for a given user includes both games that the user is hosting and games in which the user is the opponent.
The getGamesWithStatus function runs the following two queries, each time using the appropriate
secondary index.
• The function queries the Games table using the HostId-StatusDate-index index. For the index,
the query specifies primary key values—both the partition key (HostId) and sort key (StatusDate)
values, along with comparison operators.
oppGamesInProgress = self.cm.getGamesTable().query(Opponent__eq=user,
StatusDate__beginswith=status,
index="OpponentId-StatusDate-index",
limit=10)
• The function then combines the two lists, sorts the result, and creates Game objects for the first 10 items (at most). It then returns the resulting list to the calling function (that is, index).
games = self.mergeQueries(hostGamesInProgress,
oppGamesInProgress)
return games
Game Page
The game page is where the user plays tic-tac-toe games. It shows the game grid along with game-
relevant information. The following screenshot shows an example game in progress:
In this case, the page shows the user as host and the game status as PENDING, waiting for the
opponent to accept.
• The user accepts one of the pending invitations on the home page.
In this case, the page shows the user as the opponent and the game status as IN_PROGRESS.
A user selection on the board generates a form POST request to the application. That is, Flask calls the
selectSquare function (in application.py) with the HTML form data. This function, in turn, calls the
updateBoardAndTurn function (in gameController.py) to update the game item as follows:
The function returns true if the item update was successful; otherwise, it returns false. Note the
following about the updateBoardAndTurn function:
• The function calls the update_item function of the AWS SDK for Python to make a finite set of
updates to an existing item. The function maps to the UpdateItem operation in DynamoDB. For more
information, see UpdateItem.
Note
The difference between the UpdateItem and PutItem operations is that PutItem replaces
the entire item. For more information, see PutItem.
• The new attribute to add, specific to the current user move, and its value (for example, TopLeft="X").
attributeUpdates = {
position : {
"Action" : "PUT",
"Value" : { "S" : representation }
}
}
self.cm.db.update_item("Games", key=key,
attribute_updates=attributeUpdates,
expected=expectations)
After the function returns, the selectSquare function calls redirect as shown in the following example:
redirect("/game="+gameId)
This call causes the browser to refresh. As part of this refresh, the application checks to see if the game
has ended in a win or draw. If it has, the application will update the game item accordingly.
In the preceding sections, you deployed and tested the Tic-Tac-Toe application locally on your computer
using DynamoDB Local. Now, you deploy the application in production as follows:
• Deploy the application using Elastic Beanstalk, an easy-to-use service for deploying and scaling web
applications and services. For more information, go to Deploying a Flask Application to AWS Elastic
Beanstalk.
Elastic Beanstalk will launch one or more Amazon Elastic Compute Cloud (Amazon EC2) instances,
which you configure through Elastic Beanstalk, on which your Tic-Tac-Toe application will run.
• Using the Amazon DynamoDB service, create a Games table that exists on AWS rather than locally on
your computer.
In addition, you also have to configure permissions. Any AWS resources you create, such as the Games
table in DynamoDB, are private by default. Only the resource owner, that is the AWS account that created
the Games table, can access this table. Thus, by default your Tic-Tac-Toe application cannot update the
Games table.
To grant necessary permissions, you will create an AWS Identity and Access Management (IAM) role and
grant this role permissions to access the Games table. Your Amazon EC2 instance first assumes this role.
In response, AWS returns temporary security credentials that the Amazon EC2 instance can use to update
the Games table on behalf of the Tic-Tac-Toe application. When you configure your Elastic Beanstalk
application, you specify the IAM role that the Amazon EC2 instance or instances can assume. For more
information about IAM roles, go to IAM Roles for Amazon EC2 in the Amazon EC2 User Guide for Linux
Instances.
Note
Before you create Amazon EC2 instances for the Tic-Tac-Toe application, you must first decide
the AWS region where you want Elastic Beanstalk to create the instances. After you create the
Elastic Beanstalk application, you provide the same region name and endpoint in a configuration
file. The Tic-Tac-Toe application uses information in this file to create the Games table and
send subsequent requests in a specific AWS region. Both the DynamoDB Games table and the
Amazon EC2 instances that Elastic Beanstalk launches must be in the same AWS region. For a list
of available regions, go to Amazon DynamoDB in the Amazon Web Services General Reference.
1. Create an IAM role using the AWS IAM service. You will attach a policy to this role granting
permissions for DynamoDB actions to access the Games table.
2. Bundle the Tic-Tac-Toe application code and a configuration file, and create a .zip file. You use this
.zip file to give the Tic-Tac-Toe application code to Elastic Beanstalk to put on your servers. For more
information on creating a bundle, go to Creating an Application Source Bundle in the AWS Elastic
Beanstalk Developer Guide.
In the configuration file (beanstalk.config), you provide AWS region and endpoint information.
The Tic-Tac-Toe application uses this information to determine which DynamoDB region to talk to.
3. Set up the Elastic Beanstalk environment. Elastic Beanstalk will launch an Amazon EC2 instance
or instances and deploy your Tic-Tac-Toe application bundle on them. After the Elastic Beanstalk
environment is ready, you provide the configuration file name by adding the CONFIG_FILE
environment variable.
4. Create the DynamoDB table. Using the Amazon DynamoDB service, you create the Games table on
AWS, rather than locally on your computer. Remember, this table has a simple primary key made of
the GameId partition key of string type.
5. Test the game in production.
{
"Version":"2012-10-17",
"Statement":[
{
"Action":[
"dynamodb:ListTables"
],
"Effect":"Allow",
"Resource":"*"
},
{
"Action":[
"dynamodb:*"
],
"Effect":"Allow",
"Resource":[
"arn:aws:dynamodb:us-west-2:922852403271:table/Games",
"arn:aws:dynamodb:us-west-2:922852403271:table/Games/index/*"
]
}
]
}
For further instructions, go to Creating a Role for an AWS Service (AWS Management Console) in the IAM
User Guide.
After you extract all files, note that you will have a code folder. To hand off this folder to Elastic Beanstalk, you will bundle the contents of this folder as a .zip file. First, you need to add a configuration file to that folder. Your application will use the region and endpoint information to create a DynamoDB table in the specified region and make subsequent table operation requests using the specified endpoint.
[dynamodb]
region=<AWS region>
endpoint=<DynamoDB endpoint>
[dynamodb]
region=us-west-2
endpoint=dynamodb.us-west-2.amazonaws.com
For a list of available regions, go to Amazon DynamoDB in the Amazon Web Services General
Reference.
Important
The region specified in the configuration file is the location where the Tic-Tac-Toe
application creates the Games table in DynamoDB. You must create the Elastic Beanstalk
application discussed in the next section in the same region.
Note
When you create your Elastic Beanstalk application, you will request to launch an
environment where you can choose the environment type. To test the Tic-Tac-Toe example
application, you can choose the Single Instance environment type, skip the following, and
go to the next step.
However, note that the Load balancing, autoscaling environment type provides a highly
available and scalable environment, something you should consider when you create
and deploy other applications. If you choose this environment type, you will also need to
generate a UUID and add it to the configuration file as shown following:
[dynamodb]
region=us-west-2
endpoint=dynamodb.us-west-2.amazonaws.com
[flask]
secret_key= 284e784d-1a25-4a19-92bf-8eeb7a9example
In client-server communication, when the server sends a response, for security's sake it sends a signed cookie that the client returns to the server in the next request. When there is only one server, it can generate an encryption key locally when it starts. When there are many servers, they all need to know the same encryption key; otherwise, they can't read cookies set by their peer servers. By adding secret_key to the configuration file, we tell all servers to use this encryption key.
3. Zip the content of the root folder of the application (which includes the beanstalk.config file)—
for example, TicTacToe.zip.
4. Upload the .zip file to an Amazon Simple Storage Service (Amazon S3) bucket. In the next section,
you provide this .zip file to Elastic Beanstalk to upload on the server or servers.
For instructions on how to upload to an Amazon S3 bucket, go to the Create a Bucket and Add an
Object to a Bucket topics in the Amazon Simple Storage Service Getting Started Guide.
1. Type the following custom URL to open the Elastic Beanstalk console and set up the environment:
https://console.aws.amazon.com/elasticbeanstalk/?region=<AWS-Region>#/newApplication
?applicationName=TicTacToeyour-name
&solutionStackName=Python
&sourceBundleUrl=https://s3.amazonaws.com/<bucket-name>/TicTacToe.zip
&environmentType=SingleInstance
&instanceType=t1.micro
For more information about custom URLs, go to Constructing a Launch Now URL in the AWS Elastic
Beanstalk Developer Guide. For the URL, note the following:
• You will need to provide an AWS region name (the same as the one you provided in the
configuration file), an Amazon S3 bucket name, and the object name.
• For testing, the URL requests the SingleInstance environment type, and t1.micro as the instance
type.
• The application name must be unique. Thus, in the preceding URL, we suggest you append your name to applicationName.
Doing this opens the Elastic Beanstalk console. In some cases, you might need to sign in.
2. In the Elastic Beanstalk console, choose Review and Launch, and then choose Launch.
3. Note the URL for future reference. This URL opens your Tic-Tac-Toe application home page.
4. Configure the Tic-Tac-Toe application so it knows the location of the configuration file.
a. Choose the gear box next to Software Configuration, as shown in the following screenshot.
b. At the end of the Environment Properties section, type CONFIG_FILE and its value
beanstalk.config, and then choose Save.
http://<env-name>.elasticbeanstalk.com
Make sure that you clear all cookies in your browser window so you won't be logged in as the same user.
9. Type the same URL to open the application home page, as shown in the following example:
http://<env-name>.elasticbeanstalk.com
Both testuser1 and testuser2 can play the game. For each move, the application will save the move
in the corresponding item in the Games table.
If you are done testing, you can remove the resources you created to test the Tic-Tac-Toe application to
avoid incurring any charges.
The ability to export and import data is useful in many scenarios. For example, suppose you want to
maintain a baseline set of data, for testing purposes. You could put the baseline data into a DynamoDB
table and export it to Amazon S3. Then, after you run an application that modifies the test data, you
could "reset" the data set by importing the baseline from Amazon S3 back into the DynamoDB table.
Another example involves accidental deletion of data, or even an accidental DeleteTable operation. In
these cases, you could restore the data from a previous export file in Amazon S3. You can even copy data
from a DynamoDB table in one AWS region, store the data in Amazon S3, and then import the data from
Amazon S3 to an identical DynamoDB table in a second region. Applications in the second region could
then access their nearest DynamoDB endpoint and work with their own copy of the data, with reduced
network latency.
The following diagram shows an overview of exporting and importing DynamoDB data using AWS Data
Pipeline.
To export a DynamoDB table, you use the AWS Data Pipeline console to create a new pipeline. The
pipeline launches an Amazon EMR cluster to perform the actual export. Amazon EMR reads the data
from DynamoDB, and writes the data to an export file in an Amazon S3 bucket.
The process is similar for an import, except that the data is read from the Amazon S3 bucket and written
to the DynamoDB table.
Important
When you export or import DynamoDB data, you will incur additional costs for the underlying
AWS services that are used:
• Amazon EMR — runs a managed Hadoop cluster to perform reads and writes between DynamoDB and Amazon S3. The cluster configuration is one m3.xlarge instance master node and one m3.xlarge instance core node.
For more information see AWS Data Pipeline Pricing, Amazon EMR Pricing, and Amazon S3
Pricing.
You can also control access by creating IAM policies and attaching them to IAM users or groups. These policies let you specify which users are allowed to import and export your DynamoDB data.
Important
The IAM user that performs the exports and imports must have an active AWS access key ID and secret access key. For more information, see Administering Access Keys for IAM Users in the IAM User Guide.
• DataPipelineDefaultRole — the actions that your pipeline can take on your behalf.
• DataPipelineDefaultResourceRole — the AWS resources that the pipeline will provision on your behalf.
For exporting and importing DynamoDB data, these resources include an Amazon EMR cluster and the
Amazon EC2 instances associated with that cluster.
If you have never used AWS Data Pipeline before, you will need to create DataPipelineDefaultRole and
DataPipelineDefaultResourceRole yourself. Once you have created these roles, you can use them any time
you want to export or import DynamoDB data.
Note
If you have previously used the AWS Data Pipeline console to create a pipeline, then
DataPipelineDefaultRole and DataPipelineDefaultResourceRole were created for you at that time.
No further action is required; you can skip this section and begin creating pipelines using the
DynamoDB console. For more information, see Exporting Data From DynamoDB to Amazon
S3 (p. 933) and Importing Data From Amazon S3 to DynamoDB (p. 934).
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. From the IAM Console Dashboard, click Roles.
3. Click Create Role and do the following:
Now that you have created these roles, you can begin creating pipelines using the DynamoDB console.
For more information, see Exporting Data From DynamoDB to Amazon S3 (p. 933) and Importing Data
From Amazon S3 to DynamoDB (p. 934).
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. From the IAM Console Dashboard, click Users and select the user you want to modify.
3. In the Permissions tab, click Attach Policy.
4. In the Attach Policy panel, select AmazonDynamoDBFullAccesswithDataPipeline and click
Attach Policy.
Note
You can use a similar procedure to attach this managed policy to a group, rather than to a user.
For example, suppose that you want to allow an IAM user to export and import only the Forum, Thread,
and Reply tables. This procedure describes how to create a custom policy so that a user can work with
those tables, but no others.
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. From the IAM Console Dashboard, click Policies and then click Create Policy.
3. In the Create Policy panel, go to Copy an AWS Managed Policy and click Select.
4. In the Copy an AWS Managed Policy panel, go to
AmazonDynamoDBFullAccesswithDataPipeline and click Select.
5. In the Review Policy panel, do the following:
a. Review the autogenerated Policy Name and Description. If you want, you can modify these
values.
b. In the Policy Document text box, edit the policy to restrict access to specific tables. By default,
the policy permits all DynamoDB actions on all of your tables:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cloudwatch:DeleteAlarms",
"cloudwatch:DescribeAlarmHistory",
"cloudwatch:DescribeAlarms",
"cloudwatch:DescribeAlarmsForMetric",
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListMetrics",
"cloudwatch:PutMetricAlarm",
"dynamodb:*",
"sns:CreateTopic",
"sns:DeleteTopic",
"sns:ListSubscriptions",
"sns:ListSubscriptionsByTopic",
"sns:ListTopics",
"sns:Subscribe",
"sns:Unsubscribe"
],
"Effect": "Allow",
"Resource": "*",
"Sid": "DDBConsole"
},
"dynamodb:*",
Next, construct a new Action that allows access to only the Forum, Thread and Reply tables:
{
"Action": [
"dynamodb:*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/Forum",
"arn:aws:dynamodb:us-west-2:123456789012:table/Thread",
"arn:aws:dynamodb:us-west-2:123456789012:table/Reply"
]
},
Note
Replace us-west-2 with the region in which your DynamoDB tables reside. Replace
123456789012 with your AWS account number.
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dynamodb:*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:dynamodb:us-west-2:123456789012:table/Forum",
"arn:aws:dynamodb:us-west-2:123456789012:table/Thread",
"arn:aws:dynamodb:us-west-2:123456789012:table/Reply"
]
},
{
"Action": [
"cloudwatch:DeleteAlarms",
"cloudwatch:DescribeAlarmHistory",
"cloudwatch:DescribeAlarms",
"cloudwatch:DescribeAlarmsForMetric",
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListMetrics",
"cloudwatch:PutMetricAlarm",
"sns:CreateTopic",
"sns:DeleteTopic",
"sns:ListSubscriptions",
"sns:ListSubscriptionsByTopic",
"sns:ListTopics",
"sns:Subscribe",
"sns:Unsubscribe"
],
"Effect": "Allow",
"Resource": "*",
"Sid": "DDBConsole"
},
6. When the policy settings are as you want them, click Create Policy.
After you have created the policy, you can attach it to an IAM user.
1. From the IAM Console Dashboard, click Users and select the user you want to modify.
2. In the Permissions tab, click Attach Policy.
3. In the Attach Policy panel, select the name of your policy and click Attach Policy.
Note
You can use a similar procedure to attach your policy to a group, rather than to a user.
1. Sign in to the AWS Management Console and open the AWS Data Pipeline console at https://
console.aws.amazon.com/datapipeline/.
2. If you do not already have any pipelines in the current AWS region, choose Get started now.
Otherwise, if you already have at least one pipeline, choose Create new pipeline.
3. On the Create Pipeline page, do the following:
a. In the Name field, type a name for your pipeline. For example: MyDynamoDBExportPipeline.
b. For the Source parameter, select Build using a template. From the drop-down template list,
choose Export DynamoDB table to S3.
c. In the Source DynamoDB table name field, type the name of the DynamoDB table that you
want to export.
d. In the Output S3 Folder text box, enter an Amazon S3 URI where the export file will be written.
For example: s3://mybucket/exports
e. In the S3 location for logs text box, enter an Amazon S3 URI where the log file for the export will be written. For example: s3://mybucket/logs/
The URI format for S3 Log Folder is the same as for Output S3 Folder. The URI must resolve to a folder; log files cannot be written to the top level of the S3 bucket.
4. When the settings are as you want them, click Activate.
Your pipeline will now be created; this process can take several minutes to complete. You can monitor
the progress in the AWS Data Pipeline console.
When the export has finished, you can go to the Amazon S3 console to view your export file. The output
file name is an identifier value with no extension, such as this example:
ae10f955-fb2f-4790-9b11-fbfea01a871e_000000. The internal format of this file is described at
Verify Data Export File in the AWS Data Pipeline Developer Guide.
We will use the term source table for the original table from which the data was exported, and
destination table for the table that will receive the imported data. You can import data from an export
file in Amazon S3, provided that all of the following are true:
• The destination table already exists. (The import process will not create the table for you.)
• The destination table has the same key schema as the source table.
The destination table does not have to be empty. However, the import process will replace any data
items in the table that have the same keys as the items in the export file. For example, suppose you have
a Customer table with a key of CustomerId, and that there are only three items in the table (CustomerId
1, 2, and 3). If your export file also contains data items for CustomerId 1, 2, and 3, the items in the
destination table will be replaced with those from the export file. If the export file also contains a data
item for CustomerId 4, then that item will be added to the table.
The destination table can be in a different AWS region. For example, suppose you have a Customer table
in the US West (Oregon) region and export its data to Amazon S3. You could then import that data into
an identical Customer table in the EU (Ireland) region. This is referred to as a cross-region export and
import. For a list of AWS regions, go to Regions and Endpoints in the AWS General Reference.
Note that the AWS Management Console lets you export multiple source tables at once. However, you
can only import one table at a time.
1. Sign in to the AWS Management Console and open the AWS Data Pipeline console at https://
console.aws.amazon.com/datapipeline/.
2. (Optional) If you want to perform a cross region import, go to the upper right corner of the window
and choose the destination region.
3. Choose Create new pipeline.
4. On the Create Pipeline page, do the following:
a. In the Name field, type a name for your pipeline. For example: MyDynamoDBImportPipeline.
b. For the Source parameter, select Build using a template. From the drop-down template list,
choose Import DynamoDB backup data from S3.
c. In the Input S3 Folder text box, enter an Amazon S3 URI where the export file can be found. For
example: s3://mybucket/exports
The import job will expect to find a file at the specified Amazon S3 location. The internal format
of the file is described at Verify Data Export File in the AWS Data Pipeline Developer Guide.
d. In the Target DynamoDB table name field, type the name of the DynamoDB table into which
you want to import the data.
e. In the S3 location for logs text box, enter an Amazon S3 URI where the log file for the import
will be written. For example: s3://mybucket/logs/
The URI format for S3 Log Folder is the same as for Output S3 Folder. The URI must resolve to
a folder; log files cannot be written to the top level of the S3 bucket.
5. When the settings are as you want them, click Activate.
Your pipeline will now be created; this process can take several minutes to complete. The import job will
begin immediately after the pipeline has been created.
Troubleshooting
This section covers some basic failure modes and troubleshooting for DynamoDB exports and imports.
If an error occurs during an export or import, the pipeline status in the AWS Data Pipeline console will
display as ERROR. If this happens, click the name of the failed pipeline to go to its detail page. This will
show details about all of the steps in the pipeline, and the status of each one. In particular, examine any
execution stack traces that you see.
Finally, go to your Amazon S3 bucket and look for any export or import log files that were written there.
The following are some common issues that may cause a pipeline to fail, along with corrective actions. To
diagnose your pipeline, compare the errors you have seen with the issues noted below.
• For an import, ensure that the destination table already exists, and the destination table has the same
key schema as the source table. These conditions must be met, or the import will fail.
• Ensure that the Amazon S3 bucket you specified has been created, and that you have read and write
permissions on it.
• The pipeline might have exceeded its execution timeout. (You set this parameter when you created the
pipeline.) For example, you might have set the execution timeout for 1 hour, but the export job might
have required more time than this. Try deleting and then re-creating the pipeline, but with a longer
execution timeout interval this time.
• If you restore from an Amazon S3 bucket that is not the original bucket the export was written to (for
example, a bucket that contains a copy of the export), update the manifest file so that it points to the
correct bucket.
• You might not have the correct permissions for performing an export or import. For more information,
see Prerequisites to Export and Import Data (p. 930).
• You might have reached a resource limit in your AWS account, such as the maximum number of
Amazon EC2 instances or the maximum number of AWS Data Pipeline pipelines. For more information,
including how to request increases in these limits, see AWS Service Limits in the AWS General
Reference.
Note
For more details on troubleshooting a pipeline, go to Troubleshooting in the AWS Data Pipeline
Developer Guide.
AWS Data Pipeline offers several templates for creating pipelines; the templates relevant to DynamoDB
are Export DynamoDB table to S3 and Import DynamoDB backup data from S3.
For up-to-date instructions on the DynamoDB Storage Backend for JanusGraph, see the README.md file.
If you need to write an expression containing an attribute name that conflicts with a DynamoDB reserved
word, you can define an expression attribute name to use in the place of the reserved word. For more
information, see Expression Attribute Names (p. 387). The following keywords are reserved for use by
DynamoDB; do not use any of these words as attribute names in expressions:
ABORT
ABSOLUTE
ACTION
ADD
AFTER
AGENT
AGGREGATE
ALL
ALLOCATE
ALTER
ANALYZE
AND
ANY
ARCHIVE
ARE
ARRAY
AS
ASC
ASCII
ASENSITIVE
ASSERTION
ASYMMETRIC
AT
ATOMIC
ATTACH
ATTRIBUTE
AUTH
AUTHORIZATION
AUTHORIZE
AUTO
AVG
BACK
BACKUP
BASE
BATCH
BEFORE
BEGIN
BETWEEN
BIGINT
BINARY
BIT
BLOB
BLOCK
BOOLEAN
BOTH
BREADTH
BUCKET
BULK
BY
BYTE
CALL
CALLED
CALLING
CAPACITY
CASCADE
CASCADED
CASE
CAST
CATALOG
CHAR
CHARACTER
CHECK
CLASS
CLOB
CLOSE
CLUSTER
CLUSTERED
CLUSTERING
CLUSTERS
COALESCE
COLLATE
COLLATION
COLLECTION
COLUMN
COLUMNS
COMBINE
COMMENT
COMMIT
COMPACT
COMPILE
COMPRESS
CONDITION
CONFLICT
CONNECT
CONNECTION
CONSISTENCY
CONSISTENT
CONSTRAINT
CONSTRAINTS
CONSTRUCTOR
CONSUMED
CONTINUE
CONVERT
COPY
CORRESPONDING
COUNT
COUNTER
CREATE
CROSS
CUBE
CURRENT
CURSOR
CYCLE
DATA
DATABASE
DATE
DATETIME
DAY
DEALLOCATE
DEC
DECIMAL
DECLARE
DEFAULT
DEFERRABLE
DEFERRED
DEFINE
DEFINED
DEFINITION
DELETE
DELIMITED
DEPTH
DEREF
DESC
DESCRIBE
DESCRIPTOR
DETACH
DETERMINISTIC
DIAGNOSTICS
DIRECTORIES
DISABLE
DISCONNECT
DISTINCT
DISTRIBUTE
DO
DOMAIN
DOUBLE
DROP
DUMP
DURATION
DYNAMIC
EACH
ELEMENT
ELSE
ELSEIF
EMPTY
ENABLE
END
EQUAL
EQUALS
ERROR
ESCAPE
ESCAPED
EVAL
EVALUATE
EXCEEDED
EXCEPT
EXCEPTION
EXCEPTIONS
EXCLUSIVE
EXEC
EXECUTE
EXISTS
EXIT
EXPLAIN
EXPLODE
EXPORT
EXPRESSION
EXTENDED
EXTERNAL
EXTRACT
FAIL
FALSE
FAMILY
FETCH
FIELDS
FILE
FILTER
FILTERING
FINAL
FINISH
FIRST
FIXED
FLATTERN
FLOAT
FOR
FORCE
FOREIGN
FORMAT
FORWARD
FOUND
FREE
FROM
FULL
FUNCTION
FUNCTIONS
GENERAL
GENERATE
GET
GLOB
GLOBAL
GO
GOTO
GRANT
GREATER
GROUP
GROUPING
HANDLER
HASH
HAVE
HAVING
HEAP
HIDDEN
HOLD
HOUR
IDENTIFIED
IDENTITY
IF
IGNORE
IMMEDIATE
IMPORT
IN
INCLUDING
INCLUSIVE
INCREMENT
INCREMENTAL
INDEX
INDEXED
INDEXES
INDICATOR
INFINITE
INITIALLY
INLINE
INNER
INNTER
INOUT
INPUT
INSENSITIVE
INSERT
INSTEAD
INT
INTEGER
INTERSECT
INTERVAL
INTO
INVALIDATE
IS
ISOLATION
ITEM
ITEMS
ITERATE
JOIN
KEY
KEYS
LAG
LANGUAGE
LARGE
LAST
LATERAL
LEAD
LEADING
LEAVE
LEFT
LENGTH
LESS
LEVEL
LIKE
LIMIT
LIMITED
LINES
LIST
LOAD
LOCAL
LOCALTIME
LOCALTIMESTAMP
LOCATION
LOCATOR
LOCK
LOCKS
LOG
LOGED
LONG
LOOP
LOWER
MAP
MATCH
MATERIALIZED
MAX
MAXLEN
MEMBER
MERGE
METHOD
METRICS
MIN
MINUS
MINUTE
MISSING
MOD
MODE
MODIFIES
MODIFY
MODULE
MONTH
MULTI
MULTISET
NAME
NAMES
NATIONAL
NATURAL
NCHAR
NCLOB
NEW
NEXT
NO
NONE
NOT
NULL
NULLIF
NUMBER
NUMERIC
OBJECT
OF
OFFLINE
OFFSET
OLD
ON
ONLINE
ONLY
OPAQUE
OPEN
OPERATOR
OPTION
OR
ORDER
ORDINALITY
OTHER
OTHERS
OUT
OUTER
OUTPUT
OVER
OVERLAPS
OVERRIDE
OWNER
PAD
PARALLEL
PARAMETER
PARAMETERS
PARTIAL
PARTITION
PARTITIONED
PARTITIONS
PATH
PERCENT
PERCENTILE
PERMISSION
PERMISSIONS
PIPE
PIPELINED
PLAN
POOL
POSITION
PRECISION
PREPARE
PRESERVE
PRIMARY
PRIOR
PRIVATE
PRIVILEGES
PROCEDURE
PROCESSED
PROJECT
PROJECTION
PROPERTY
PROVISIONING
PUBLIC
PUT
QUERY
QUIT
QUORUM
RAISE
RANDOM
RANGE
RANK
RAW
READ
READS
REAL
REBUILD
RECORD
RECURSIVE
REDUCE
REF
REFERENCE
REFERENCES
REFERENCING
REGEXP
REGION
REINDEX
RELATIVE
RELEASE
REMAINDER
RENAME
REPEAT
REPLACE
REQUEST
RESET
RESIGNAL
RESOURCE
RESPONSE
RESTORE
RESTRICT
RESULT
RETURN
RETURNING
RETURNS
REVERSE
REVOKE
RIGHT
ROLE
ROLES
ROLLBACK
ROLLUP
ROUTINE
ROW
ROWS
RULE
RULES
SAMPLE
SATISFIES
SAVE
SAVEPOINT
SCAN
SCHEMA
SCOPE
SCROLL
SEARCH
SECOND
SECTION
SEGMENT
SEGMENTS
SELECT
SELF
SEMI
SENSITIVE
SEPARATE
SEQUENCE
SERIALIZABLE
SESSION
SET
SETS
SHARD
SHARE
SHARED
SHORT
SHOW
SIGNAL
SIMILAR
SIZE
SKEWED
SMALLINT
SNAPSHOT
SOME
SOURCE
SPACE
SPACES
SPARSE
SPECIFIC
SPECIFICTYPE
SPLIT
SQL
SQLCODE
SQLERROR
SQLEXCEPTION
SQLSTATE
SQLWARNING
START
STATE
STATIC
STATUS
STORAGE
STORE
STORED
STREAM
STRING
STRUCT
STYLE
SUB
SUBMULTISET
SUBPARTITION
SUBSTRING
SUBTYPE
SUM
SUPER
SYMMETRIC
SYNONYM
SYSTEM
TABLE
TABLESAMPLE
TEMP
TEMPORARY
TERMINATED
TEXT
THAN
THEN
THROUGHPUT
TIME
TIMESTAMP
TIMEZONE
TINYINT
TO
TOKEN
TOTAL
TOUCH
TRAILING
TRANSACTION
TRANSFORM
TRANSLATE
TRANSLATION
TREAT
TRIGGER
TRIM
TRUE
TRUNCATE
TTL
TUPLE
TYPE
UNDER
UNDO
UNION
UNIQUE
UNIT
UNKNOWN
UNLOGGED
UNNEST
UNPROCESSED
UNSIGNED
UNTIL
UPDATE
UPPER
URL
USAGE
USE
USER
USERS
USING
UUID
VACUUM
VALUE
VALUED
VALUES
VARCHAR
VARIABLE
VARIANCE
VARINT
VARYING
VIEW
VIEWS
VIRTUAL
VOID
WAIT
WHEN
WHENEVER
WHERE
WHILE
WINDOW
WITH
WITHIN
WITHOUT
WORK
WRAPPED
WRITE
YEAR
ZONE
With the introduction of expression parameters (see Using Expressions in DynamoDB (p. 383)), several
older parameters have been deprecated. New applications should not use these legacy parameters,
but should use expression parameters instead. (For more information, see Using Expressions in
DynamoDB (p. 383).)
Note
DynamoDB does not allow mixing legacy conditional parameters and expression parameters
in a single call. For example, calling the Query operation with AttributesToGet and
ConditionExpression will result in an error.
The following table shows the DynamoDB APIs that still support these legacy parameters, and which
expression parameter to use instead. This table can be helpful if you are considering updating your
applications so that they use expression parameters instead.
If You Use This API...            With These Legacy Parameters...   Use This Expression Parameter Instead
Query                             KeyConditions                     KeyConditionExpression
Query                             QueryFilter                       FilterExpression
Scan                              ScanFilter                        FilterExpression
PutItem, UpdateItem, DeleteItem   Expected                          ConditionExpression
The following sections provide more information about legacy conditional parameters.
Topics
• AttributesToGet (p. 946)
• AttributeUpdates (p. 947)
• ConditionalOperator (p. 949)
• Expected (p. 949)
• KeyConditions (p. 952)
• QueryFilter (p. 954)
• ScanFilter (p. 955)
• Writing Conditions With Legacy Parameters (p. 956)
AttributesToGet
AttributesToGet is an array of one or more attributes to retrieve from DynamoDB. If no attribute
names are provided, then all attributes will be returned. If any of the requested attributes are not found,
they will not appear in the result.
AttributesToGet allows you to retrieve attributes of type List or Map; however, it cannot retrieve
individual elements within a List or a Map.
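For example, a GetItem request could use AttributesToGet as follows. (This is a minimal sketch; the table, key, and attribute names are illustrative.)
{
    "TableName": "ProductCatalog",
    "Key": { "Id": {"N":"101"} },
    "AttributesToGet": [ "Title", "Price" ]
}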
AttributeUpdates
In an UpdateItem operation, AttributeUpdates are the names of attributes to be modified, the
action to perform on each, and the new value for each. If you are updating an attribute that is an index
key attribute for any indexes on that table, the attribute type must match the index key type defined in
the AttributesDefinition of the table description. You can use UpdateItem to update any non-key
attributes.
Attribute values cannot be null. String and Binary type attributes must have lengths greater than
zero. Set type attributes must not be empty. Requests with empty values will be rejected with a
ValidationException exception.
Each AttributeUpdates element consists of an attribute name to modify, a Value to apply, and an
Action (PUT, DELETE, or ADD) that specifies how to perform the update.
If an item with the specified primary key is found in the table, the following values perform the
following actions:
• PUT - Adds the specified attribute to the item. If the attribute already exists, it is replaced by the new
value.
• DELETE - Removes the attribute and its value, if no value is specified for DELETE. The data type of
the specified value must match the existing value's data type.
If a set of values is specified, then those values are subtracted from the old set. For example, if the
attribute value was the set [a,b,c] and the DELETE action specifies [a,c], then the final attribute
value is [b]. Specifying an empty set is an error.
• ADD - Adds the specified value to the item, if the attribute does not already exist. If the attribute
does exist, then the behavior of ADD depends on the data type of the attribute:
• If the existing attribute is a number, and if Value is also a number, then Value is mathematically
added to the existing attribute. If Value is a negative number, then it is subtracted from the
existing attribute.
Note
If you use ADD to increment or decrement a number value for an item that doesn't exist
before the update, DynamoDB uses 0 as the initial value.
Similarly, if you use ADD for an existing item to increment or decrement an attribute value
that doesn't exist before the update, DynamoDB uses 0 as the initial value. For example,
suppose that the item you want to update doesn't have an attribute named itemcount,
but you decide to ADD the number 3 to this attribute anyway. DynamoDB will create the
itemcount attribute, set its initial value to 0, and finally add 3 to it. The result will be a
new itemcount attribute, with a value of 3.
• If the existing data type is a set, and if Value is also a set, then Value is appended to the existing
set. For example, if the attribute value is the set [1,2], and the ADD action specified [3], then
the final attribute value is [1,2,3]. An error occurs if an ADD action is specified for a set attribute
and the attribute type specified does not match the existing set type.
Both sets must have the same primitive data type. For example, if the existing data type is a set of
strings, Value must also be a set of strings.
If no item with the specified key is found in the table, the following values perform the following
actions:
• PUT - Causes DynamoDB to create a new item with the specified primary key, and then adds the
attribute.
• DELETE - Nothing happens, because attributes cannot be deleted from a nonexistent item. The
operation succeeds, but DynamoDB does not create a new item.
• ADD - Causes DynamoDB to create an item with the supplied primary key and number (or set of
numbers) for the attribute value. The only data types allowed are Number and Number Set.
If you provide any attributes that are part of an index key, then the data types for those attributes must
match those of the schema in the table's attribute definition.
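As a sketch of how these actions combine (the attribute names and values here are illustrative), the following AttributeUpdates map in a single UpdateItem call replaces Price, decrements QuantityOnHand, and removes Color:
"AttributeUpdates": {
    "Price": { "Action": "PUT", "Value": {"N":"80"} },
    "QuantityOnHand": { "Action": "ADD", "Value": {"N":"-1"} },
    "Color": { "Action": "DELETE" }
}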
ConditionalOperator
A logical operator to apply to the conditions in a Expected, QueryFilter or ScanFilter map:
• AND - If all of the conditions evaluate to true, then the entire map evaluates to true.
• OR - If at least one of the conditions evaluate to true, then the entire map evaluates to true.
The operation will succeed only if the entire map evaluates to true.
Note
This parameter does not support attributes of type List or Map.
Expected
Expected is a conditional block for an UpdateItem operation. Expected is a map of attribute/
condition pairs. Each element of the map consists of an attribute name, a comparison operator, and one
or more values. DynamoDB compares the attribute with the value(s) you supplied, using the comparison
operator. For each Expected element, the result of the evaluation is either true or false.
If you specify more than one element in the Expected map, then by default all of the conditions
must evaluate to true. In other words, the conditions are ANDed together. (You can use the
ConditionalOperator parameter to OR the conditions instead. If you do this, then at least one of the
conditions must evaluate to true, rather than all of them.)
If the Expected map evaluates to true, then the conditional operation succeeds; otherwise, it fails.
• AttributeValueList - One or more values to evaluate against the supplied attribute. The number
of values in the list depends on the ComparisonOperator being used.
String value comparisons for greater than, equals, or less than are based on Unicode with UTF-8 binary
encoding. For example, a is greater than A, and a is greater than B.
For type Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary
values.
• ComparisonOperator - A comparator for evaluating attributes in the AttributeValueList. When
performing the comparison, DynamoDB uses strongly consistent reads.
The following comparison operators are available: EQ | NE | LE | LT | GE | GT | NOT_NULL | NULL |
CONTAINS | NOT_CONTAINS | BEGINS_WITH | IN | BETWEEN
• EQ : Equal. EQ is supported for all datatypes, including lists and maps.
AttributeValueList can contain only one AttributeValue element of type String, Number,
Binary, String Set, Number Set, or Binary Set. If an item contains an AttributeValue element
of a different type than the one provided in the request, the value does not match. For example,
{"S":"6"} does not equal {"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2",
"1"]}.
• NE : Not equal. NE is supported for all datatypes, including lists and maps.
AttributeValueList can contain only one AttributeValue of type String, Number, Binary,
String Set, Number Set, or Binary Set. If an item contains an AttributeValue of a different type
than the one provided in the request, the value does not match. For example, {"S":"6"} does not
equal {"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}.
• LE : Less than or equal.
AttributeValueList can contain only one AttributeValue element of type String, Number,
or Binary (not a set type). If an item contains an AttributeValue element of a different type than
the one provided in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
• LT : Less than.
AttributeValueList can contain only one AttributeValue of type String, Number, or Binary
(not a set type). If an item contains an AttributeValue element of a different type than the
one provided in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
• GE : Greater than or equal.
AttributeValueList can contain only one AttributeValue element of type String, Number,
or Binary (not a set type). If an item contains an AttributeValue element of a different type than
the one provided in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
• GT : Greater than.
AttributeValueList can contain only one AttributeValue element of type String, Number,
or Binary (not a set type). If an item contains an AttributeValue element of a different type than
the one provided in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
• NOT_NULL : The attribute exists. NOT_NULL is supported for all datatypes, including lists and maps.
Note
This operator tests for the existence of an attribute, not its data type. If the data type of
attribute "a" is null, and you evaluate it using NOT_NULL, the result is a Boolean true.
This result is because the attribute "a" exists; its data type is not relevant to the NOT_NULL
comparison operator.
• NULL : The attribute does not exist. NULL is supported for all datatypes, including lists and maps.
Note
This operator tests for the nonexistence of an attribute, not its data type. If the data type
of attribute "a" is null, and you evaluate it using NULL, the result is a Boolean false. This
is because the attribute "a" exists; its data type is not relevant to the NULL comparison
operator.
• CONTAINS : Checks for a subsequence, or value in a set.
AttributeValueList can contain only one AttributeValue element of type String, Number, or
Binary (not a set type). If the target attribute of the comparison is of type String, then the operator
checks for a substring match. If the target attribute of the comparison is of type Binary, then the
operator looks for a subsequence of the target that matches the input. If the target attribute of the
comparison is a set ("SS", "NS", or "BS"), then the operator evaluates to true if it finds an exact match
with any member of the set.
CONTAINS is supported for lists: When evaluating "a CONTAINS b", "a" can be a list; however, "b"
cannot be a set, a map, or a list.
• NOT_CONTAINS : Checks for absence of a subsequence, or absence of a value in a set.
AttributeValueList can contain only one AttributeValue element of type String, Number,
or Binary (not a set type). If the target attribute of the comparison is a String, then the operator
checks for the absence of a substring match. If the target attribute of the comparison is Binary, then
the operator checks for the absence of a subsequence of the target that matches the input. If the
target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operator evaluates to true if
it does not find an exact match with any member of the set.
NOT_CONTAINS is supported for lists: When evaluating "a NOT CONTAINS b", "a" can be a list;
however, "b" cannot be a set, a map, or a list.
• BEGINS_WITH : Checks for a prefix.
AttributeValueList can contain only one AttributeValue of type String or Binary (not a
Number or a set type). The target attribute of the comparison must be of type String or Binary (not
a Number or a set type).
• IN : Checks for matching elements in a list.
AttributeValueList can contain one or more AttributeValue elements of type String, Number,
or Binary (not a set type). These values are compared against an existing attribute of an item. If any
element of the input list is equal to the item attribute, the expression evaluates to true.
• BETWEEN : Greater than or equal to the first value, and less than or equal to the second value.
AttributeValueList must contain two AttributeValue elements of the same type, either
String, Number, or Binary (not a set type). A target attribute matches if the target value is greater
than, or equal to, the first element and less than, or equal to, the second element. If an item
contains an AttributeValue element of a different type than the one provided in the request, the
value does not match. For example, {"S":"6"} does not compare to {"N":"6"}. Also, {"N":"6"}
does not compare to {"NS":["6", "2", "1"]}
The Value and Exists parameters are incompatible with AttributeValueList and
ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a
ValidationException exception.
Note
This parameter does not support attributes of type List or Map.
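For example, the following Expected map (the attribute name and value are illustrative) allows the operation to proceed only if the item's Price is less than or equal to 100:
"Expected": {
    "Price": {
        "ComparisonOperator": "LE",
        "AttributeValueList": [ {"N":"100"} ]
    }
}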
KeyConditions
KeyConditions are the selection criteria for a Query operation. For a query on a table, you can have
conditions only on the table primary key attributes. You must provide the partition key name and value
as an EQ condition. You can optionally provide a second condition, referring to the sort key.
Note
If you don't provide a sort key condition, all of the items that match the partition key will be
retrieved. If a FilterExpression or QueryFilter is present, it will be applied after the items
are retrieved.
For a query on an index, you can have conditions only on the index key attributes. You must provide the
index partition key name and value as an EQ condition. You can optionally provide a second condition,
referring to the index sort key.
Each KeyConditions element consists of an attribute name to compare, along with the following:
• AttributeValueList - One or more values to evaluate against the supplied attribute. The number
of values in the list depends on the ComparisonOperator being used.
String value comparisons for greater than, equals, or less than are based on Unicode with UTF-8 binary
encoding. For example, a is greater than A, and a is greater than B.
For Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values.
• ComparisonOperator - A comparator for evaluating attributes, for example, equals, greater than,
less than, and so on.
The following comparison operators are available: EQ | LE | LT | GE | GT | BEGINS_WITH | BETWEEN
• EQ : Equal.
AttributeValueList can contain only one AttributeValue of type String, Number, or Binary
(not a set type). If an item contains an AttributeValue element of a different type than the
one specified in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not equal {"NS":["6", "2", "1"]}.
• LE : Less than or equal.
AttributeValueList can contain only one AttributeValue element of type String, Number,
or Binary (not a set type). If an item contains an AttributeValue element of a different type than
the one provided in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
• LT : Less than.
AttributeValueList can contain only one AttributeValue of type String, Number, or Binary
(not a set type). If an item contains an AttributeValue element of a different type than the
one provided in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
• GE : Greater than or equal.
AttributeValueList can contain only one AttributeValue element of type String, Number,
or Binary (not a set type). If an item contains an AttributeValue element of a different type than
the one provided in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
• GT : Greater than.
AttributeValueList can contain only one AttributeValue element of type String, Number,
or Binary (not a set type). If an item contains an AttributeValue element of a different type than
the one provided in the request, the value does not match. For example, {"S":"6"} does not equal
{"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
• BEGINS_WITH : Checks for a prefix.
AttributeValueList can contain only one AttributeValue of type String or Binary (not a
Number or a set type). The target attribute of the comparison must be of type String or Binary (not
a Number or a set type).
• BETWEEN : Greater than or equal to the first value, and less than or equal to the second value.
AttributeValueList must contain two AttributeValue elements of the same type, either
String, Number, or Binary (not a set type). A target attribute matches if the target value is greater
than, or equal to, the first element and less than, or equal to, the second element. If an item
contains an AttributeValue element of a different type than the one provided in the request, the
value does not match. For example, {"S":"6"} does not compare to {"N":"6"}. Also, {"N":"6"}
does not compare to {"NS":["6", "2", "1"]}
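For example, a Query on a table with a ForumName partition key and a LastPostDateTime sort key (a sketch; the key names and values are illustrative) might use the following KeyConditions:
"KeyConditions": {
    "ForumName": {
        "ComparisonOperator": "EQ",
        "AttributeValueList": [ {"S":"Amazon DynamoDB"} ]
    },
    "LastPostDateTime": {
        "ComparisonOperator": "BETWEEN",
        "AttributeValueList": [ {"S":"2015-01-01"}, {"S":"2015-12-31"} ]
    }
}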
QueryFilter
In a Query operation, QueryFilter is a condition that evaluates the query results after the items are
read and returns only the desired values.
If you provide more than one condition in the QueryFilter map, then by default all of the
conditions must evaluate to true. In other words, the conditions are ANDed together. (You can use the
ConditionalOperator (p. 949) parameter to OR the conditions instead. If you do this, then at least one
of the conditions must evaluate to true, rather than all of them.)
Note that QueryFilter does not allow key attributes. You cannot define a filter condition on a partition
key or a sort key.
Each QueryFilter element consists of an attribute name to compare, along with the following:
• AttributeValueList - One or more values to evaluate against the supplied attribute. The number
of values in the list depends on the operator specified in ComparisonOperator.
String value comparisons for greater than, equals, or less than are based on UTF-8 binary encoding.
For example, a is greater than A, and a is greater than B.
For type Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary
values.
For information on specifying data types in JSON, see DynamoDB Low-Level API (p. 216).
• ComparisonOperator - A comparator for evaluating attributes. For example, equals, greater than,
less than, etc.
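For example, the following QueryFilter (the Views attribute is illustrative) discards any retrieved items whose Views value is below 3:
"QueryFilter": {
    "Views": {
        "ComparisonOperator": "GE",
        "AttributeValueList": [ {"N":"3"} ]
    }
}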
ScanFilter
In a Scan operation, ScanFilter is a condition that evaluates the scan results and returns only the
desired values.
Note
This parameter does not support attributes of type List or Map.
If you specify more than one condition in the ScanFilter map, then by default all of the conditions
must evaluate to true. In other words, the conditions are ANDed together. (You can use the
ConditionalOperator (p. 949) parameter to OR the conditions instead. If you do this, then at least one
of the conditions must evaluate to true, rather than all of them.)
Each ScanFilter element consists of an attribute name to compare, along with the following:
• AttributeValueList - One or more values to evaluate against the supplied attribute. The number
of values in the list depends on the operator specified in ComparisonOperator.
String value comparisons for greater than, equals, or less than are based on UTF-8 binary encoding.
For example, a is greater than A, and a is greater than B.
For Binary, DynamoDB treats each byte of the binary data as unsigned when it compares binary values.
For information on specifying data types in JSON, see DynamoDB Low-Level API (p. 216).
• ComparisonOperator - A comparator for evaluating attributes. For example, equals, greater than,
less than, etc.
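For example, the following ScanFilter (the Price attribute is illustrative) returns only items whose Price is less than 100:
"ScanFilter": {
    "Price": {
        "ComparisonOperator": "LT",
        "AttributeValueList": [ {"N":"100"} ]
    }
}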
Simple Conditions
With attribute values, you can write conditions for comparisons against table attributes. A condition
always evaluates to true or false, and consists of:
• The name of an attribute to compare
• A comparison operator
• (Optional) One or more attribute values to compare against
The following sections describe the various comparison operators, along with examples of how to use
them in conditions.
Use the NOT_NULL and NULL operators to check whether an attribute exists, or doesn't exist. Because
there is no value to compare against, do not specify AttributeValueList.
Example
...
"Dimensions": {
ComparisonOperator: "NOT_NULL"
}
...
• EQ - true if an attribute is equal to a value.
AttributeValueList can contain only one value of type String, Number, Binary, String Set, Number
Set, or Binary Set. If an item contains a value of a different type than the one specified in the request,
the value does not match. For example, the string "3" is not equal to the number 3. Also, the number
3 is not equal to the number set [3, 2, 1].
• NE - true if an attribute is not equal to a value.
AttributeValueList can contain only one value of type String, Number, Binary, String Set, Number
Set, or Binary Set. If an item contains a value of a different type than the one specified in the request,
the value does not match.
• LE - true if an attribute is less than or equal to a value.
AttributeValueList can contain only one value of type String, Number, or Binary (not a set). If an
item contains an AttributeValue of a different type than the one specified in the request, the value
does not match.
• LT - true if an attribute is less than a value.
AttributeValueList can contain only one value of type String, Number, or Binary (not a set). If
an item contains a value of a different type than the one specified in the request, the value does not
match.
• GE - true if an attribute is greater than or equal to a value.
AttributeValueList can contain only one value of type String, Number, or Binary (not a set). If
an item contains a value of a different type than the one specified in the request, the value does not
match.
• GT - true if an attribute is greater than a value.
AttributeValueList can contain only one value of type String, Number, or Binary (not a set). If
an item contains a value of a different type than the one specified in the request, the value does not
match.
• CONTAINS - true if a value is present within a set, or if one value contains another.
AttributeValueList can contain only one value of type String, Number, or Binary (not a set). If the
target attribute of the comparison is a String, then the operator checks for a substring match. If the
target attribute of the comparison is Binary, then the operator looks for a subsequence of the target
that matches the input. If the target attribute of the comparison is a set, then the operator evaluates
to true if it finds an exact match with any member of the set.
• NOT_CONTAINS - true if a value is not present within a set, or if one value does not contain another
value.
AttributeValueList can contain only one value of type String, Number, or Binary (not a set). If the
target attribute of the comparison is a String, then the operator checks for the absence of a substring
match. If the target attribute of the comparison is Binary, then the operator checks for the absence of
a subsequence of the target that matches the input. If the target attribute of the comparison is a set,
then the operator evaluates to true if it does not find an exact match with any member of the set.
• BEGINS_WITH - true if the first few characters of an attribute match the provided value. Do not use
this operator for comparing numbers.
AttributeValueList can contain only one value of type String or Binary (not a Number or a set).
The target attribute of the comparison must be a String or Binary (not a Number or a set).
Use these operators to compare an attribute with a value. You must specify an AttributeValueList
consisting of a single value. For most of the operators, this value must be a scalar; however, the EQ and
NE operators also support sets.
Examples
...
"Price": {
ComparisonOperator: "GT",
AttributeValueList: [ {"N":100"} ]
}
...
...
"ProductCategory": {
ComparisonOperator: "BEGINS_WITH",
AttributeValueList: [ {"S":"Bo"} ]
}
...
...
"Color": {
ComparisonOperator: "EQ",
AttributeValueList: [ {"SS": ["Black", "Red", "Green"]} ]
}
...
Note
When comparing set data types, the order of the elements does not matter. DynamoDB will
return only the items with the same set of values, regardless of the order in which you specify
them in your request.
Use this operator (BETWEEN) to determine whether an attribute value is within a range.
AttributeValueList must contain two scalar elements of the same type - String, Number, or Binary
(not a set). A target attribute matches if the target value is greater than, or equal to, the first element
and less than, or equal to, the second element. If an item contains a value of a different type than the
one specified in the request, the value does not match.
Example
The following expression evaluates to true if a product's price is between 100 and 200.
...
"Price": {
ComparisonOperator: "BETWEEN",
AttributeValueList: [ {"N":"100"}, {"N":"200"} ]
}
...
Use this operator (IN) to determine whether the supplied value is within an enumerated list. You can
specify any number of scalar values in AttributeValueList, but they all must be of the same data
type: String, Number, or Binary (not a set). These values are compared against an existing non-set type
attribute of an item; the target attribute must be of the same type and exact value to match, and a
String never matches a String set. If any element of the input list is equal to the item attribute, the
expression evaluates to true.
Example
The following expression evaluates to true if the value for Id is 201, 203, or 205.
...
"Id": {
ComparisonOperator: "IN",
AttributeValueList: [ {"N":"201"}, {"N":"203"}, {"N":"205"} ]
}
...
By default, when you specify more than one condition, all of the conditions must evaluate to true in
order for the entire expression to evaluate to true. In other words, an implicit AND operation takes place.
Example
The following expression evaluates to true if a product is a book which has at least 600 pages. Both of
the conditions must evaluate to true, since they are implicitly ANDed together.
...
"ProductCategory": {
ComparisonOperator: "EQ",
AttributeValueList: [ {"S":"Book"} ]
},
"PageCount": {
ComparisonOperator: "GE",
AttributeValueList: [ {"N":600"} ]
}
...
You can use ConditionalOperator (p. 949) to clarify that an AND operation will take place. The
following example behaves in the same manner as the previous one.
...
"ConditionalOperator" : "AND",
"ProductCategory": {
"ComparisonOperator": "EQ",
"AttributeValueList": [ {"N":"Book"} ]
},
"PageCount": {
"ComparisonOperator": "GE",
"AttributeValueList": [ {"N":600"} ]
}
...
You can also set ConditionalOperator to OR, which means that at least one of the conditions must
evaluate to true.
Example
The following expression evaluates to true if a product is a mountain bike, if it is a particular brand name,
or if its price is greater than 100.
...
ConditionalOperator : "OR",
"BicycleType": {
"ComparisonOperator": "EQ",
"AttributeValueList": [ {"S":"Mountain" ]
},
"Brand": {
"ComparisonOperator": "EQ",
"AttributeValueList": [ {"S":"Brand-Company A" ]
},
"Price": {
"ComparisonOperator": "GT",
"AttributeValueList": [ {"N":"100"} ]
}
...
Note
In a complex expression, the conditions are processed in order, from the first condition to the
last.
You cannot use both AND and OR in a single expression.
The Value and Exists options continue to be supported in DynamoDB; however, they only let
you test for an equality condition, or whether an attribute exists. We recommend that you use
ComparisonOperator and AttributeValueList instead, because these options let you construct a
much wider range of conditions.
Example
A DeleteItem can check to see whether a book is no longer in publication, and only delete it if this
condition is true. Here is an AWS CLI example using a legacy condition (a sketch, assuming a
ProductCatalog table whose items have a numeric Id key and a Boolean InPublication attribute):
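aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{ "Id": {"N":"600"} }' \
    --expected '{
        "InPublication": {
            "Exists": true,
            "Value": {"BOOL": false}
        }
    }'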
The following example does the same thing, but does not use a legacy condition:
aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{ "Id": {"N":"600"} }' \
    --expected '{
        "InPublication": {
            "ComparisonOperator": "EQ",
            "AttributeValueList": [ {"BOOL":false} ]
        }
    }'
Example
A PutItem operation can protect against overwriting an existing item with the same primary key
attributes. Here is an example using a legacy condition (again a sketch, assuming the ProductCatalog
table with a numeric Id key):
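aws dynamodb put-item \
    --table-name ProductCatalog \
    --item '{ "Id": {"N":"500"}, "Title": {"S":"Book 500 Title"} }' \
    --expected '{ "Id": { "Exists": false } }'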
The following example does the same thing, but does not use a legacy condition; it uses the NULL
comparison operator to require that the item does not already exist:
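aws dynamodb put-item \
    --table-name ProductCatalog \
    --item '{ "Id": {"N":"500"}, "Title": {"S":"Book 500 Title"} }' \
    --expected '{ "Id": { "ComparisonOperator": "NULL" } }'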
Note
For conditions in the Expected map, do not use the legacy Value and Exists options
together with ComparisonOperator and AttributeValueList. If you do this, your
conditional write will fail.
New applications should use the current API version (2012-08-10). For more information, see API
Reference (p. 882).
Note
We recommend that you migrate your applications to the latest API version (2012-08-10), since
new DynamoDB features will not be backported to the previous API version.
Topics
• BatchGetItem (p. 963)
• BatchWriteItem (p. 967)
• CreateTable (p. 972)
• DeleteItem (p. 977)
• DeleteTable (p. 981)
BatchGetItem
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
The BatchGetItem operation returns the attributes for multiple items from multiple tables using their
primary keys. The maximum number of items that can be retrieved for a single operation is 100. Also,
the number of items retrieved is constrained by a 1 MB size limit. If the response size limit is exceeded
or a partial result is returned because the table’s provisioned throughput is exceeded, or because of
an internal processing failure, DynamoDB returns an UnprocessedKeys value so you can retry the
operation starting with the next item to get. DynamoDB automatically adjusts the number of items
returned per page to enforce this limit. For example, even if you ask to retrieve 100 items, but each
individual item is 50 KB in size, the system returns 20 items and an appropriate UnprocessedKeys
value so you can get the next page of results. If desired, your application can include its own logic to
assemble the pages of results into one set.
If no items could be processed because of insufficient provisioned throughput on each of the tables
involved in the request, DynamoDB returns a ProvisionedThroughputExceededException error.
Note
By default, BatchGetItem performs eventually consistent reads on every table in the request.
You can set the ConsistentRead parameter to true, on a per-table basis, if you want
consistent reads instead.
BatchGetItem fetches items in parallel to minimize response latencies.
When designing your application, keep in mind that DynamoDB does not guarantee how
attributes are ordered in the returned response. Include the primary key values in the
AttributesToGet for the items in your request to help parse the response by item.
If the requested items do not exist, nothing is returned in the response for those items. Requests
for non-existent items consume the minimum read capacity units according to the type of read.
For more information, see DynamoDB Item Sizes (p. 343).
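For example, to request strongly consistent reads from one of the tables in a batch, you could set ConsistentRead within that table's entry. A minimal sketch, with placeholder table and key values:
{"RequestItems":
    {"Table1":
        {"Keys":
            [{"HashKeyElement": {"S":"KeyValue1"}}],
        "ConsistentRead": true}
    }
}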
Requests
Syntax
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.BatchGetItem
content-type: application/x-amz-json-1.0
{"RequestItems":
{"Table1":
{"Keys":
[{"HashKeyElement": {"S":"KeyValue1"}, "RangeKeyElement":{"N":"KeyValue2"}},
{"HashKeyElement": {"S":"KeyValue3"}, "RangeKeyElement":{"N":"KeyValue4"}},
{"HashKeyElement": {"S":"KeyValue5"}, "RangeKeyElement":{"N":"KeyValue6"}}],
"AttributesToGet":["AttributeName1", "AttributeName2", "AttributeName3"]},
"Table2":
{"Keys":
[{"HashKeyElement": {"S":"KeyValue4"}},
{"HashKeyElement": {"S":"KeyValue5"}}],
"AttributesToGet": ["AttributeName4", "AttributeName5", "AttributeName6"]
}
}
}
Name              Description
RequestItems      A container of the table name and corresponding items to get by primary key.
                  Type: String
                  Default: None
Table             The name of the table containing the items to get.
                  Type: String
                  Default: None
Keys              An array of primary key attribute values that define the items to get.
                  Type: Keys
AttributesToGet   The names of the attributes to return for each item. If omitted, all attributes are returned.
                  Type: Array
ConsistentRead    If set to true, a strongly consistent read is issued for this table; otherwise, an eventually
                  consistent read is used.
                  Type: Boolean
Responses
Syntax
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 855
{"Responses":
{"Table1":
{"Items":
[{"AttributeName1": {"S":"AttributeValue"},
"AttributeName2": {"N":"AttributeValue"},
"AttributeName3": {"SS":["AttributeValue", "AttributeValue", "AttributeValue"]}
},
{"AttributeName1": {"S": "AttributeValue"},
"AttributeName2": {"S": "AttributeValue"},
"AttributeName3": {"NS": ["AttributeValue", "AttributeValue", "AttributeValue"]}
}],
"ConsumedCapacityUnits":1},
"Table2":
{"Items":
[{"AttributeName1": {"S":"AttributeValue"},
"AttributeName2": {"N":"AttributeValue"},
"AttributeName3": {"SS":["AttributeValue", "AttributeValue", "AttributeValue"]}
},
{"AttributeName1": {"S": "AttributeValue"},
"AttributeName2": {"S": "AttributeValue"},
"AttributeName3": {"NS": ["AttributeValue", "AttributeValue","AttributeValue"]}
}],
"ConsumedCapacityUnits":1}
},
"UnprocessedKeys":
{"Table3":
{"Keys":
[{"HashKeyElement": {"S":"KeyValue1"}, "RangeKeyElement":{"N":"KeyValue2"}},
{"HashKeyElement": {"S":"KeyValue3"}, "RangeKeyElement":{"N":"KeyValue4"}},
{"HashKeyElement": {"S":"KeyValue5"}, "RangeKeyElement":{"N":"KeyValue6"}}],
"AttributesToGet":["AttributeName1", "AttributeName2", "AttributeName3"]}
}
}
Name                           Description
Responses                      Table names and the respective items returned from those tables.
                               Type: Map
Table                          The name of a table in the response.
                               Type: String
ConsumedCapacityUnits          The number of read capacity units consumed by the operation for this table.
                               Type: Number
Items                          A container of the attribute name-value pairs returned for each item.
                               Type: Array
UnprocessedKeys: Table: Keys   The primary key attribute values that define
                               the items and the attributes associated with the
                               items. For more information about primary keys,
                               see Primary Key (p. 5).
UnprocessedKeys: Table:        The consistency of the read, as requested for the unprocessed keys.
ConsistentRead                 Type: Boolean
Special Errors
Error                                     Description
ProvisionedThroughputExceededException    Your maximum allowed provisioned throughput for a table was exceeded.
Examples
The following examples show an HTTP POST request and response using the BatchGetItem operation.
For examples using the AWS SDK, see Working with Items in DynamoDB (p. 372).
Sample Request
The following sample requests attributes from two different tables.
{"RequestItems":
{"comp1":
{"Keys":
[{"HashKeyElement":{"S":"Casey"},"RangeKeyElement":{"N":"1319509152"}},
{"HashKeyElement":{"S":"Dave"},"RangeKeyElement":{"N":"1319509155"}},
{"HashKeyElement":{"S":"Riley"},"RangeKeyElement":{"N":"1319509158"}}],
"AttributesToGet":["user","status"]},
"comp2":
{"Keys":
[{"HashKeyElement":{"S":"Julie"}},{"HashKeyElement":{"S":"Mingus"}}],
"AttributesToGet":["user","friends"]}
}
}
Sample Response
The following sample is the response.
HTTP/1.1 200 OK
x-amzn-RequestId: GTPQVRM4VJS792J1UFJTKUBVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 373
Date: Fri, 02 Sep 2011 23:07:39 GMT
{"Responses":
{"comp1":
{"Items":
[{"status":{"S":"online"},"user":{"S":"Casey"}},
{"status":{"S":"working"},"user":{"S":"Riley"}},
{"status":{"S":"running"},"user":{"S":"Dave"}}],
"ConsumedCapacityUnits":1.5},
"comp2":
{"Items":
[{"friends":{"SS":["Elisabeth", "Peter"]},"user":{"S":"Mingus"}},
{"friends":{"SS":["Dave", "Peter"]},"user":{"S":"Julie"}}],
"ConsumedCapacityUnits":1}
},
"UnprocessedKeys":{}
}
BatchWriteItem
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
This operation enables you to put or delete several items across multiple tables in a single call.
To upload one item, you can use PutItem, and to delete one item, you can use DeleteItem. However,
when you want to upload or delete large amounts of data, such as uploading large amounts of
data from Amazon Elastic MapReduce (Amazon EMR) or migrating data from another database into
DynamoDB, BatchWriteItem offers an efficient alternative.
If you use languages such as Java, you can use threads to upload items in parallel. However, this adds
complexity to your application, which must manage the threads. Other languages don't support
threading. For example, if you are using PHP, you must upload or delete items one at a time. In both
situations, BatchWriteItem
provides an alternative where the specified put and delete operations are processed in parallel, giving
you the power of the thread pool approach without having to introduce complexity in your application.
Note that each individual put and delete specified in a BatchWriteItem operation costs the same
in terms of consumed capacity units. However, because BatchWriteItem performs the specified
operations in parallel, you get lower latency. Delete operations on non-existent items consume 1
write capacity unit. For more information about consumed capacity units, see Working with Tables in
DynamoDB (p. 333).
DynamoDB rejects the entire batch write operation if any one of the following is true:
• If one or more tables specified in the BatchWriteItem request do not exist.
• If the primary key attributes specified on an item in the request do not match the corresponding
table's primary key schema.
• If you try to perform multiple operations on the same item in the same BatchWriteItem request. For
example, you cannot put and delete the same item in the same BatchWriteItem request.
• If the total request size exceeds the 1 MB request size (the HTTP payload) limit.
• If any individual item in a batch exceeds the 64 KB item size limit.
Requests
Syntax
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.BatchWriteItem
content-type: application/x-amz-json-1.0
{
"RequestItems" : RequestItems
}
RequestItems
{
"TableName1" : [ Request, Request, ... ],
"TableName2" : [ Request, Request, ... ],
...
}
Request ::=
PutRequest | DeleteRequest
PutRequest ::=
{
"PutRequest" : {
"Item" : {
"Attribute-Name1" : Attribute-Value,
"Attribute-Name2" : Attribute-Value,
...
}
}
}
DeleteRequest ::=
{
"DeleteRequest" : {
"Key" : PrimaryKey-Value
}
}
PrimaryKey-Value ::=
HashTypePK | HashAndRangeTypePK
HashTypePK ::=
{
"HashKeyElement" : Attribute-Value
}
HashAndRangeTypePK ::=
{
"HashKeyElement" : Attribute-Value,
"RangeKeyElement" : Attribute-Value
}
Attribute-Value ::=
String | Numeric | Binary | StringSet | NumberSet | BinarySet
Numeric ::=
{
"N": "Number"
}
String ::=
{
"S": "String"
}
Binary ::=
{
"B": "Base64 encoded binary data"
}
StringSet ::=
{
"SS": [ "String1", "String2", ... ]
}
NumberSet ::=
{
"NS": [ "Number1", "Number2", ... ]
}
BinarySet ::=
{
"BS": [ "Binary1", "Binary2", ... ]
}
In the request body, the RequestItems JSON object describes the operations that you want to perform.
The operations are grouped by tables. You can use BatchWriteItem to update or delete several items
across multiple tables. For each specific write request, you must identify the type of request
(PutRequest or DeleteRequest), followed by detailed information about the operation.
• For a PutRequest, you provide the item, that is, a list of attributes and their values.
• For a DeleteRequest, you provide the primary key name and value.
Responses
Syntax
The following is the syntax of the JSON body returned in the response.
{
"Responses" : ConsumedCapacityUnitsByTable,
"UnprocessedItems" : RequestItems
}
ConsumedCapacityUnitsByTable
{
"TableName1" : { "ConsumedCapacityUnits" : NumericValue },
"TableName2" : { "ConsumedCapacityUnits" : NumericValue },
...
}
RequestItems
This syntax is identical to the one described in the JSON syntax in the request.
Special Errors
No errors specific to this operation.
Examples
The following example shows an HTTP POST request and the response of a BatchWriteItem operation.
The request specifies the following operations on the Reply and the Thread tables:
• Put an item, and delete an item, from the Reply table.
• Put an item into the Thread table.
For examples using the AWS SDK, see Working with Items in DynamoDB (p. 372).
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.BatchWriteItem
content-type: application/x-amz-json-1.0
{
"RequestItems":{
"Reply":[
{
"PutRequest":{
"Item":{
"ReplyDateTime":{
"S":"2012-04-03T11:04:47.034Z"
},
"Id":{
"S":"DynamoDB#DynamoDB Thread 5"
}
}
}
},
{
"DeleteRequest":{
"Key":{
"HashKeyElement":{
"S":"DynamoDB#DynamoDB Thread 4"
},
"RangeKeyElement":{
"S":"oops - accidental row"
}
}
}
}
],
"Thread":[
{
"PutRequest":{
"Item":{
"ForumName":{
"S":"DynamoDB"
},
"Subject":{
"S":"DynamoDB Thread 5"
}
}
}
}
]
}
}
Sample Response
The following example response shows that a put operation succeeded on both the Thread and Reply
tables, and that a delete operation on the Reply table failed (for reasons such as throttling, which
occurs when you exceed the provisioned throughput on the table). Note the following in the JSON
response:
• The Responses object shows one capacity unit was consumed on both the Thread and Reply tables
as a result of the successful put operation on each of these tables.
• The UnprocessedItems object shows the unsuccessful delete operation on the Reply table. You can
then issue a new BatchWriteItem call to address these unprocessed requests.
HTTP/1.1 200 OK
x-amzn-RequestId: G8M9ANLOE5QA26AEUHJKJE0ASBVV4KQNSO5AEMVJF66Q9ASUAAJG
Content-Type: application/x-amz-json-1.0
Content-Length: 536
Date: Thu, 05 Apr 2012 18:22:09 GMT
{
"Responses":{
"Thread":{
"ConsumedCapacityUnits":1.0
},
"Reply":{
"ConsumedCapacityUnits":1.0
}
},
"UnprocessedItems":{
"Reply":[
{
"DeleteRequest":{
"Key":{
"HashKeyElement":{
"S":"DynamoDB#DynamoDB Thread 4"
},
"RangeKeyElement":{
"S":"oops - accidental row"
}
}
}
}
]
}
}
CreateTable
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
The CreateTable operation adds a new table to your account.
The table name must be unique among those associated with the AWS Account issuing the request,
and the AWS region that receives the request (such as dynamodb.us-west-2.amazonaws.com). Each
DynamoDB endpoint is entirely independent. For example, if you have two tables called "MyTable," one
in dynamodb.us-west-2.amazonaws.com and one in dynamodb.us-west-1.amazonaws.com, they are
completely independent and do not share any data.
The CreateTable operation triggers an asynchronous workflow to begin creating the table. DynamoDB
immediately returns the state of the table (CREATING); the table remains in this state until DynamoDB
has finished creating it, at which point the state changes to ACTIVE. Once the table is in the ACTIVE
state, you can perform data plane operations.
Use the DescribeTables (p. 984) operation to check the status of the table.
Requests
Syntax
{"TableName":"Table1",
"KeySchema":
{"HashKeyElement":{"AttributeName":"AttributeName1","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"AttributeName2","AttributeType":"N"}},
"ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":10}
}
Name                 Description
TableName            The name of the table to create.
                     Type: String
KeySchema            The primary key structure for the table, consisting of a required HashKeyElement
                     and an optional RangeKeyElement.
                     Type: Array
ReadCapacityUnits    The number of read capacity units provisioned for the table.
                     Type: Number
WriteCapacityUnits   The number of write capacity units provisioned for the table.
                     Type: Number
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: CSOC7TJPLR0OOKIRLGOHVAICUFVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 311
Date: Tue, 12 Jul 2011 21:31:03 GMT
{"TableDescription":
{"CreationDateTime":1.310506263362E9,
"KeySchema":
{"HashKeyElement":{"AttributeName":"AttributeName1","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"AttributeName2","AttributeType":"N"}},
"ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":10},
"TableName":"Table1",
"TableStatus":"CREATING"
}
}
Name                 Description
CreationDateTime     Date and time (in UNIX epoch format) when the table was created.
                     Type: Number
KeySchema            The primary key structure of the table.
                     Type: Array
ReadCapacityUnits    The number of read capacity units provisioned for the table.
                     Type: Number
WriteCapacityUnits   The number of write capacity units provisioned for the table.
                     Type: Number
TableName            The name of the created table.
                     Type: String
TableStatus          The current state of the table (CREATING).
                     Type: String
Special Errors
Error                     Description
ResourceInUseException    An attempt was made to recreate a table that already exists.
LimitExceededException    The number of simultaneous table requests exceeds the maximum allowed.
Examples
The following example creates a table with a composite primary key containing a string and a number.
For examples using the AWS SDK, see Working with Tables in DynamoDB (p. 333).
Sample Request
{"TableName":"comp-table",
"KeySchema":
{"HashKeyElement":{"AttributeName":"user","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"time","AttributeType":"N"}},
"ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":10}
}
Sample Response
HTTP/1.1 200 OK
x-amzn-RequestId: CSOC7TJPLR0OOKIRLGOHVAICUFVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 311
{"TableDescription":
{"CreationDateTime":1.310506263362E9,
"KeySchema":
{"HashKeyElement":{"AttributeName":"user","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"time","AttributeType":"N"}},
"ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":10},
"TableName":"comp-table",
"TableStatus":"CREATING"
}
}
Related Actions
• DescribeTables (p. 984)
• DeleteTable (p. 981)
DeleteItem
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
Deletes a single item in a table by primary key. You can perform a conditional delete operation that
deletes the item if it exists, or if it has an expected attribute value.
Note
If you specify DeleteItem without attributes or values, all the attributes for the item are
deleted.
Unless you specify conditions, DeleteItem is an idempotent operation; running it multiple
times on the same item or attribute does not result in an error response.
Conditional deletes are useful for only deleting items and attributes if specific conditions are
met. If the conditions are met, DynamoDB performs the delete. Otherwise, the item is not
deleted.
You can perform the expected conditional check on one attribute per operation.
Requests
Syntax
{"TableName":"Table1",
"Key":
{"HashKeyElement":{"S":"AttributeValue1"},"RangeKeyElement":
{"N":"AttributeValue2"}},
"Expected":{"AttributeName3":{"Value":{"S":"AttributeValue3"}}},
"ReturnValues":"ALL_OLD"}
}
Name           Description
TableName      The name of the table containing the item to delete.
               Type: String
Key            The primary key values that define the item. For more information about primary keys,
               see Primary Key (p. 5).
               Type: Keys
Expected       Designates an attribute for a conditional delete. For example, you can check whether
               the Color attribute does not exist:
               "Expected" :
               {"Color":{"Exists":false}}
               Or check whether the Color attribute exists and has the value "Yellow":
               "Expected" :
               {"Color":{"Exists":true,
               "Value":{"S":"Yellow"}}}
               Because the default setting of Exists is true, the following is equivalent:
               "Expected" :
               {"Color":{"Value":
               {"S":"Yellow"}}}
               Note
               If you specify {"Exists":true} without an attribute value to check, DynamoDB
               returns an error.
ReturnValues   If set to ALL_OLD, the content of the deleted item is returned; otherwise, specify NONE.
               Type: String
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: CSOC7TJPLR0OOKIRLGOHVAICUFVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 353
Date: Tue, 12 Jul 2011 21:31:03 GMT
{"Attributes":
{"AttributeName3":{"SS":["AttributeValue3","AttributeValue4","AttributeValue5"]},
"AttributeName2":{"S":"AttributeValue2"},
"AttributeName1":{"N":"AttributeValue1"}
},
"ConsumedCapacityUnits":1
}
Name                    Description
Attributes              If ReturnValues was set to ALL_OLD in the request, the attributes of the item as
                        they appeared before the delete.
ConsumedCapacityUnits   The number of write capacity units consumed by the operation.
                        Type: Number
Special Errors
Error                             Description
ConditionalCheckFailedException   A conditional check failed; the attribute value condition specified
                                  in Expected was not met.
Examples
Sample Request
{"TableName":"comp-table",
"Key":
{"HashKeyElement":{"S":"Mingus"},"RangeKeyElement":{"N":"200"}},
"Expected":
{"status":{"Value":{"S":"shopping"}}},
"ReturnValues":"ALL_OLD"
}
Sample Response
HTTP/1.1 200 OK
x-amzn-RequestId: U9809LI6BBFJA5N2R0TB0P017JVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 353
Date: Tue, 12 Jul 2011 22:31:23 GMT
{"Attributes":
{"friends":{"SS":["Dooley","Ben","Daisy"]},
"status":{"S":"shopping"},
"time":{"N":"200"},
"user":{"S":"Mingus"}
},
"ConsumedCapacityUnits":1
}
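A comparable conditional delete can be sketched with the AWS SDK for Python (boto3), which targets the current API version; there, the Expected clause is expressed as a ConditionExpression. The table and attribute names below are carried over from the sample above.
import boto3

dynamodb = boto3.client("dynamodb")

# Delete the item only if its "status" attribute currently equals
# "shopping"; ReturnValues="ALL_OLD" returns the deleted attributes.
# "#s" is a placeholder because "status" is a reserved word in expressions.
response = dynamodb.delete_item(
    TableName="comp-table",
    Key={"user": {"S": "Mingus"}, "time": {"N": "200"}},
    ConditionExpression="#s = :expected",
    ExpressionAttributeNames={"#s": "status"},
    ExpressionAttributeValues={":expected": {"S": "shopping"}},
    ReturnValues="ALL_OLD",
)
print(response.get("Attributes"))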
Related Actions
• PutItem (p. 991)
DeleteTable
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the
specified table is in the DELETING state until DynamoDB completes the deletion. If the table is in
the ACTIVE state, you can delete it. If a table is in the CREATING or UPDATING state, DynamoDB
returns a ResourceInUseException error. If the specified table does not exist, DynamoDB returns a
ResourceNotFoundException. If the table is already in the DELETING state, no error is returned.
Note
DynamoDB might continue to accept data plane operation requests, such as GetItem and
PutItem, on a table in the DELETING state until the table deletion is complete.
Table names are unique among those associated with the AWS account issuing the request and the AWS
region that receives the request (such as dynamodb.us-west-1.amazonaws.com). Each DynamoDB
endpoint is entirely independent. For example, if you have two tables called "MyTable," one in
dynamodb.us-west-2.amazonaws.com and one in dynamodb.us-west-1.amazonaws.com, they are
completely independent and do not share any data; deleting one does not delete the other.
Use the DescribeTables (p. 984) operation to check the status of the table.
Requests
Syntax
{"TableName":"Table1"}
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: 4HONCKIVH1BFUDQ1U68CTG3N27VV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 311
Date: Sun, 14 Aug 2011 22:56:22 GMT
{"TableDescription":
{"CreationDateTime":1.313362508446E9,
"KeySchema":
{"HashKeyElement":{"AttributeName":"user","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"time","AttributeType":"N"}},
"ProvisionedThroughput":{"ReadCapacityUnits":10,"WriteCapacityUnits":10},
"TableName":"Table1",
"TableStatus":"DELETING"
}
}
Examples
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.DeleteTable
content-type: application/x-amz-json-1.0
content-length: 40
{"TableName":"favorite-movies-table"}
Sample Response
HTTP/1.1 200 OK
x-amzn-RequestId: 4HONCKIVH1BFUDQ1U68CTG3N27VV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 160
Date: Sun, 14 Aug 2011 17:20:03 GMT
{"TableDescription":
{"CreationDateTime":1.313362508446E9,
"KeySchema":
{"HashKeyElement":{"AttributeName":"name","AttributeType":"S"}},
"TableName":"favorite-movies-table",
"TableStatus":"DELETING"
}
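In the AWS SDK for Python (boto3), which targets the current API version, the same deletion can be sketched as follows. The waiter is optional; it simply polls DescribeTable until the table is gone, matching the guidance above to check the table status.
import boto3

dynamodb = boto3.client("dynamodb")

# Request deletion; the table enters the DELETING state immediately.
dynamodb.delete_table(TableName="favorite-movies-table")

# Optionally block until DynamoDB finishes deleting the table.
dynamodb.get_waiter("table_not_exists").wait(TableName="favorite-movies-table")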
Related Actions
• CreateTable (p. 972)
• DescribeTables (p. 984)
DescribeTables
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
Returns information about the table, including the current status of the table, the primary key schema,
and when the table was created. DescribeTable results are eventually consistent. If you use DescribeTable
too early in the process of creating a table, DynamoDB returns a ResourceNotFoundException. If you
use DescribeTable too early in the process of updating a table, the new values might not be immediately
available.
Requests
Syntax
{"TableName":"Table1"}
Responses
Syntax
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
Content-Length: 543
{"Table":
{"CreationDateTime":1.309988345372E9,
ItemCount:1,
"KeySchema":
{"HashKeyElement":{"AttributeName":"AttributeName1","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"AttributeName2","AttributeType":"N"}},
"ProvisionedThroughput":{"LastIncreaseDateTime": Date, "LastDecreaseDateTime": Date,
"ReadCapacityUnits":10,"WriteCapacityUnits":10},
"TableName":"Table1",
"TableSizeBytes":1,
"TableStatus":"ACTIVE"
}
}
Note
The ItemCount and TableSizeBytes values are updated approximately every six hours. Recent changes
might not be reflected in these values.
Special Errors
No errors are specific to this operation.
Examples
The following examples show an HTTP POST request and response using the DescribeTable operation for
a table named "users". The table has a composite primary key.
Sample Request
{"TableName":"users"}
Sample Response
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 543
{"Table":
{"CreationDateTime":1.309988345372E9,
"ItemCount":23,
"KeySchema":
{"HashKeyElement":{"AttributeName":"user","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"time","AttributeType":"N"}},
"ProvisionedThroughput":{"LastIncreaseDateTime": 1.309988345384E9,
"ReadCapacityUnits":10,"WriteCapacityUnits":10},
"TableName":"users",
"TableSizeBytes":949,
"TableStatus":"ACTIVE"
}
}
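With the AWS SDK for Python (boto3), the current API's equivalent operation is DescribeTable, as in this minimal sketch:
import boto3

dynamodb = boto3.client("dynamodb")

# Returns status, key schema, creation time, item count, size in bytes,
# and provisioned-throughput settings for the named table.
table = dynamodb.describe_table(TableName="users")["Table"]
print(table["TableStatus"], table["ItemCount"], table["TableSizeBytes"])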
Related Actions
• CreateTable (p. 972)
• DeleteTable (p. 981)
• ListTables (p. 989)
GetItem
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
The GetItem operation returns a set of Attributes for an item that matches the primary key. If there
is no matching item, GetItem does not return any data.
The GetItem operation provides an eventually consistent read by default. If eventually consistent reads
are not acceptable for your application, use ConsistentRead. Although this operation might take
longer than a standard read, it always returns the last updated value. For more information, see Read
Consistency (p. 15).
Requests
Syntax
{"TableName":"Table1",
"Key":
{"HashKeyElement": {"S":"AttributeValue1"},
"RangeKeyElement": {"N":"AttributeValue2"}
},
"AttributesToGet":["AttributeName3","AttributeName4"],
"ConsistentRead":Boolean
}
Responses
Syntax
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 144
{"Item":{
"AttributeName3":{"S":"AttributeValue3"},
"AttributeName4":{"N":"AttributeValue4"},
"AttributeName5":{"B":"dmFsdWU="}
},
"ConsumedCapacityUnits": 0.5
}
Special Errors
No errors are specific to this operation.
Examples
For examples using the AWS SDK, see Working with Items in DynamoDB (p. 372).
Sample Request
{"TableName":"comptable",
"Key":
{"HashKeyElement":{"S":"Julie"},
"RangeKeyElement":{"N":"1307654345"}},
"AttributesToGet":["status","friends"],
"ConsistentRead":true
}
Sample Response
Notice the ConsumedCapacityUnits value is 1, because the optional parameter ConsistentRead is set
to true. If ConsistentRead is set to false (or not specified) for the same request, the response is
eventually consistent and the ConsumedCapacityUnits value would be 0.5.
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 72
{"Item":
{"friends":{"SS":["Lynda, Aaron"]},
"status":{"S":"online"}
},
"ConsumedCapacityUnits": 1
}
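The same strongly consistent read can be sketched with the AWS SDK for Python (boto3) against the current API version; ProjectionExpression replaces the older AttributesToGet parameter, and the key attribute names ("user" and "time") are assumed from the composite-key samples in this section.
import boto3

dynamodb = boto3.client("dynamodb")

# Strongly consistent read of the "status" and "friends" attributes.
# "#s" is a placeholder because "status" is a reserved word in expressions.
response = dynamodb.get_item(
    TableName="comptable",
    Key={"user": {"S": "Julie"}, "time": {"N": "1307654345"}},
    ProjectionExpression="#s, friends",
    ExpressionAttributeNames={"#s": "status"},
    ConsistentRead=True,
)
print(response.get("Item"))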
ListTables
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
Returns an array of all the tables associated with the current account and endpoint.
Each DynamoDB endpoint is entirely independent. For example, if you have two tables called "MyTable,"
one in dynamodb.us-west-2.amazonaws.com and one in dynamodb.us-east-1.amazonaws.com, they
are completely independent and do not share any data. The ListTables operation returns all of the table
names associated with the account making the request, for the endpoint that receives the request.
Requests
Syntax
{"ExclusiveStartTableName":"Table1","Limit":3}
The ListTables operation, by default, returns all of the table names associated with the account making
the request, for the endpoint that receives the request.
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: S1LEK2DPQP8OJNHVHL8OU2M7KRVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 81
Date: Fri, 21 Oct 2011 20:35:38 GMT
{"TableNames":["Table1","Table2","Table3"], "LastEvaluatedTableName":"Table3"}
Special Errors
No errors are specific to this operation.
Examples
The following examples show an HTTP POST request and response using the ListTables operation.
Sample Request
{"ExclusiveStartTableName":"comp2","Limit":3}
Sample Response
HTTP/1.1 200 OK
x-amzn-RequestId: S1LEK2DPQP8OJNHVHL8OU2M7KRVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 81
Date: Fri, 21 Oct 2011 20:35:38 GMT
{"LastEvaluatedTableName":"comp5","TableNames":["comp3","comp4","comp5"]}
Related Actions
• DescribeTables (p. 984)
• CreateTable (p. 972)
• DeleteTable (p. 981)
PutItem
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
Creates a new item, or replaces an old item with a new item (including all the attributes). If an item
already exists in the specified table with the same primary key, the new item completely replaces the
existing item. You can perform a conditional put (insert a new item if one with the specified primary key
doesn't exist), or replace an existing item if it has certain attribute values.
Attribute values may not be null; string and binary type attributes must have lengths greater than
zero; and set type attributes must not be empty. Requests with empty values will be rejected with a
ValidationException.
Note
To ensure that a new item does not replace an existing item, use a conditional put operation
with Exists set to false for the primary key attribute, or attributes.
For more information about using PutItem, see Working with Items in DynamoDB (p. 372).
Requests
Syntax
{"TableName":"Table1",
"Item":{
"AttributeName1":{"S":"AttributeValue1"},
"AttributeName2":{"N":"AttributeValue2"},
"AttributeName5":{"B":"dmFsdWU="}
},
"Expected":{"AttributeName3":{"Value": {"S":"AttributeValue"}, "Exists":Boolean}},
"ReturnValues":"ReturnValuesConstant"}
Use "Expected" with "Exists":false to require that the attribute is absent:
"Expected": {"Color":{"Exists":false}}
Use "Exists":true together with a "Value" to require that the attribute equals that value:
"Expected": {"Color":{"Exists":true, "Value":{"S":"Yellow"}}}
Specifying a "Value" alone implies "Exists":true:
"Expected": {"Color":{"Value":{"S":"Yellow"}}}
Note
If you specify {"Exists":true} without an attribute value to check, DynamoDB returns an error.
Responses
Syntax
The following syntax example assumes the request specified a ReturnValues parameter of ALL_OLD;
otherwise, the response has only the ConsumedCapacityUnits element.
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 85
{"Attributes":
{"AttributeName3":{"S":"AttributeValue3"},
"AttributeName2":{"SS":"AttributeValue2"},
"AttributeName1":{"SS":"AttributeValue1"},
},
"ConsumedCapacityUnits":1
}
Examples
For examples using the AWS SDK, see Working with Items in DynamoDB (p. 372).
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.PutItem
content-type: application/x-amz-json-1.0
{"TableName":"comp5",
"Item":
{"time":{"N":"300"},
"feeling":{"S":"not surprised"},
"user":{"S":"Riley"}
},
"Expected":
{"feeling":{"Value":{"S":"surprised"},"Exists":true}}
"ReturnValues":"ALL_OLD"
}
Sample Response
HTTP/1.1 200
x-amzn-RequestId: 8952fa74-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 84
{"Attributes":
{"feeling":{"S":"surprised"},
"time":{"N":"300"},
"user":{"S":"Riley"}},
"ConsumedCapacityUnits":1
}
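A sketch of the same conditional put with the AWS SDK for Python (boto3), where the Expected clause becomes a ConditionExpression; under the current API, a failed condition surfaces as a ConditionalCheckFailedException rather than a normal response.
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

# Replace the item only if its current "feeling" is "surprised";
# ReturnValues="ALL_OLD" returns the attributes that were overwritten.
try:
    response = dynamodb.put_item(
        TableName="comp5",
        Item={
            "user": {"S": "Riley"},
            "time": {"N": "300"},
            "feeling": {"S": "not surprised"},
        },
        ConditionExpression="feeling = :expected",
        ExpressionAttributeValues={":expected": {"S": "surprised"}},
        ReturnValues="ALL_OLD",
    )
    print(response.get("Attributes"))
except ClientError as err:
    if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
        raise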
Related Actions
• UpdateItem (p. 1015)
• DeleteItem (p. 977)
• GetItem (p. 987)
• BatchGetItem (p. 963)
Query
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
A Query operation gets the values of one or more items and their attributes by primary key (Query is
only available for hash-and-range primary key tables). You must provide a specific HashKeyValue, and
can narrow the scope of the query using comparison operators on the RangeKeyValue of the primary
key. Use the ScanIndexForward parameter to get results in forward or reverse order by range key.
Queries that do not return results consume the minimum read capacity units according to the type of
read.
Note
If the total number of items meeting the query parameters exceeds the 1MB limit, the query
stops and results are returned to the user with a LastEvaluatedKey to continue the query in a
subsequent operation. Unlike a Scan operation, a Query operation never returns an empty result
set and a LastEvaluatedKey. The LastEvaluatedKey is only provided if the results exceed
1MB, or if you have used the Limit parameter.
You can request a strongly consistent read by setting the ConsistentRead parameter.
Requests
Syntax
{"TableName":"Table1",
"Limit":2,
"ConsistentRead":true,
"HashKeyValue":{"S":"AttributeValue1":},
"RangeKeyCondition": {"AttributeValueList":
[{"N":"AttributeValue2"}],"ComparisonOperator":"GT"}
"ScanIndexForward":true,
"ExclusiveStartKey":{
"HashKeyElement":{"S":"AttributeName1"},
"RangeKeyElement":{"N":"AttributeName2"}
},
"AttributesToGet":["AttributeName1", "AttributeName2", "AttributeName3"]},
}
RangeKeyCondition is a map of an AttributeValueList to a ComparisonOperator, for example EQ
(equal), LT (less than), or GT (greater than).
For BEGINS_WITH, AttributeValueList can contain only one AttributeValue of type String or Binary
(not a Number or a set). The target attribute of the comparison must be a String or Binary (not a
Number or a set).
For BETWEEN, AttributeValueList must contain two AttributeValue elements of the same type,
either String, Number, or Binary (not a set). A target attribute matches if the target value is greater
than or equal to the first element and less than or equal to the second element. If an item contains
an AttributeValue of a different type than the one specified in the request, the value does not match.
For example, {"S":"6"} does not compare to {"N":"6"}. Also, {"N":"6"} does not compare to
{"NS":["6", "2", "1"]}.
ExclusiveStartKey is a HashKeyElement, or a HashKeyElement and RangeKeyElement for a
composite primary key.
Responses
Syntax
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 308
{"Count":2,"Items":[{
"AttributeName1":{"S":"AttributeValue1"},
"AttributeName2":{"N":"AttributeValue2"},
"AttributeName3":{"S":"AttributeValue3"}
},{
"AttributeName1":{"S":"AttributeValue3"},
"AttributeName2":{"N":"AttributeValue4"},
"AttributeName3":{"S":"AttributeValue3"},
"AttributeName5":{"B":"dmFsdWU="}
}],
"LastEvaluatedKey":{"HashKeyElement":{"AttributeValue3":"S"},
"RangeKeyElement":{"AttributeValue4":"N"}
},
"ConsumedCapacityUnits":1
}
Examples
For examples using the AWS SDK, see Working with Queries (p. 455).
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.Query
content-type: application/x-amz-json-1.0
{"TableName":"1-hash-rangetable",
"Limit":2,
"HashKeyValue":{"S":"John"},
"ScanIndexForward":false,
"ExclusiveStartKey":{
"HashKeyElement":{"S":"John"},
"RangeKeyElement":{"S":"The Matrix"}
}
}
Sample Response
HTTP/1.1 200
x-amzn-RequestId: 3647e778-71eb-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 308
{"Count":2,"Items":[{
"fans":{"SS":["Jody","Jake"]},
"name":{"S":"John"},
"rating":{"S":"***"},
"title":{"S":"The End"}
},{
"fans":{"SS":["Jody","Jake"]},
"name":{"S":"John"},
"rating":{"S":"***"},
"title":{"S":"The Beatles"}
}],
"LastEvaluatedKey":{"HashKeyElement":{"S":"John"},"RangeKeyElement":{"S":"The Beatles"}},
"ConsumedCapacityUnits":1
}
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.Query
content-type: application/x-amz-json-1.0
{"TableName":"1-hash-rangetable",
"Limit":2,
"HashKeyValue":{"S":"Airplane"},
"RangeKeyCondition":{"AttributeValueList":[{"N":"1980"}],"ComparisonOperator":"EQ"},
"ScanIndexForward":false}
Sample Response
HTTP/1.1 200
x-amzn-RequestId: 8b9ee1ad-774c-11e0-9172-d954e38f553a
content-type: application/x-amz-json-1.0
content-length: 119
{"Count":1,"Items":[{
"fans":{"SS":["Dave","Aaron"]},
"name":{"S":"Airplane"},
"rating":{"S":"***"},
"year":{"N":"1980"}
}],
"ConsumedCapacityUnits":1
}
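The first sample request above (hash key "John", reverse order, two items per page) can be sketched with the AWS SDK for Python (boto3) using the current API's KeyConditionExpression; "#n" is a placeholder because "name" is a reserved word in expressions.
import boto3

dynamodb = boto3.client("dynamodb")

# Query items whose hash key is "John", newest range-key values first
# (ScanIndexForward=False), at most two items per page.
response = dynamodb.query(
    TableName="1-hash-rangetable",
    KeyConditionExpression="#n = :who",
    ExpressionAttributeNames={"#n": "name"},
    ExpressionAttributeValues={":who": {"S": "John"}},
    ScanIndexForward=False,
    Limit=2,
)
print(response["Items"])
print(response.get("LastEvaluatedKey"))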
Related Actions
• Scan (p. 1005)
Scan
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
The Scan operation returns one or more items and their attributes by performing a full scan of a table.
Provide a ScanFilter to get more specific results.
Note
If the total number of scanned items exceeds the 1MB limit, the scan stops and results
are returned to the user with a LastEvaluatedKey to continue the scan in a subsequent
operation. The results also include the number of items exceeding the limit. A scan can result in
no table data meeting the filter criteria.
The result set is eventually consistent.
Requests
Syntax
{"TableName":"Table1",
"Limit": 2,
"ScanFilter":{
"AttributeName1":{"AttributeValueList":
[{"S":"AttributeValue"}],"ComparisonOperator":"EQ"}
},
"ExclusiveStartKey":{
"HashKeyElement":{"S":"AttributeName1"},
"RangeKeyElement":{"N":"AttributeName2"}
},
"AttributesToGet":["AttributeName1", "AttributeName2", "AttributeName3"]},
}
ScanFilter is a map of an AttributeValueList to a Condition, using comparison operators such as EQ
(equal), NE (not equal), LT (less than), or GT (greater than).
For CONTAINS, AttributeValueList can contain only one AttributeValue of type String, Number, or
Binary (not a set). If the target attribute of the comparison is a String, then the operation checks for
a substring match. If the target attribute of the comparison is Binary, then the operation looks for a
subsequence of the target that matches the input. If the target attribute of the comparison is a set
("SS", "NS", or "BS"), then the operation checks for a member of the set (not as a substring).
For NOT_CONTAINS, AttributeValueList can contain only one AttributeValue of type String,
Number, or Binary (not a set). If the target attribute of the comparison is a String, then the operation
checks for the absence of a substring match. If the target attribute of the comparison is Binary, then
the operation checks for the absence of a subsequence of the target that matches the input. If the
target attribute of the comparison is a set ("SS", "NS", or "BS"), then the operation checks for the
absence of a member of the set (not as a substring).
For BEGINS_WITH, AttributeValueList can contain only one AttributeValue of type String or Binary
(not a Number or a set). The target attribute of the comparison must be a String or Binary (not a
Number or a set).
For BETWEEN, AttributeValueList must contain two AttributeValue elements of the same type,
either String, Number, or Binary (not a set). A target attribute matches if the target value is greater
than or equal to the first element and less than or equal to the second element. If an item contains
an AttributeValue of a different type than the one specified in the request, the value does not match.
For example, {"S":"6"} does not compare to {"N":"6"}. Also, {"N":"6"} does not compare to
{"NS":["6", "2", "1"]}.
ExclusiveStartKey is a HashKeyElement, or a HashKeyElement and RangeKeyElement for a
composite primary key.
Responses
Syntax
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 229
{"Count":2,"Items":[{
"AttributeName1":{"S":"AttributeValue1"},
"AttributeName2":{"S":"AttributeValue2"},
"AttributeName3":{"S":"AttributeValue3"}
},{
"AttributeName1":{"S":"AttributeValue4"},
"AttributeName2":{"S":"AttributeValue5"},
"AttributeName3":{"S":"AttributeValue6"},
"AttributeName5":{"B":"dmFsdWU="}
}],
"LastEvaluatedKey":
{"HashKeyElement":{"S":"AttributeName1"},
"RangeKeyElement":{"N":"AttributeName2"},
"ConsumedCapacityUnits":1,
"ScannedCount":2}
}
Examples
For examples using the AWS SDK, see Working with Scans (p. 473).
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.Scan
content-type: application/x-amz-json-1.0
{"TableName":"1-hash-rangetable","ScanFilter":{}}
Sample Response
HTTP/1.1 200
x-amzn-RequestId: 4e8a5fa9-71e7-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 465
{"Count":4,"Items":[{
"date":{"S":"1980"},
"fans":{"SS":["Dave","Aaron"]},
"name":{"S":"Airplane"},
"rating":{"S":"***"}
},{
"date":{"S":"1999"},
"fans":{"SS":["Ziggy","Laura","Dean"]},
"name":{"S":"Matrix"},
"rating":{"S":"*****"}
},{
"date":{"S":"1976"},
"fans":{"SS":["Riley"]},
"name":{"S":"The Shaggy D.A."},
"rating":{"S":"**"}
},{
"date":{"S":"1985"},
"fans":{"SS":["Fox","Lloyd"]},
"name":{"S":"Back To The Future"},
"rating":{"S":"****"}
}],
"ConsumedCapacityUnits":0.5
"ScannedCount":4}
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.Scan
content-type: application/x-amz-json-1.0
content-length: 125
{"TableName":"comp5",
"ScanFilter":
{"time":
{"AttributeValueList":[{"N":"400"}],
"ComparisonOperator":"GT"}
}
}
Sample Response
HTTP/1.1 200 OK
x-amzn-RequestId: PD1CQK9QCTERLTJP20VALJ60TRVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 262
Date: Mon, 15 Aug 2011 16:52:02 GMT
{"Count":2,
"Items":[
{"friends":{"SS":["Dave","Ziggy","Barrie"]},
"status":{"S":"chatting"},
"time":{"N":"2000"},
"user":{"S":"Casey"}},
{"friends":{"SS":["Dave","Ziggy","Barrie"]},
"status":{"S":"chatting"},
"time":{"N":"2000"},
"user":{"S":"Fredy"}
}],
"ConsumedCapacityUnits":0.5
"ScannedCount":4
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.Scan
content-type: application/x-amz-json-1.0
{"TableName":"comp5",
"Limit":2,
"ScanFilter":
{"time":
{"AttributeValueList":[{"N":"400"}],
"ComparisonOperator":"GT"}
},
"ExclusiveStartKey":
{"HashKeyElement":{"S":"Fredy"},"RangeKeyElement":{"N":"2000"}}
}
Sample Response
HTTP/1.1 200 OK
x-amzn-RequestId: PD1CQK9QCTERLTJP20VALJ60TRVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 232
Date: Mon, 15 Aug 2011 16:52:02 GMT
{"Count":1,
"Items":[
{"friends":{"SS":["Jane","James","John"]},
"status":{"S":"exercising"},
"time":{"N":"2200"},
"user":{"S":"Roger"}}
],
"LastEvaluatedKey":{"HashKeyElement":{"S":"Riley"},"RangeKeyElement":{"N":"250"}},
"ConsumedCapacityUnits":0.5
"ScannedCount":2
}
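Because a scan stops at the 1MB limit (or at Limit items) and hands back a LastEvaluatedKey, a complete scan is a loop. The following sketch uses the AWS SDK for Python (boto3), where the older ScanFilter becomes a FilterExpression; "#t" is a placeholder because "time" is a reserved word in expressions.
import boto3

dynamodb = boto3.client("dynamodb")

# Scan "comp5" for items whose "time" exceeds 400, resuming from
# LastEvaluatedKey until the entire table has been scanned.
kwargs = {
    "TableName": "comp5",
    "FilterExpression": "#t > :min",
    "ExpressionAttributeNames": {"#t": "time"},
    "ExpressionAttributeValues": {":min": {"N": "400"}},
}
while True:
    response = dynamodb.scan(**kwargs)
    for item in response["Items"]:
        print(item)
    if "LastEvaluatedKey" not in response:
        break
    kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]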
Related Actions
• Query (p. 996)
• BatchGetItem (p. 963)
UpdateItem
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
Edits an existing item's attributes. You can perform a conditional update (insert a new attribute name-
value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute
values).
Note
You cannot update the primary key attributes using UpdateItem. Instead, delete the item and
use PutItem to create a new item with new attributes.
The UpdateItem operation includes an Action parameter, which defines how to perform the update.
You can put, delete, or add attribute values.
Attribute values may not be null; string and binary type attributes must have lengths greater than
zero; and set type attributes must not be empty. Requests with empty values will be rejected with a
ValidationException.
• PUT— Adds the specified attribute. If the attribute exists, it is replaced by the new value.
• DELETE— If no value is specified, this removes the attribute and its value. If a set of values is specified,
then the values in the specified set are removed from the old set. So if the attribute value contains
[a,b,c] and the delete action contains [a,c], then the final attribute value is [b]. The type of the
specified value must match the existing value type. Specifying an empty set is not valid.
• ADD— Only use the add action for numbers or if the target attribute is a set (including string sets).
ADD does not work if the target attribute is a single string value or a scalar binary value. The specified
value is added to a numeric value (incrementing or decrementing the existing numeric value) or added
as an additional value in a string set. If a set of values is specified, the values are added to the existing
set. For example if the original set is [1,2] and supplied value is [3], then after the add operation the
set is [1,2,3], not [4,5]. An error occurs if an Add action is specified for a set attribute and the attribute
type specified does not match the existing set type.
If you use ADD for an attribute that does not exist, the attribute and its values are added to the item.
If no item matches the specified primary key:
• PUT— Creates a new item with the specified primary key. Then adds the specified attribute.
• DELETE— Nothing happens.
• ADD— Creates an item with supplied primary key and number (or set of numbers) for the attribute
value. Not valid for a string or a binary type.
Note
If you use ADD to increment or decrement a number value for an item that doesn't exist before
the update, DynamoDB uses 0 as the initial value. Also, if you update an item using ADD to
increment or decrement a number value for an attribute that doesn't exist before the update
(but the item does) DynamoDB uses 0 as the initial value. For example, you use ADD to add +3 to
an attribute that did not exist before the update. DynamoDB uses 0 for the initial value, and the
value after the update is 3.
For more information about using this operation, see Working with Items in DynamoDB (p. 372).
Requests
Syntax
{"TableName":"Table1",
"Key":
{"HashKeyElement":{"S":"AttributeValue1"},
"RangeKeyElement":{"N":"AttributeValue2"}},
"AttributeUpdates":{"AttributeName3":{"Value":
{"S":"AttributeValue3_New"},"Action":"PUT"}},
"Expected":{"AttributeName3":{"Value":{"S":"AttributeValue3_Current"}}},
"ReturnValues":"ReturnValuesConstant"
}
The Action parameter defaults to PUT if it is not specified.
Use "Expected" with "Exists":false to require that the attribute is absent:
"Expected": {"Color":{"Exists":false}}
Use "Exists":true together with a "Value" to require that the attribute equals that value:
"Expected": {"Color":{"Exists":true, "Value":{"S":"Yellow"}}}
Specifying a "Value" alone implies "Exists":true:
"Expected": {"Color":{"Value":{"S":"Yellow"}}}
Note
If you specify {"Exists":true} without an attribute value to check, DynamoDB returns an error.
Responses
Syntax
The following syntax example assumes the request specified a ReturnValues parameter of ALL_OLD;
otherwise, the response has only the ConsumedCapacityUnits element.
HTTP/1.1 200
x-amzn-RequestId: 8966d095-71e9-11e0-a498-71d736f27375
content-type: application/x-amz-json-1.0
content-length: 140
{"Attributes":{
"AttributeName1":{"S":"AttributeValue1"},
"AttributeName2":{"S":"AttributeValue2"},
"AttributeName3":{"S":"AttributeValue3"},
"AttributeName5":{"B":"dmFsdWU="}
},
"ConsumedCapacityUnits":1
}
ConsumedCapacityUnits
The number of capacity units consumed by the operation,
applied toward your provisioned throughput.
For more information, see Managing Throughput
Settings on Provisioned Tables (p. 339).
Type: Number
Examples
For examples using the AWS SDK, see Working with Items in DynamoDB (p. 372).
Sample Request
// This header is abbreviated. For a sample of a complete header, see DynamoDB Low-Level
API.
POST / HTTP/1.1
x-amz-target: DynamoDB_20111205.UpdateItem
content-type: application/x-amz-json-1.0
{"TableName":"comp5",
"Key":
{"HashKeyElement":{"S":"Julie"},"RangeKeyElement":{"N":"1307654350"}},
"AttributeUpdates":
{"status":{"Value":{"S":"online"},
"Action":"PUT"}},
"Expected":{"status":{"Value":{"S":"offline"}}},
"ReturnValues":"ALL_NEW"
}
Sample Response
HTTP/1.1 200 OK
x-amzn-RequestId: 5IMHO7F01Q9P7Q6QMKMMI3R3QRVV4KQNSO5AEMVJF66Q9ASUAAJG
content-type: application/x-amz-json-1.0
content-length: 121
Date: Fri, 26 Aug 2011 21:05:00 GMT
{"Attributes":
{"friends":{"SS":["Lynda, Aaron"]},
"status":{"S":"online"},
"time":{"N":"1307654350"},
"user":{"S":"Julie"}},
"ConsumedCapacityUnits":1
}
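The same conditional update can be sketched with the AWS SDK for Python (boto3); the current API expresses AttributeUpdates as an UpdateExpression and Expected as a ConditionExpression. "#s" is a placeholder because "status" is a reserved word in expressions.
import boto3

dynamodb = boto3.client("dynamodb")

# Set "status" to "online" only if it is currently "offline";
# ReturnValues="ALL_NEW" returns the full item after the update.
response = dynamodb.update_item(
    TableName="comp5",
    Key={"user": {"S": "Julie"}, "time": {"N": "1307654350"}},
    UpdateExpression="SET #s = :new",
    ConditionExpression="#s = :old",
    ExpressionAttributeNames={"#s": "status"},
    ExpressionAttributeValues={
        ":new": {"S": "online"},
        ":old": {"S": "offline"},
    },
    ReturnValues="ALL_NEW",
)
print(response["Attributes"])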
Related Actions
• PutItem (p. 991)
• DeleteItem (p. 977)
UpdateTable
Important
This section refers to API version 2011-12-05, which is deprecated and
should not be used for new applications.
For documentation on the current low-level API, see the Amazon DynamoDB API Reference.
Description
Updates the provisioned throughput for the given table. Setting the throughput for a table helps
you manage performance and is part of the provisioned throughput feature of DynamoDB. For more
information, see Managing Throughput Settings on Provisioned Tables (p. 339).
You can increase or decrease the provisioned throughput values, subject to the maximums and
minimums listed in Limits in DynamoDB (p. 873).
The table must be in the ACTIVE state for this operation to succeed. UpdateTable is an asynchronous
operation; while executing the operation, the table is in the UPDATING state. While the table is in the
UPDATING state, the table still has the provisioned throughput from before the call. The new provisioned
throughput setting is in effect only when the table returns to the ACTIVE state after the UpdateTable
operation.
Requests
Syntax
{"TableName":"Table1",
"ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":15}
}
Responses
Syntax
HTTP/1.1 200 OK
x-amzn-RequestId: CSOC7TJPLR0OOKIRLGOHVAICUFVV4KQNSO5AEMVJF66Q9ASUAAJG
Content-Type: application/json
Content-Length: 311
Date: Tue, 12 Jul 2011 21:31:03 GMT
{"TableDescription":
{"CreationDateTime":1.321657838135E9,
"KeySchema":
{"HashKeyElement":{"AttributeName":"AttributeValue1","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"AttributeValue2","AttributeType":"N"}},
"ProvisionedThroughput":
{"LastDecreaseDateTime":1.321661704489E9,
"LastIncreaseDateTime":1.321663607695E9,
"ReadCapacityUnits":5,
"WriteCapacityUnits":10},
"TableName":"Table1",
"TableStatus":"UPDATING"}}
Examples
Sample Request
content-type: application/x-amz-json-1.0
{"TableName":"comp1",
"ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":15}
}
Sample Response
HTTP/1.1 200 OK
content-type: application/x-amz-json-1.0
content-length: 390
Date: Sat, 19 Nov 2011 00:46:47 GMT
{"TableDescription":
{"CreationDateTime":1.321657838135E9,
"KeySchema":
{"HashKeyElement":{"AttributeName":"user","AttributeType":"S"},
"RangeKeyElement":{"AttributeName":"time","AttributeType":"N"}},
"ProvisionedThroughput":
{"LastDecreaseDateTime":1.321661704489E9,
"LastIncreaseDateTime":1.321663607695E9,
"ReadCapacityUnits":5,
"WriteCapacityUnits":10},
"TableName":"comp1",
"TableStatus":"UPDATING"}
}
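A minimal sketch with the AWS SDK for Python (boto3): issue the update, then optionally wait for the table to leave the UPDATING state, since the new throughput takes effect only once the table is ACTIVE again.
import boto3

dynamodb = boto3.client("dynamodb")

# Request new provisioned throughput; the table moves to UPDATING.
dynamodb.update_table(
    TableName="comp1",
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 15},
)

# Optionally block until the table returns to the ACTIVE state.
dynamodb.get_waiter("table_exists").wait(TableName="comp1")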
Related Actions
• CreateTable (p. 972)
• DescribeTables (p. 984)
• DeleteTable (p. 981)
DAX adds support for transactional operations using Python and .NET (p. 1025), February 14, 2019
DAX supports the TransactWriteItems and TransactGetItems APIs for applications written in Go, Java, .NET, Node.js, and Python. For more information, see In-Memory Acceleration with DAX.
Amazon DynamoDB encrypts all customer data at rest (p. 1025), November 15, 2018
DynamoDB encryption at rest provides an additional layer of data protection by securing your data in the encrypted table, including its primary key and local and global secondary indexes.
Use Amazon DynamoDB Local More Easily with the New Docker Image (p. 1025), August 22, 2018
Now, it's easier to use Amazon DynamoDB local, the downloadable version of DynamoDB, to help you develop and test your DynamoDB applications by using the new DynamoDB local Docker image. For more information, see DynamoDB (Downloadable Version) and Docker.
Updates now available over RSS (p. 1025), July 3, 2018
You can now subscribe to the RSS feed to receive notifications about updates to the Amazon DynamoDB Developer Guide.
Earlier Updates
The following entries describe important changes to the DynamoDB Developer Guide before July 3, 2018.
Go support for DAX, June 26, 2018
Now, you can enable microsecond read performance for Amazon DynamoDB tables in your applications written in Go.
DynamoDB Backup and restore, November 29, 2017
On-Demand Backup allows you to create full backups of your DynamoDB table data for data archival, helping you meet your corporate and governmental regulatory requirements. You can back up tables from a few megabytes to hundreds of terabytes of data, with no impact on the performance and availability of your production applications. For more information, see On-Demand Backup and Restore for DynamoDB (p. 596).
DynamoDB Global tables, November 29, 2017
Global Tables builds upon DynamoDB's global footprint to provide you with a fully managed, multi-region, and multi-master database that provides fast, local read and write performance for massively scaled, global applications. Global Tables replicates your Amazon DynamoDB tables automatically across your choice of AWS regions. For more information, see Global Tables (p. 612).
Node.js support for DAX, October 5, 2017
Node.js developers can leverage Amazon DynamoDB Accelerator (DAX), using the DAX client for Node.js. For more information, see In-Memory Acceleration with DAX (p. 633).
VPC Endpoints for DynamoDB, August 16, 2017
DynamoDB endpoints allow Amazon EC2 instances in your Amazon VPC to access DynamoDB, without exposure to the public Internet. Network traffic between your VPC and DynamoDB does not leave the Amazon network. For more information, see Using Amazon VPC Endpoints to Access DynamoDB (p. 795).
Auto Scaling for DynamoDB, June 14, 2017
DynamoDB auto scaling eliminates the need for manually defining or adjusting provisioned throughput settings. Instead, DynamoDB auto scaling dynamically adjusts read and write capacity in response to actual traffic patterns. This allows a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, DynamoDB auto scaling decreases the provisioned capacity. For more information, see Managing Throughput Capacity Automatically with DynamoDB Auto Scaling (p. 343).
DynamoDB now supports Cost Allocation Tags, January 19, 2017
You can now add tags to your Amazon DynamoDB tables for improved usage categorization and more granular cost reporting. For more information, see Tagging for DynamoDB (p. 357).
DynamoDB Console Update and New Terminology for Primary Key Attributes, November 12, 2015
The DynamoDB management console has been redesigned to be more intuitive and easy to use. As part of this update, we are introducing new terminology for primary key attributes: partition key (formerly known as the hash attribute) and sort key (formerly known as the range attribute).
Amazon DynamoDB Storage Backend for Titan, August 20, 2015
The DynamoDB Storage Backend for Titan is a storage backend for the Titan graph database implemented on top of Amazon DynamoDB. When using the DynamoDB Storage Backend for Titan, your data benefits from the protection of DynamoDB, which runs across Amazon's high-availability data centers. The plugin is available for Titan version 0.4.4 (primarily for compatibility with existing applications) and Titan version 0.5.4 (recommended for new applications). Like other storage backends for Titan, this plugin supports the Tinkerpop stack (versions 2.4 and 2.5), including the Blueprints API and the Gremlin shell. For more information, see Amazon DynamoDB Storage Backend for Titan (p. 936).
DynamoDB cross-region replication
DynamoDB cross-region replication is a client-side solution for maintaining identical copies of DynamoDB tables across different AWS regions, in near real time. You can use cross-region replication to back up DynamoDB tables, or to provide low-latency access to data where users are geographically distributed. For more information, see Cross-Region Replication (p. 587).
AWS CloudTrail support for Amazon DynamoDB, May 28, 2015
DynamoDB is now integrated with CloudTrail. CloudTrail captures API calls made from the DynamoDB console or from the DynamoDB API and tracks them in log files. For more information, see Logging DynamoDB Operations by Using AWS CloudTrail (p. 774) and the AWS CloudTrail User Guide.
Improved support for Query expressions, April 27, 2015
This release adds a new KeyConditionExpression parameter to the Query API. A Query reads items from a table or an index using primary key values. The KeyConditionExpression parameter is a string that identifies primary key names, and conditions to be applied to the key values; the Query retrieves only those items that satisfy the expression. The syntax of KeyConditionExpression is similar to that of other expression parameters in DynamoDB, and allows you to define substitution variables for names and values within the expression. For more information, see Working with Queries (p. 455).
Scan API for secondary indexes, February 10, 2015
In DynamoDB, a Scan operation reads all of the items in a table, applies user-defined filtering criteria, and returns the selected data items to the application. This same capability is now available for secondary indexes too. To scan a local secondary index or a global secondary index, you specify the index name and the name of its parent table. By default, an index Scan returns all of the data in the index; you can use a filter expression to narrow the results that are returned to the application. For more information, see Working with Scans (p. 473).
Online operations for global secondary indexes, January 27, 2015
Online indexing lets you add or remove global secondary indexes on existing tables. With online indexing, you do not need to define all of a table's indexes when you create a table; instead, you can add a new index at any time. Similarly, if you decide you no longer need an index, you can remove it at any time. Online indexing operations are non-blocking, so that the table remains available for read and write activity while indexes are being added or removed. For more information, see Managing Global Secondary Indexes (p. 503).
Document model support with JSON, October 7, 2014
DynamoDB allows you to store and retrieve documents with full support for document models. New data types are fully compatible with the JSON standard and allow you to nest document elements within one another. You can use document path dereference operators to read and write individual elements, without having to retrieve the entire document. This release also introduces new expression parameters for specifying projections, conditions, and update actions when reading or writing data items. To learn more about document model support with JSON, see Data Types (p. 12) and Using Expressions in DynamoDB (p. 383).
Data export and import using the AWS Management Console, March 6, 2014
The DynamoDB console has been enhanced to simplify exports and imports of data in DynamoDB tables. With just a few clicks, you can set up an AWS Data Pipeline to orchestrate the workflow, and an Amazon Elastic MapReduce cluster to copy data from DynamoDB tables to an Amazon S3 bucket, or vice versa. You can perform an export or import one time only, or set up a daily export job. You can even perform cross-region exports and imports, copying DynamoDB data from a table in one AWS region to a table in another AWS region. For more information, see Exporting and Importing DynamoDB Data Using AWS Data Pipeline (p. 928).
Reorganized higher-level API documentation, January 20, 2014
Information about the following APIs is now easier to find:
• Java: DynamoDBMapper
• .NET: Document model and object-persistence model
Global secondary indexes, December 12, 2013
DynamoDB adds support for global secondary indexes. As with a local secondary index, you define a global secondary index by using an alternate key from a table and then issuing Query requests on the index. Unlike a local secondary index, the partition key for the global secondary index does not have to be the same as that of the table; it can be any scalar attribute from the table. The sort key is optional and can also be any scalar table attribute. A global secondary index also has its own provisioned throughput settings, which are separate from those of the parent table. For more information, see Improving Data Access with Secondary Indexes (p. 493) and Global Secondary Indexes (p. 496).
Fine-grained access control, October 29, 2013
DynamoDB adds support for fine-grained access control. This feature allows customers to specify which principals (users, groups, or roles) can access individual items and attributes in a DynamoDB table or secondary index. Applications can also leverage web identity federation to offload the task of user authentication to a third-party identity provider, such as Facebook, Google, or Login with Amazon. In this way, applications (including mobile apps) can handle very large numbers of users, while ensuring that no one can access DynamoDB data items unless they are authorized to do so. For more information, see Using IAM Policy Conditions for Fine-Grained Access Control (p. 730).
4 KB read capacity unit size, May 14, 2013
The capacity unit size for reads has increased from 1 KB to 4 KB. This enhancement can reduce the number of provisioned read capacity units required for many applications. For example, prior to this release, reading a 10 KB item would consume 10 read capacity units; now that same 10 KB read would consume only 3 units (10 KB / 4 KB, rounded up to the next 4 KB boundary). For more information, see Read/Write Capacity Mode (p. 16).
Local secondary indexes, April 18, 2013
DynamoDB adds support for local secondary indexes. You can define sort key indexes on non-key attributes, and then use these indexes in Query requests. With local secondary indexes, applications can efficiently retrieve data items across multiple dimensions. For more information, see Local Secondary Indexes (p. 532).
New API version, April 18, 2013
With this release, DynamoDB introduces a new API version (2012-08-10). The previous API version (2011-12-05) is still supported for backward compatibility with existing applications. New applications should use the new API version 2012-08-10. We recommend that you migrate your existing applications to API version 2012-08-10, since new DynamoDB features (such as local secondary indexes) will not be backported to the previous API version. For more information on API version 2012-08-10, see the Amazon DynamoDB API Reference.
IAM policy variable support, April 4, 2013
The IAM access policy language now supports variables. When a policy is evaluated, any policy variables are replaced with values that are supplied by context-based information from the authenticated user's session. You can use policy variables to define general-purpose policies without explicitly listing all the components of the policy. For more information about policy variables, go to Policy Variables in the AWS Identity and Access Management Using IAM guide.
PHP code examples updated for AWS SDK for PHP version 2, January 23, 2013
Version 2 of the AWS SDK for PHP is now available. The PHP code examples in the Amazon DynamoDB Developer Guide have been updated to use this new SDK. For more information on Version 2 of the SDK, see AWS SDK for PHP.
Support for binary data type, August 21, 2012
In addition to the Number and String types, DynamoDB now supports a Binary data type.
DynamoDB table items can be updated and copied using the DynamoDB console, August 14, 2012
DynamoDB users can now update and copy table items using the DynamoDB console, in addition to being able to add and delete items. This new functionality simplifies making changes to individual items through the console.
Table explorer support in DynamoDB Console, May 22, 2012
The DynamoDB console now supports a table explorer that enables you to browse and query the data in your tables. You can also insert new items or delete existing items. The Creating Tables and Loading Sample Data (p. 323) and Using the Console (p. 51) sections have been updated for these features.
Documented more error codes, April 5, 2012
For more information, see Error Handling (p. 220).