Amazon DynamoDB: Scalable NoSQL with Predictable Performance

Deep dive into Amazon DynamoDB architecture, partitioned tables, eventual consistency, on-demand capacity, and single-digit millisecond latency.



DynamoDB traces back to an internal Amazon project tackling the shopping cart’s scalability problems in the mid-2000s. The 2007 Dynamo paper introduced consistent hashing, vector clocks, and eventual consistency, ideas that shifted how engineers approached distributed storage.

Amazon launched DynamoDB as a managed service in 2012. Today it is one of the most widely-used NoSQL databases, powering massive-scale applications. The core insight from the original paper: give up strong consistency, get scalability and availability.


Core Architecture

DynamoDB organizes data in tables. Each table holds items, and each item has attributes. DynamoDB is schemaless except for the primary key, so items can have different attributes.

Primary keys come in two flavors:

  • Simple: Single attribute (partition key only)
  • Composite: Two attributes (partition key + sort key)
# DynamoDB item example
{
    "UserId": "user-12345",        # Partition key
    "OrderId": "order-9876",        # Sort key (if composite)
    "Status": "shipped",
    "TotalAmount": 129.99,
    "Items": ["SKU-001", "SKU-042"],  # List attribute
    "Metadata": {                    # Map attribute
        "ShippingAddress": "123 Main St",
        "Carrier": "UPS"
    }
}

DynamoDB spreads data across storage nodes called partitions. The partition key determines which partition holds your data, and DynamoDB routes requests accordingly.


Data Distribution: Consistent Hashing

DynamoDB uses consistent hashing for data distribution. Hash the partition key to find the partition. Add or remove nodes, and only a fraction of the data needs to move. This is the core mechanism behind DynamoDB’s horizontal scaling.

graph TD
    A[Key Space 0-2^32] --> B[Partition 1]
    A --> C[Partition 2]
    A --> D[Partition 3]
    A --> E[Partition N]

    B --> F[Replica 1]
    C --> G[Replica 2]
    D --> H[Replica 3]

    F --> I[us-east-1a]
    G --> J[us-east-1b]
    H --> K[us-east-1c]

Each partition replicates across three availability zones automatically. AWS uses a Paxos variant for partition leader election, though the details are internal.

Partitions split when they exceed capacity (roughly 10 GB of data, 3,000 RCU, or 1,000 WCU per partition). As your table grows, DynamoDB handles the redistribution.
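The hashing and routing described in this section can be sketched as a toy hash ring. This is a simplified model for intuition only; DynamoDB's real partition placement is internal:

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring mapping partition keys to partitions."""

    def __init__(self, partitions):
        # Several virtual points per partition smooth the distribution
        self.ring = sorted(
            (self._hash(f"{p}#{v}"), p) for p in partitions for v in range(8)
        )
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        # First 4 bytes of MD5 as an int in the 0..2^32 key space
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], 'big')

    def lookup(self, partition_key):
        # The first ring point clockwise from the key's hash owns the key
        idx = bisect.bisect(self.points, self._hash(partition_key)) % len(self.points)
        return self.ring[idx][1]

ring = HashRing(['partition-1', 'partition-2', 'partition-3'])
owner = ring.lookup('user-12345')  # Same key always routes to the same partition
```

Because each partition owns only a few arcs of the ring, adding a fourth partition moves roughly a quarter of the keys rather than reshuffling everything.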


Hot Partitions: Problems and Mitigation

A hot partition occurs when one partition key receives disproportionately high traffic. Since DynamoDB routes all requests for that key to a single partition, it becomes a bottleneck while other partitions sit idle.

Common causes:

  • Using a low-cardinality attribute as partition key (e.g., “status” with values like “active”, “pending”, “completed”)
  • Viral content where one item gets massively more reads than others
  • Time-based keys causing all writes during peak hours to hit one partition

Mitigation strategies:

  1. Randomized partition keys: Add a random suffix to distribute writes across partitions
import random

def generate_partition_key(user_id):
    # Spread writes across 10 virtual partitions
    random_suffix = random.randint(0, 9)
    return f"{user_id}#{random_suffix}"

# Trade-off: now you must query all 10 partitions to find a user
  2. Write sharding with high-cardinality salts: Use multiple distinct values that map to the same underlying entity

  3. Read caching: For read-heavy hot items, use DAX (DynamoDB Accelerator) to cache aggressively

  4. Adaptive capacity: DynamoDB automatically shifts unused throughput toward hot partitions and can split them for heat, but a single partition key is still capped by one partition's limits
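The trade-off noted under the randomized-suffix example is that reads must fan out across every shard. A scatter-gather read sketch, with the key format following that example (attribute name PK is illustrative):

```python
def query_all_shards(table, user_id, num_shards=10):
    """Gather all rows for one logical key spread across write shards."""
    items = []
    for shard in range(num_shards):
        # One query per sharded partition key, e.g. "user-123#0" .. "user-123#9"
        response = table.query(
            KeyConditionExpression='PK = :pk',
            ExpressionAttributeValues={':pk': f'{user_id}#{shard}'}
        )
        items.extend(response['Items'])
    return items
```

Each shard query costs read capacity even when the shard is empty, which is why the shard count should stay small and fixed.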

Warning signs of hot partitions:

  • ProvisionedThroughputExceededException errors affecting only certain keys
  • Write throttling on specific keys while table-level ConsumedWriteCapacityUnits stays well below provisioned capacity
  • Latency spikes on specific keys during traffic bursts
# Monitoring hot partition with CloudWatch
import boto3

cloudwatch = boto3.client('cloudwatch')

# Table-level throughput (per-partition metrics are not exposed;
# use CloudWatch Contributor Insights to find hot keys)
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/DynamoDB',
    MetricName='ConsumedWriteCapacityUnits',
    Dimensions=[
        {'Name': 'TableName', 'Value': 'Orders'}
    ],
    StartTime='2024-01-01T00:00:00Z',
    EndTime='2024-01-02T00:00:00Z',
    Period=3600,
    Statistics=['Sum']
)

When hot partitions are unavoidable: Consider whether DynamoDB is the right fit, or split the entity into multiple tables with different partition schemes.


Consistency Models

DynamoDB lets you pick consistency per request:

Eventually Consistent Reads (default):

  • Returns data within milliseconds
  • Might occasionally return stale data
  • Highest throughput, lowest latency
  • Costs 0.5 RCU per 4 KB read

Strongly Consistent Reads:

  • Always returns the most recent write
  • Higher latency due to synchronous replication
  • Costs 1 RCU per 4 KB read
import boto3

dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamodb.Table('Orders')

# Eventually consistent read (default)
response = table.get_item(Key={'OrderId': '123'})

# Strongly consistent read
response = table.get_item(
    Key={'OrderId': '123'},
    ConsistentRead=True
)

For read-heavy applications, eventual consistency is the obvious choice. For inventory checks or anything needing fresh data, strong consistency is worth the extra RCU.


Capacity Management

DynamoDB offers two capacity modes, switchable once per day:

Provisioned Mode

You declare expected reads and writes per second. DynamoDB reserves that capacity.

# Provisioned capacity specification
{
    'TableName': 'Orders',
    'ProvisionedThroughput': {
        'ReadCapacityUnits': 100,    # 100 strongly consistent reads/sec
        'WriteCapacityUnits': 50     # 50 writes/sec
    }
}

Auto-scaling can adjust capacity based on utilization, scaling up during spikes and down during quiet periods.

On-Demand Mode

Pay per request. No capacity planning. More expensive at sustained high throughput, but simpler for unpredictable workloads.

Mode | Best For | Cost Model
Provisioned | Steady workloads | Fixed hourly rate + scaling
On-Demand | Variable or spiky workloads | Per-request pricing

For a steady workload, provisioned capacity sized to the actual request rate comes out several times cheaper than on-demand. But for wildly varying traffic, on-demand sidesteps the need to provision for peak.
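Mode switching itself is a single UpdateTable call, allowed once per 24 hours. A small helper makes the parameters explicit (the injected client and table name are illustrative):

```python
def set_capacity_mode(client, table_name, on_demand, rcu=None, wcu=None):
    """Switch a table's billing mode via UpdateTable (allowed once per 24h)."""
    if on_demand:
        return client.update_table(TableName=table_name,
                                   BillingMode='PAY_PER_REQUEST')
    # Returning to provisioned mode must restate explicit throughput
    return client.update_table(
        TableName=table_name,
        BillingMode='PROVISIONED',
        ProvisionedThroughput={'ReadCapacityUnits': rcu,
                               'WriteCapacityUnits': wcu},
    )

# With a real client: set_capacity_mode(boto3.client('dynamodb'), 'Orders', True)
```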

On-Demand vs Provisioned Break-Even Calculator

Use this framework to decide which capacity mode makes financial sense for your workload:

def calculate_monthly_cost_dynamodb(mode, rcu_per_second, wcu_per_second):
    """
    Calculate monthly DynamoDB cost for a given mode and capacity.

    Pricing as of 2024 (us-east-1):
    - Provisioned RCU: $0.00013 per RCU-hour
    - Provisioned WCU: $0.00065 per WCU-hour
    - On-demand: $0.25 per million read units
    - On-demand: $1.25 per million write units

    Each RCU supports 1 strongly consistent read/sec or 2 eventually consistent reads/sec
    Each WCU supports 1 write/sec
    """
    if mode == 'provisioned':
        hours_per_month = 730  # Average month
        rcu_cost = rcu_per_second * hours_per_month * 0.00013
        wcu_cost = wcu_per_second * hours_per_month * 0.00065
        return rcu_cost + wcu_cost
    elif mode == 'ondemand':
        # Convert per-second capacity to monthly request units
        reads_per_month = rcu_per_second * 3600 * 24 * 30
        writes_per_month = wcu_per_second * 3600 * 24 * 30
        # On-demand bills per request unit; one strongly consistent read
        # of up to 4 KB consumes one read request unit
        read_units = reads_per_month  # Assuming strongly consistent reads
        write_units = writes_per_month
        return (read_units / 1_000_000) * 0.25 + (write_units / 1_000_000) * 1.25

# Example: 100 RCU, 50 WCU sustained
rcu = 100
wcu = 50
provisioned_cost = calculate_monthly_cost_dynamodb('provisioned', rcu, wcu)
ondemand_cost = calculate_monthly_cost_dynamodb('ondemand', rcu, wcu)

print(f"Provisioned: ${provisioned_cost:.2f}/month")
print(f"On-demand: ${ondemand_cost:.2f}/month")
print(f"On-demand is {ondemand_cost/provisioned_cost:.1f}x more expensive at sustained load")

Break-even calculation:

The break-even point occurs when on-demand pricing equals provisioned pricing:

provisioned_monthly = RCU * 730 * $0.00013 + WCU * 730 * $0.00065
ondemand_monthly = (RCU * 2592000 / 1M) * $0.25 + (WCU * 2592000 / 1M) * $1.25

For 100 RCU and 50 WCU sustained: provisioned is ~$33.21/month while on-demand is ~$226.80/month at the same throughput. On-demand runs roughly 7x the provisioned price at full utilization, so it only wins when average utilization falls below roughly 15% of what you would otherwise provision.

Practical decision matrix:

Scenario | Recommended Mode | Reason
Steady 24/7 traffic | Provisioned | ~7x cheaper at sustained full utilization
Predictable daily peaks | Provisioned + auto-scaling | Base capacity + elastic peak handling
Unpredictable traffic | On-demand | No throttling risk, pay per use
New table / unknown load | On-demand | Avoid over-provisioning
Migration with known load | Provisioned | More cost-effective once load is known
Burst-heavy (night/day) | Provisioned + auto-scaling | Reserve for base, scale for bursts
Low average utilization (below ~15%) | On-demand | Idle provisioned capacity still bills

Auto-scaling as a hybrid approach:

For workloads with a predictable baseline but occasional spikes, provisioned with auto-scaling captures most savings while handling bursts:

# Configure auto-scaling for a table
import boto3

appautoscaling = boto3.client('application-autoscaling')

# Register scalable target
appautoscaling.register_scalable_target(
    ServiceNamespace='dynamodb',
    ResourceId='table/Orders',
    ScalableDimension='dynamodb:table:WriteCapacityUnits',
    MinCapacity=10,   # Never below 10 WCU
    MaxCapacity=1000  # Can scale to 1000 WCU
)

# Define target-tracking scaling policy
appautoscaling.put_scaling_policy(
    ServiceNamespace='dynamodb',
    ResourceId='table/Orders',
    ScalableDimension='dynamodb:table:WriteCapacityUnits',
    PolicyName='OrdersWriteCapacityScalingPolicy',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 70.0,  # Keep utilization at 70%
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'DynamoDBWriteCapacityUtilization'
        }
    }
)

Auto-scaling has a 4-5 minute adjustment period. For very spiky workloads that exceed auto-scaling response time, on-demand or a manual capacity increase handles the spike.


Primary Operations

Core DynamoDB operations:

PutItem / BatchWriteItem: Write items, with optional conditional writes

table.put_item(
    Item={'UserId': '123', 'Email': 'user@example.com'},
    ConditionExpression='attribute_not_exists(UserId)'  # Prevent overwrites
)

GetItem / BatchGetItem: Retrieve items by primary key

# Single item retrieval
response = table.get_item(Key={'UserId': '123'})

# Batch retrieval (up to 100 items)
response = table.batch_get_item(
    RequestItems={
        'Orders': {'Keys': [{'OrderId': '1'}, {'OrderId': '2'}]}
    }
)

Query: Retrieve items by partition key and optional sort key range

# Query all orders for user-123 placed in 2024
response = table.query(
    KeyConditionExpression='UserId = :uid AND begins_with(OrderId, :year)',
    ExpressionAttributeValues={
        ':uid': 'user-123',
        ':year': '2024'
    }
)

Scan: Full table scan - expensive, avoid in production

# Scan reads every item - use sparingly
response = table.scan()
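When a scan is unavoidable, results come back in pages. A helper that follows LastEvaluatedKey, the pagination token the Scan API actually returns:

```python
def scan_all(table, **scan_kwargs):
    """Page through a full scan using LastEvaluatedKey."""
    items = []
    response = table.scan(**scan_kwargs)
    items.extend(response['Items'])
    # DynamoDB returns LastEvaluatedKey while more pages remain
    while 'LastEvaluatedKey' in response:
        response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'],
                              **scan_kwargs)
        items.extend(response['Items'])
    return items
```

Each page still consumes read capacity for everything it touches, so pagination bounds request size, not cost.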

Error Handling and Retry Logic

DynamoDB throws specific exceptions that require different handling strategies. The most common is ProvisionedThroughputExceededException, which occurs when you exceed your reserved read or write capacity.

import boto3
from botocore.config import Config
from botocore.exceptions import ClientError
import time
import random

# Configure retries with exponential backoff and jitter
config = Config(
    retries={
        'max_attempts': 10,
        'mode': 'adaptive'  # Adaptive mode adds rate-limiting awareness
    }
)
dynamodb = boto3.resource('dynamodb', config=config)

def put_item_with_backoff(table_name, item, max_retries=8):
    """Put item with exponential backoff and jitter."""
    table = dynamodb.Table(table_name)
    base_delay = 0.025  # 25ms base (DynamoDB SDK default)

    for attempt in range(max_retries):
        try:
            table.put_item(Item=item)
            return True
        except ClientError as e:
            error_code = e.response['Error']['Code']

            if error_code == 'ProvisionedThroughputExceededException':
                # Exponential backoff with full jitter, capped at 5 seconds
                delay = random.uniform(0, min(5.0, base_delay * (2 ** attempt)))
                time.sleep(delay)
            elif error_code == 'ThrottlingException':
                # Same handling as ProvisionedThroughputExceeded
                delay = random.uniform(0, min(5.0, base_delay * (2 ** attempt)))
                time.sleep(delay)
            elif error_code == 'InternalServerError':
                # Transient server-side error - retry
                delay = random.uniform(0, min(5.0, base_delay * (2 ** attempt)))
                time.sleep(delay)
            else:
                # Non-retryable error
                raise

    raise Exception(f"Failed after {max_retries} attempts")

Common DynamoDB exceptions and their handling:

Exception | Cause | Retry Strategy
ProvisionedThroughputExceededException | RCU/WCU exceeded | Exponential backoff with jitter, 25ms base
ThrottlingException | Request rate too high | Same as above
InternalServerError | DynamoDB internal error | Retry with backoff
RequestLimitExceeded | Too many requests simultaneously | Backoff and reduce concurrency
ResourceNotFoundException | Table or GSI does not exist | Do not retry - fix the code
ConditionalCheckFailedException | Condition expression not met | Do not retry - business logic issue
TransactionCanceledException | Transaction conflict | Retry with backoff after resolution
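For TransactionCanceledException specifically, a hedged sketch of the retry loop. The callable would typically wrap a client.transact_write_items(TransactItems=[...]) call; inspecting the error code via getattr keeps the sketch dependency-free (with botocore available you would catch ClientError directly):

```python
import random
import time

def transact_with_retry(run_transaction, max_attempts=5):
    """Retry a transactional write when it is cancelled by a conflict."""
    for attempt in range(max_attempts):
        try:
            return run_transaction()
        except Exception as e:
            # botocore's ClientError carries the code under e.response
            code = getattr(e, 'response', {}).get('Error', {}).get('Code')
            if code != 'TransactionCanceledException':
                raise  # Non-conflict errors are not retried here
            # Jittered exponential backoff before retrying the conflict
            time.sleep(random.uniform(0, 0.05 * (2 ** attempt)))
    raise RuntimeError('transaction kept conflicting')
```

The helper retries only the conflict case and re-raises everything else, matching the table above.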

Exponential backoff with jitter formula:

delay = random(0, min(cap, base * 2^attempt))

Parameter | Value | Notes
base_delay | 25ms | SDK default, adjust based on table size
max_delay | 5000ms | Cap to avoid long waits
max_attempts | 10 | After 10 failures, something is wrong
jitter | full | Random value between 0 and calculated delay

For provisioned capacity with auto-scaling, the SDK’s built-in retry handles most throttling automatically. On-demand tables can also throttle during sudden spikes: capacity adapts to traffic, but a burst beyond roughly double the previous peak can be throttled until DynamoDB splits partitions to absorb it.


Secondary Indexes

Primary key lookups are efficient, but access patterns often need other attributes. DynamoDB provides two index types:

Global Secondary Index (GSI):

  • Own partition key and optional sort key
  • Queries across the entire table
  • Eventually consistent reads only
  • Projects attributes from main table

Local Secondary Index (LSI):

  • Shares partition key with main table
  • Different sort key
  • Strongly consistent reads only
  • Must be created with table, cannot be modified later
# Creating a GSI for email lookups
table.update(
    AttributeDefinitions=[{'AttributeName': 'Email', 'AttributeType': 'S'}],
    GlobalSecondaryIndexUpdates=[{
        'Create': {
            'IndexName': 'EmailIndex',
            'KeySchema': [{'AttributeName': 'Email', 'KeyType': 'HASH'}],
            'Projection': {'ProjectionType': 'ALL'},
            'ProvisionedThroughput': {
                'ReadCapacityUnits': 10,
                'WriteCapacityUnits': 10
            }
        }
    }]
)

GSIs handle most flexible access patterns. LSI only matters when you need strongly consistent queries within a partition key.

Sparse Index Tricks for GSIs

GSIs only index items that have the GSI key attribute. This behavior enables powerful patterns where you selectively include items in an index.

Pattern: Sparse GSI for Rarely-Accessed Conditional Data. Only items with the GSI key attribute get indexed. Items without that attribute are invisible to the GSI. This lets you maintain a sparse index containing only a subset of items.

# Example: GSI for "orders with issues only"
# Items without IssueFlag are not indexed, keeping GSI small

# Order item with no issue - not indexed
table.put_item(Item={
    'OrderId': 'order-001',
    'CustomerId': 'cust-123',
    'Status': 'delivered',
    'Total': 99.99
    # No IssueFlag attribute - invisible to IssueIndex GSI
})

# Order item with issue - indexed
table.put_item(Item={
    'OrderId': 'order-002',
    'CustomerId': 'cust-123',
    'Status': 'delivered',
    'Total': 99.99,
    'IssueFlag': 'REFUND_REQUESTED',
    'IssueReason': 'damaged_in_transit'
    # IssueFlag present - appears in IssueIndex GSI
})

Pattern: Sparse GSI for Soft-Deleted Items. Instead of deleting items immediately, set a Deleted attribute and filter it out when querying for active records. Note that a filter expression is applied after the read, so you still pay for the deleted items it discards; a strictly sparse variant keys the GSI on an attribute that is removed at soft-delete time, so deleted items drop out of the index entirely. This pattern is useful when you need deleted-item history in DynamoDB Streams but want efficient queries for active items.

# Soft delete - add Deleted attribute instead of deleting
table.update_item(
    Key={'OrderId': 'order-001'},
    UpdateExpression='SET Deleted = :del, DeletedAt = :now',
    ExpressionAttributeValues={':del': True, ':now': int(time.time())}
)

# Query GSI for non-deleted orders only
response = table.query(
    IndexName='StatusIndex',
    KeyConditionExpression='CustomerId = :cid AND #status = :status',
    FilterExpression='attribute_not_exists(Deleted)',
    ExpressionAttributeNames={'#status': 'Status'},
    ExpressionAttributeValues={':cid': 'cust-123', ':status': 'pending'}
)

Pattern: Versioned Data with Sparse GSI. Store multiple versions of an item using a version key. Only the current version has the CurrentVersion flag, making the GSI index small and efficient for “current” queries.

# Historical versions without CurrentVersion
table.put_item(Item={
    'EntityId': 'user-profile-123',
    'VersionId': 'v1',
    'Data': {'name': 'Alice', 'email': 'alice@v1.com'},
    'UpdatedAt': 1704067200
})

# Current version with CurrentVersion attribute
table.put_item(Item={
    'EntityId': 'user-profile-123',
    'VersionId': 'v2',
    'Data': {'name': 'Alice', 'email': 'alice@v2.com'},
    'UpdatedAt': 1704153600,
    'CurrentVersion': True  # Sparse index only includes this
})

# Query the sparse GSI - only items carrying CurrentVersion are indexed
# (assumes CurrentVersionIndex uses EntityId as partition key and
# CurrentVersion as sort key, so historical versions never enter it)
response = table.query(
    IndexName='CurrentVersionIndex',
    KeyConditionExpression='EntityId = :eid',
    ExpressionAttributeValues={':eid': 'user-profile-123'}
)

Sparse GSI trade-offs:

Aspect | Consideration
GSI size | Contains only items with the key attribute - can be much smaller than the main table
Write amplification | Every write carrying the GSI key attribute consumes GSI write capacity
Null handling | Items without the GSI key are invisible - you cannot query for “missing” items
TTL interaction | TTL-deleted items remain in the GSI until TTL processing removes them
Update behavior | Removing the GSI key attribute from an item removes it from the GSI

Common sparse GSI use cases:

  • Orders with issues (separate index from all orders)
  • Active subscriptions vs expired
  • Featured products flag
  • User notifications that need efficient unread queries
  • Entities pending approval vs approved
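The examples above assume an IssueIndex GSI already exists. Creating it is an ordinary GSI creation where the sparse attribute is the key; names follow the hypothetical IssueFlag example, and the sketch assumes an on-demand table (a provisioned table would also need ProvisionedThroughput on the index):

```python
def add_issue_index(client, table_name='Orders'):
    """Add a sparse GSI keyed on the optional IssueFlag attribute."""
    return client.update_table(
        TableName=table_name,
        AttributeDefinitions=[{'AttributeName': 'IssueFlag',
                               'AttributeType': 'S'}],
        GlobalSecondaryIndexUpdates=[{
            'Create': {
                'IndexName': 'IssueIndex',
                # Only items that carry IssueFlag land in this index
                'KeySchema': [{'AttributeName': 'IssueFlag', 'KeyType': 'HASH'}],
                'Projection': {'ProjectionType': 'KEYS_ONLY'},
            }
        }]
    )
```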

Streams and Triggers

DynamoDB Streams captures a time-ordered sequence of item changes:

  • INSERT: New item added
  • MODIFY: Item updated
  • REMOVE: Item deleted

DynamoDB Streams Limitations

Streams are powerful but come with constraints that matter at scale:

Ordered per-partition only:

DynamoDB Streams maintains arrival order only within a single partition key. If you have events for UserId=A and UserId=B interleaved in time, the stream guarantees ordering for A’s events among themselves and B’s events among themselves, but not across A and B. This matters for event processing where causality across partition keys matters.

# Stream record example - note the partition key determines ordering scope
for record in stream_records:
    event_source_arn = record['eventSourceARN']  # Table and stream ARN
    event_name = record['eventName']             # INSERT/MODIFY/REMOVE
    partition_key = record['dynamodb']['Keys']['UserId']['S']  # Ordering scope
    sequence_number = record['dynamodb']['SequenceNumber']
    # Events for same UserId arrive in sequence; cross-partition ordering is not guaranteed

24-hour retention cap:

Streams retain only 24 hours of data. For longer retention, you must pipe events to Kinesis Data Streams, Kafka, or a custom consumer that writes to persistent storage.

# Kinesis Data Streams as a longer-retention alternative
import boto3

kinesis = boto3.client('kinesis')
dynamodb = boto3.client('dynamodb')

# Enable DynamoDB Streams to Kinesis integration
dynamodb.enable_kinesis_streaming_destination(
    TableName='Users',
    StreamArn='arn:aws:kinesis:us-east-1:123456789:stream/user-events'
)

Shard consumption parallelism:

A stream’s shards determine your consumer parallelism: each shard supports one concurrent Lambda invocation by default, and shard count roughly tracks the table’s partition count. You cannot split stream shards yourself; DynamoDB manages them. To raise per-shard concurrency, set Lambda’s ParallelizationFactor (up to 10 concurrent batches per shard) on the event source mapping.
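A sketch of wiring the stream to Lambda with extra per-shard concurrency (ARNs and names are placeholders):

```python
def attach_stream_consumer(lambda_client, stream_arn, function_name):
    """Map a DynamoDB stream to Lambda with extra per-shard concurrency."""
    return lambda_client.create_event_source_mapping(
        EventSourceArn=stream_arn,
        FunctionName=function_name,
        StartingPosition='LATEST',
        BatchSize=100,
        ParallelizationFactor=10,  # Up to 10 concurrent batches per shard
    )
```

Note that ordering per partition key is preserved even with a higher factor, because records for one key stay in one batch sequence.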

Limitation | Impact | Mitigation
Per-partition ordering only | Cross-partition event ordering not guaranteed | Design partition keys around event co-ordering needs
24-hour retention | Long-term event history unavailable | Pipe to Kinesis/Kafka for archival
Shard-level parallelism | High-partition tables process serially per shard | Raise Lambda ParallelizationFactor (up to 10 per shard)
No dead-letter queue | Failed batches lost after retry exhaustion | Set an on-failure destination (SQS/SNS) on the event source mapping
GSI updates not tracked | GSI changes do not generate stream events | Query GSI separately if GSI change capture needed

Build trigger-like behavior with Lambda:

# Lambda function triggered by DynamoDB stream
from boto3.dynamodb.types import TypeDeserializer

deserializer = TypeDeserializer()

def lambda_handler(event, context):
    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            image = record['dynamodb']['NewImage']
            # Convert from DynamoDB wire format ({'S': ...}) to Python values
            new_item = {k: deserializer.deserialize(v) for k, v in image.items()}
            send_welcome_email(new_item['Email'])

Streams retain 24 hours of changes, fine for most use cases. For longer retention, pipe events to Kinesis or a custom consumer.

TTL Behavior and Tombstone Garbage Collection

DynamoDB TTL automatically deletes items after a specified timestamp, but the deletion process works differently than a direct DeleteItem call.

How TTL deletion works:

  1. When an item’s TTL attribute passes the current time, DynamoDB marks it as expired
  2. The expired item can still appear in reads, queries, and scans until the background deletion runs - filter expired items out in application code if staleness matters
  3. A background process removes the item within 48 hours (typically minutes)
  4. The deletion generates a REMOVE event in DynamoDB Streams
# Setting TTL on an item - TTL attribute must be a Unix timestamp in seconds
import time

table.put_item(
    Item={
        'UserId': 'user-123',
        'Email': 'user@example.com',
        'SessionData': '...',
        'TTL': int(time.time()) + 86400 * 30  # Expire in 30 days
    }
)

# Check if TTL is enabled on the table
table.meta.client.describe_time_to_live(TableName='Users')

Tombstone behavior:

Unlike Cassandra, where deletions create tombstones that persist until compaction, DynamoDB exposes no tombstones to clients: the expired item simply lingers until the background deletion removes it, at most 48 hours later.

Garbage collection impact:

  • TTL deletions do not consume write capacity
  • Items already expired but not yet removed can still appear in backups and exports
  • Point-in-time recovery includes TTL-deleted items within the retention window
  • Global Tables replicate TTL deletions across regions

Common TTL pitfalls:

Pitfall | Impact | Mitigation
TTL attribute in past | Item becomes eligible for deletion immediately | Always set TTL to future timestamps
TTL on GSI partition key | GSI updates queued, delayed deletion | Avoid TTL on frequently indexed attributes
TTL during backups | Backup may include soon-to-expire items | Check TTL values after restore
Timezone confusion | TTL is Unix epoch seconds, not local time | Use int(time.time()) or equivalent

TTL is approximate: DynamoDB guarantees items expire within 48 hours of the TTL timestamp, though in practice it usually happens within minutes. Do not use TTL for time-sensitive deletions like session expiry where exact timing matters.
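Because expiry is approximate, reads can encounter items whose TTL has already passed. A common guard is to filter them out at query time (table and attribute names are illustrative):

```python
import time

def query_live_sessions(table, user_id):
    """Drop items whose TTL already passed but are not yet deleted."""
    return table.query(
        KeyConditionExpression='UserId = :uid',
        # Expired-but-present items are filtered out server-side
        FilterExpression='#ttl > :now',
        ExpressionAttributeNames={'#ttl': 'TTL'},
        ExpressionAttributeValues={':uid': user_id, ':now': int(time.time())},
    )['Items']
```

The filter is applied after the read, so the expired items still consume read capacity; the guard buys correctness, not cost savings.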


Global Tables

Global Tables replicate across AWS regions:

  • Active-active replication for low-latency global access
  • Automatic conflict resolution (last-writer-wins by default)
  • Sub-second replication between regions
# Create the table with streams enabled (required for Global Tables)
dynamodb_client.create_table(
    TableName='Users',
    AttributeDefinitions=[{'AttributeName': 'UserId', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'UserId', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',
    StreamSpecification={'StreamEnabled': True,
                         'StreamViewType': 'NEW_AND_OLD_IMAGES'}
)

# Add a replica region (Global Tables version 2019.11.21)
dynamodb_client.update_table(
    TableName='Users',
    ReplicaUpdates=[{'Create': {'RegionName': 'eu-west-1'}}]
)

Global Tables replicate asynchronously: each region accepts writes locally and propagates them to the other replicas, with last-writer-wins resolving concurrent updates. The result is eventual consistency across regions.


Backup and Restore

DynamoDB provides three backup mechanisms with different trade-offs: on-demand backups, point-in-time recovery, and cross-region backup replication.

On-Demand Backups

On-demand backups create a full snapshot of the table at a point in time. They do not affect read or write performance and are retained until you delete them.

import boto3

dynamodb = boto3.client('dynamodb')

# Create on-demand backup
response = dynamodb.create_backup(
    TableName='Orders',
    BackupName='Orders-backup-2024-01-15'
)

backup_arn = response['BackupDetails']['BackupArn']
backup_creation_date = response['BackupDetails']['BackupCreationDateTime']

# Restore from backup
dynamodb.restore_table_from_backup(
    TargetTableName='Orders-Restored',
    BackupArn=backup_arn
)

# List all backups
backups = dynamodb.list_backups(TableName='Orders')
for backup in backups['BackupSummaries']:
    print(f"{backup['BackupName']}: {backup['BackupStatus']}")

On-demand backups capture the entire table including TTL-deleted items still within the 48-hour window.

Point-in-Time Recovery

Point-in-time recovery (PITR) continuously backs up your table with second-level granularity for the last 35 days. Enable it per table:

# Enable point-in-time recovery
dynamodb.update_continuous_backups(
    TableName='Orders',
    PointInTimeRecoverySpecification={
        'PointInTimeRecoveryEnabled': True
    }
)

# Check PITR status
status = dynamodb.describe_continuous_backups(TableName='Orders')
pitr_status = status['ContinuousBackupsDescription']['PointInTimeRecoveryDescription']['PointInTimeRecoveryStatus']
print(f"PITR status: {pitr_status}")

# Restore to specific time (must be within last 35 days)
from datetime import datetime

dynamodb.restore_table_to_point_in_time(
    SourceTableName='Orders',
    TargetTableName='Orders-PITR-Restore',
    RestoreDateTime=datetime(2024, 1, 10, 12, 0, 0)
)

PITR has no performance impact and does not consume write capacity. Restoration typically takes minutes to hours depending on table size.

Cross-Region Backup with AWS Backup

For disaster recovery across regions, AWS Backup manages cross-region backup copies:

# Create cross-region backup plan with AWS Backup
backup = boto3.client('backup')

# Define backup rule with copy to another region
backup_plan = {
    'BackupPlanName': 'DynamoDB-CrossRegion-Backup',
    'Rules': [{
        'RuleName': 'CopyToDrRegion',
        'TargetBackupVaultName': 'backup-vault-us-east-1',
        'CopyActions': [{
            'DestinationBackupVaultArn': 'arn:aws:backup:us-west-2:123456789:backup-vault/dr-vault',
            'Lifecycle': {'DeleteAfterDays': 30}
        }],
        'Lifecycle': {'DeleteAfterDays': 7}
    }]
}

backup.create_backup_plan(BackupPlan=backup_plan)

Backup Comparison Table

Feature | On-Demand Backup | Point-in-Time Recovery | AWS Backup (Cross-Region)
Retention | Until deleted | 35 days | Configurable (e.g., 30 days)
Granularity | Full table snapshot | Second-level within window | Configurable schedule
Performance impact | None | None | None
Cost | Per-backup storage | Continuous backup storage | Per-copy storage + transfer
Cross-region | Manual export to S3 | Not native | Native managed copies
Restore time | Minutes to hours | Minutes to hours | Minutes to hours

Export to S3 for Long-Term Archival

For compliance or analytics needs, export table data to S3:

# Export to S3 (requires point-in-time recovery enabled on the table)
from datetime import datetime

dynamodb.export_table_to_point_in_time(
    TableArn='arn:aws:dynamodb:us-east-1:123456789:table/Orders',
    ExportTime=datetime(2024, 1, 15, 0, 0, 0),
    S3Bucket='my-dynamodb-exports',
    S3Prefix='exports/orders-2024-01-15/',
    ExportFormat='DYNAMODB_JSON'
)

Exports do not impact production performance and read from backup storage, not the live table.


DAX: DynamoDB Accelerator In-Memory Cache

DAX is a fully managed, in-memory cache for DynamoDB. It sits in front of DynamoDB and caches read results, dramatically reducing read latency for frequently accessed items.

When DAX Makes Sense

  • Read-heavy workloads with hot data that does not change frequently
  • Microsecond latency requirements that DynamoDB’s single-digit milliseconds cannot meet
  • Burst read patterns where you need to handle traffic spikes without throttling

When DAX is Not the Answer

  • Write-heavy workloads - DAX does not cache writes
  • Real-time data requirements - DAX serves cached data until the item’s TTL expires or a write goes through DAX
  • Strongly consistent reads only - DAX returns eventually consistent data

DAX Architecture

graph LR
    A[Application] --> B[DAX Cluster]
    B --> C[Node 1 Cache]
    B --> D[Node 2 Cache]
    B --> E[Node 3 Cache]
    C --> F[DynamoDB]
    D --> F
    E --> F

DAX clusters run across multiple nodes for HA, with automatic failover if a node goes down. Cached items are updated when writes go through DAX; otherwise they age out via a configurable item TTL (five minutes by default).

Code Example

# DAX uses the amazondax client, which mirrors the boto3 DynamoDB API
from amazondax import AmazonDaxClient

# Cluster endpoint comes from the DAX console (placeholder shown here)
dax = AmazonDaxClient.resource(
    endpoint_url='daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com'
)

# Same get_item call, but now goes through DAX cache
table = dax.Table('Orders')
response = table.get_item(Key={'OrderId': '123'})

# DAX behavior:
# - Cache hit: returns from memory (microseconds)
# - Cache miss: fetches from DynamoDB, caches the result, returns it
# - Writes through DAX update the cache; writes that bypass DAX are
#   only picked up once the cached item's TTL expires

DAX vs ElastiCache Comparison

Aspect | DAX | ElastiCache (Redis)
Integration | Native DynamoDB API | Requires code changes
Cache freshness | Write-through via DAX; item TTL otherwise | Manual invalidation logic
Strong consistency | Not supported (eventual only) | Possible with write-through
Item-level caching | Yes | Yes
Query caching | Yes (separate query cache for Query/Scan) | Yes
Cluster management | Fully managed | Requires cluster management

Bottom line: DAX is the simpler drop-in if you are caching DynamoDB reads. Use ElastiCache if you need to cache computed results or data drawn from multiple sources.


When to Use / When Not to Use DynamoDB

When to Use DynamoDB

DynamoDB makes sense when you need predictable performance at any scale. Single-digit millisecond latency with consistent throughput comes standard, whether you use provisioned capacity or on-demand mode. Serverless is a natural fit — DynamoDB scales to zero and you pay only for what you use, so variable workloads do not require capacity planning. High write throughput plays to DynamoDB’s strengths too; it handles massive write loads from IoT telemetry, event streams, and gaming leaderboards without breaking a sweat. Multi-region replication via Global Tables gives you active-active replication across regions with automatic conflict resolution. And when your access patterns are simple — key-value lookups or document access by partition key and sort key range — DynamoDB is straightforward and efficient.

When Not to Use DynamoDB

DynamoDB is the wrong choice when you need complex queries across multiple entities. There are no JOINs, no aggregations, and no ad-hoc filtering across tables; PostgreSQL or Elasticsearch handle those better. Transactional scope is limited too: TransactWriteItems can span tables within a region but covers at most 100 items per transaction, so larger workflows demand application-level coordination. Full-text search is not available natively, so you would need OpenSearch or Elasticsearch as a sidecar. The 400KB item size limit rules out large objects, so media and big blobs belong in S3 instead. And if SQL compatibility matters, PartiQL covers only a small subset, making migration from relational databases painful.
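The standard workaround for the 400KB limit is to keep the blob in S3 and store only a pointer in the item. A sketch (bucket, key format, and attribute names are placeholders):

```python
def put_large_item(table, s3, item_id, blob, bucket='my-app-blobs'):
    """Keep the payload in S3; store only a pointer in DynamoDB."""
    key = f'blobs/{item_id}'
    s3.put_object(Bucket=bucket, Key=key, Body=blob)
    table.put_item(Item={
        'ItemId': item_id,
        'BlobBucket': bucket,  # Pointer to the payload, not the payload itself
        'BlobKey': key,
        'BlobSize': len(blob),
    })
    return key
```

Readers fetch the item first, then the S3 object; the item stays small, and the blob gets S3's size limits and lifecycle policies.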

Production Failure Scenarios

Failure | Impact | Mitigation
Hot partition key | One partition absorbs disproportionate traffic; throttles while other partitions sit idle | Use partition key design that distributes traffic (random suffix, composite keys); split high-traffic items across multiple keys
Provisioned capacity exceeded | Requests throttled with ProvisionedThroughputExceededException | Enable auto-scaling; set appropriate RCU/WCU; use exponential backoff with jitter on retries
On-demand mode cost spike | Unpredictable traffic patterns can cause unexpectedly high bills | Set billing alerts; use provisioned capacity for predictable baseline + on-demand for spikes
DynamoDB outage in single region | Applications depending on that region fail until DNS failover | Use Global Tables for multi-region active-active; implement cross-region read replicas if active-active not needed
Cache staleness with DAX | DAX serves stale data after a write that bypassed the cache | Route writes through DAX (write-through); keep the item cache TTL short
Transaction conflict | Concurrent transactions modifying the same item both fail | Design item access patterns to minimize conflicts; use conditional writes with exponential backoff
TTL expired but not deleted immediately | Items remain readable briefly after TTL expires | TTL is approximate (within 48 hours); if strict expiry needed, filter or delete in application logic

Summary

DynamoDB embraces eventual consistency to achieve scale and availability. Predictable performance, managed operations, and flexible data modeling make it a solid choice for many modern applications.

Key points:

  • Partition key design matters for performance and cost
  • Eventually consistent reads halve RCU costs
  • GSIs enable query flexibility at the cost of additional throughput
  • On-demand mode simplifies capacity planning for variable workloads
  • Global Tables give active-active replication across regions

DynamoDB is not the right fit for complex queries (look at PostgreSQL), large multi-table transactional workflows (Aurora), or teams who cannot accept vendor lock-in. For high-throughput, access-pattern-driven data with predictable scaling needs, DynamoDB performs where few alternatives can.

The 2017 DynamoDB paper clarified the hard choices: always writable, scalable, and simple came at the cost of strong consistency. The managed service keeps those trade-offs while adding operational simplicity the original Dynamo lacked.
