Migrating DynamoDB tables to another AWS account.

This is a summary of an article already published on the AWS Blog, with some clarifications added.

Requirements:

  1. AWS SAM CLI (note: this tool does not seem to play well on macOS; install it on a Linux host)
  2. AWS CLI (note: configure this tool using named profiles for your source and destination credentials, as shown below. Side note: never use the default profile)
  3. Python 3.9 installed on the machine you have the SAM CLI on.
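
For example, a minimal sketch of setting up the two named profiles (the profile names "source" and "destination" are only examples; use whatever names you already have):

# each command prompts for the access key, secret key, and default region of that account
aws configure --profile source
aws configure --profile destination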

Preparation

Building the Change Data Capture (CDC) Tool

git clone https://github.com/aws-samples/cross-account-amazon-dynamodb-replication

Update all Python version references in the ChangeDataCapture folder to 3.9 (e.g. the Lambda Runtime in the SAM template).
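
A rough way to find the spots to change (assuming you are still in the directory where you cloned the repo; the repo layout may differ):

# list every file under ChangeDataCapture that still mentions an older Python runtime
grep -rn "python3" cross-account-amazon-dynamodb-replication/ChangeDataCapture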

cd cross-account-amazon-dynamodb-replication/ChangeDataCapture
sam build

IAM Roles/Policies on the Destination Account

Policy: DynamoDBRemoteAccessPolicy. Substitute <destination-account-id> with your destination account ID, and replace <tablePrefix> with your table prefix (or list the tables individually) to cover the tables you want updated when the source account tables change.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGetItem",
                "dynamodb:BatchWriteItem",
                "dynamodb:PutItem",
                "dynamodb:DescribeTable",
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:Scan",
                "dynamodb:Query",
                "dynamodb:UpdateItem"
            ],
            "Resource": 
                "arn:aws:dynamodb:*:587232818839:table/<tablePrefix>*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "dynamodb:ListTables",
            "Resource": "*"
        }
    ]
}

Role: DynamoDBRemoteAccessRole

Create a new role with "AWS account" as the trusted entity type so that another account can assume the role. Use the source AWS account ID as the trusted account, and attach the "DynamoDBRemoteAccessPolicy" policy you created above. We will use this role later with the CDC tool to update the new tables when the source tables change.
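
If you prefer the CLI over the console, a rough sketch of the same steps (run against the destination account; the local file name remote-access-policy.json and the angle-bracket placeholders are yours to fill in):

# create the managed policy from the JSON above, saved locally as remote-access-policy.json
aws --profile=<destination profile name> iam create-policy \
    --policy-name DynamoDBRemoteAccessPolicy \
    --policy-document file://remote-access-policy.json

# create the role, trusting the source account to assume it
aws --profile=<destination profile name> iam create-role \
    --role-name DynamoDBRemoteAccessRole \
    --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::<source-aws-account-id>:root" },
            "Action": "sts:AssumeRole"
        }]
    }'

# attach the policy to the role
aws --profile=<destination profile name> iam attach-role-policy \
    --role-name DynamoDBRemoteAccessRole \
    --policy-arn arn:aws:iam::<destination-account-id>:policy/DynamoDBRemoteAccessPolicy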

IAM Roles/Policies on the Source Account

The user running the migration will also need access to write the export to S3. Let's create the following policy and attach it to the user doing the work (a CLI sketch follows the policy). Be sure to update "<s3prefix>" with your value.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1605019439671",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::<s3prefix>*"
        }
    ]
}
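
A hedged CLI sketch of creating that policy and attaching it to the user (the policy name MigrationS3ExportPolicy and the local file name are placeholders):

# create the policy from the JSON above, saved locally as s3-export-policy.json
aws --profile=<source profile name> iam create-policy \
    --policy-name MigrationS3ExportPolicy \
    --policy-document file://s3-export-policy.json

# attach it to the IAM user performing the migration
aws --profile=<source profile name> iam attach-user-policy \
    --user-name <username> \
    --policy-arn arn:aws:iam::<source-aws-account-id>:policy/MigrationS3ExportPolicy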

OK, let's do this!

  1. On the destination AWS account, create an S3 bucket to hold the export and attach the following policy directly to the bucket (a CLI sketch follows the policy). Be sure to update all of the values enclosed in angle brackets "<>".
{
    "Version": "2012-10-17",
    "Id": "Policy1605099029795",
    "Statement": [
        {
            "Sid": "Stmt1605098975368",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<source-aws-account-id>:user/<username>"
            },
            "Action": [
                "s3:ListBucket",
                "s3:PutObjectAcl",
                "s3:AbortMultipartUpload",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucketname>",
                "arn:aws:s3:::<bucketname>/*"
            ]
        }
    ]
}
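
A rough sketch of the same from the CLI, assuming the destination profile and a bucket in us-west-2 (adjust the region and names to your setup):

# create the export bucket in the destination account
aws --profile=<destination profile name> s3api create-bucket \
    --bucket <bucketname> \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2

# attach the bucket policy above, saved locally as export-bucket-policy.json
aws --profile=<destination profile name> s3api put-bucket-policy \
    --bucket <bucketname> \
    --policy file://export-bucket-policy.json
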
  2. Before we export to S3, we need to enable the stream on the table.

WARNING: you need to wait at least 5 minutes for the stream to start capturing changes before exporting; anything written after the stream is active will be replayed into the destination tables by the CDC tool later.

aws --profile=<source profile name> dynamodb update-table \
    --table-name ${table_name} \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_IMAGE

echo "Waiting for stream to be enabled..."
sleep 300
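
If you want to confirm the stream is actually on before moving ahead, a quick check (it should show StreamEnabled true and StreamViewType NEW_IMAGE):

aws --profile=<source profile name> dynamodb describe-table \
    --table-name ${table_name} \
    --query 'Table.StreamSpecification'
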
  3. Export the table to S3 after waiting the full 5 minutes.

NOTE: the bucket name is the table name, lowercased and prefixed with "dynamodb"

table_arn="arn:aws:dynamodb:us-west-2:xxxxxxxxx:table/${table_name}"
bucket_name=$(echo $table_name | tr '[:upper:]' '[:lower:]')
bucket_name="dynamodb$bucket_name"
echo "Bucket name: $bucket_name"



aws --profile=<aws profile name of the source account> dynamodb export-table-to-point-in-time \
    --table-arn ${table_arn} \
    --s3-bucket ${bucket_name} \
    --export-format DYNAMODB_JSON \
    --s3-bucket-owner <account number of the account receiving the data>
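
The export can take a while. A hedged sketch for checking on it (the export ARN comes from the output of the command above; the status moves from IN_PROGRESS to COMPLETED):

aws --profile=<aws profile name of the source account> dynamodb describe-export \
    --export-arn <export-arn>
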
  4. Once the export has completed, we need to import the data from S3 into your new tables. Use the AWS Console to import your data.

WARNING: when importing the table dump from S3, it's critical that you create the EXACT same indexes, or the import will fail.

HINT: do not import the entire dump; point the import only at the data folder and select GZIP as the compression (an optional CLI alternative is sketched below).
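
If you prefer to start the import from the CLI instead of the console, a rough sketch (the key schema shown is a single-string partition key placeholder; declare your real key schema and any indexes so they match the source table exactly):

aws --profile=<destination profile name> dynamodb import-table \
    --s3-bucket-source S3Bucket=<bucketname>,S3KeyPrefix=AWSDynamoDB/<export-id>/data/ \
    --input-format DYNAMODB_JSON \
    --input-compression-type GZIP \
    --table-creation-parameters '{
        "TableName": "<table_name>",
        "AttributeDefinitions": [{"AttributeName": "<partition-key-name>", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "<partition-key-name>", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST"
    }'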

  5. Deploy the CDC tool you built earlier, using the source account profile.

sam deploy --profile=<source account> --guided

Follow the prompts and deploy. Once deployed, you need to go to the Lambda console for the new function and ENABLE the trigger, as it is configured disabled by default (a CLI alternative is sketched below).
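
If you'd rather not click through the console, a hedged sketch of enabling the trigger from the CLI (the function name depends on what the SAM stack created; check the sam deploy output):

# find the event source mapping (the DynamoDB stream trigger) created for the function
aws --profile=<source account> lambda list-event-source-mappings \
    --function-name <cdc-function-name>

# enable it using the UUID returned by the previous command
aws --profile=<source account> lambda update-event-source-mapping \
    --uuid <mapping-uuid> \
    --enabled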

  6. Check your results, and you're done!
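
A rough way to sanity-check the copy (note that DynamoDB refreshes ItemCount only about every six hours, so a spot-check of a known item is also worthwhile):

# approximate item counts on each side
aws --profile=<source profile name> dynamodb describe-table \
    --table-name ${table_name} --query 'Table.ItemCount'
aws --profile=<destination profile name> dynamodb describe-table \
    --table-name ${table_name} --query 'Table.ItemCount'

# spot-check a known item on the destination side
aws --profile=<destination profile name> dynamodb get-item \
    --table-name ${table_name} \
    --key '{"<partition-key-name>": {"S": "<known-value>"}}'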

Do you have any questions for us?

Contact us and we'll get back to you as soon as possible.