DEV Community

Terraform — Deploying Multi-Region AWS RDS Cluster with Failover Setup using Terraform

In this article, we will go through how to deploy a multi-region AWS RDS cluster with an automatic failover setup using Terraform. By leveraging AWS RDS (Relational Database Service) and Terraform, we can set up highly available, fault-tolerant database architectures across multiple regions. This ensures that your applications remain online even in the event of regional outages, providing resilience and scalability for critical applications.

Prerequisites

Before starting, ensure you have the following:

  • AWS Account: An active AWS account with the necessary permissions.
  • AWS CLI: AWS CLI should be configured with your AWS credentials.
  • Terraform Installed: Terraform must be installed on your local machine. You can download it from Terraform’s official site.

So, let’s start!

→ Create a “provider.tf”

The provider file tells Terraform which cloud provider and regions to use. Here we define two AWS providers: a default one for the primary region, and an aliased one for the secondary region.

provider "aws" {
  region = local.region_0
  profile = "<profile-name>"

  default_tags {
    tags = {
      Owner       = "primary"
      Project     = "AWS Multi Region rds with active/active setup"
      Provisioner = "Terraform"
    }
  }
}

provider "aws" {
  alias  = "secondory"
  region = local.region_1
  profile = "<profile-name>"

  default_tags {
    tags = {
      Owner       = "secondory"
      Project     = "AWS Multi Region rds with active/active setup"
      Provisioner = "Terraform"
    }
  }
}
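Since later files also use the random provider (for random_password), it is worth pinning Terraform and provider versions in a versions.tf. A minimal sketch (the version constraints here are illustrative, not prescriptive):

```hcl
# versions.tf: pin Terraform and provider versions (constraints are examples)
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.5"
    }
  }
}
```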

→ Create “data.tf”

Terraform data sources let you fetch information from your cloud provider and use it within your configuration. They are particularly useful for looking up values you don't want to hard-code, such as the list of availability zones in a region, or checking that a specific Amazon Machine Image (AMI) ID exists before creating an EC2 instance. Here we look up the availability zones of both regions, the current caller identity, and the AWS-managed RDS KMS key in each region.

data "aws_availability_zones" "region_0" {}
data "aws_availability_zones" "region_1" {
  provider = aws.secondory
}
data "aws_caller_identity" "this" {}
data "aws_kms_key" "rds_0" {
  key_id = "alias/aws/rds"
}
data "aws_kms_key" "rds_1" {
  provider = aws.secondory
  key_id   = "alias/aws/rds"
}

→ Create “locals.tf”

Terraform locals are named values that you can assign once and reuse throughout your code. Their main purpose is to reduce duplication within the Terraform configuration, and because the same value is no longer repeated, they also make the code more readable.

locals {
  environment = replace(var.environment_input, "_", "-")

availability_zones_0 = data.aws_availability_zones.region_0.names
  public_subnets_0 = [
    cidrsubnet(local.vpc_cidr_0, 6, 0),
    cidrsubnet(local.vpc_cidr_0, 6, 1),
    cidrsubnet(local.vpc_cidr_0, 6, 2),
  ]
  private_subnets_0 = [
    cidrsubnet(local.vpc_cidr_0, 6, 4),
    cidrsubnet(local.vpc_cidr_0, 6, 5),
    cidrsubnet(local.vpc_cidr_0, 6, 6),
  ]
  database_subnets_0 = [
    cidrsubnet(local.vpc_cidr_0, 6, 7),
    cidrsubnet(local.vpc_cidr_0, 6, 8),
    cidrsubnet(local.vpc_cidr_0, 6, 9),
  ]

  availability_zones_1 = data.aws_availability_zones.region_1.names
  public_subnets_1 = [
    cidrsubnet(local.vpc_cidr_1, 6, 0),
    cidrsubnet(local.vpc_cidr_1, 6, 1),
    cidrsubnet(local.vpc_cidr_1, 6, 2),
  ]
  private_subnets_1 = [
    cidrsubnet(local.vpc_cidr_1, 6, 4),
    cidrsubnet(local.vpc_cidr_1, 6, 5),
    cidrsubnet(local.vpc_cidr_1, 6, 6),
  ]
  database_subnets_1 = [
    cidrsubnet(local.vpc_cidr_1, 6, 7),
    cidrsubnet(local.vpc_cidr_1, 6, 8),
    cidrsubnet(local.vpc_cidr_1, 6, 9),
  ]

  database_username = "aurora_admin"
  database_password = "aurora_admin123" # demo only; generate or inject this in real setups

  region_0 = "us-west-1"
  region_1 = "us-west-2"

  vpc_cidr_0 = "20.1.0.0/16"
  vpc_cidr_1 = "30.2.0.0/16"

  vpc_route_tables_0 = flatten([module.vpc_0.private_route_table_ids, module.vpc_0.public_route_table_ids])
  vpc_route_tables_1 = flatten([module.vpc_1.private_route_table_ids, module.vpc_1.public_route_table_ids])
}
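To see what the cidrsubnet calls produce, you can evaluate a few of them in terraform console. With vpc_cidr_0 = "20.1.0.0/16" and 6 extra network bits, each subnet is a /22 (1,024 addresses):

```hcl
# terraform console
> cidrsubnet("20.1.0.0/16", 6, 0)
"20.1.0.0/22"     # first public subnet
> cidrsubnet("20.1.0.0/16", 6, 4)
"20.1.16.0/22"    # first private subnet
> cidrsubnet("20.1.0.0/16", 6, 7)
"20.1.28.0/22"    # first database subnet
```

Note that netnum 3 is skipped between the public and private ranges, which simply leaves a spare /22 for future use.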

Terraform Locals vs Variables

How does Terraform local differ from a Terraform variable?

The first difference is scope. A local is only accessible within the module that declares it, whereas a Terraform variable is part of the module's interface and can be set from outside (via tfvars files, -var flags, or a calling module).

Another thing to note is that a local can be assigned any expression, while a variable only receives a value from user input. This makes it easy to assign an expression's result to a local once and use it throughout the code instead of repeating the expression itself. Once assigned, a local's value does not change.

Note: For demonstration, I have used both locals and variables; however, you can modify them according to your use case.
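To make the contrast concrete, this is the pattern the project itself uses: a variable supplies the raw input from outside, and a local derives a value from it with an expression:

```hcl
# Variables are set from outside the module (tfvars files, -var flags, callers).
variable "environment_input" {
  type    = string
  default = "aws_multi_region"
}

# Locals are computed with expressions and reused within the module.
locals {
  environment = replace(var.environment_input, "_", "-") # "aws-multi-region"
}
```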

→ Create “variables.tf”

All variables will be in this file.

variable "environment_input" {
  description = "Environment name we are building"
  type        = string
  default     = "aws_multi_region"
}

variable "tags" {
  description = "Default tags for this environment"
  type        = map(string)
  default     = {}
}

If you supply values through terraform.tfvars, the variable block itself only needs the declaration (a description and type); the default becomes optional.

→ Create a “terraform.tfvars”

To persist variable values, create a file named terraform.tfvars with the following contents:

environment_input = "aws_multi_region_aurora"
tags              = {}

Terraform automatically loads any file named terraform.tfvars, or matching *.auto.tfvars, in the current directory to populate variables. If the file is named anything else, pass it explicitly with the -var-file flag.

I don’t recommend committing usernames and passwords to version control; instead, keep them in a local, git-ignored variables file and load it with -var-file.
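For example, assuming you promote the database credentials from locals to input variables, a git-ignored secrets file (the filename here is just an illustration) could look like this and be loaded explicitly:

```hcl
# secrets.tfvars: keep this file in .gitignore
# Load it with: terraform plan -var-file="secrets.tfvars"
database_username = "aurora_admin"
database_password = "change-me"
```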

→ Create a “vpc.tf”

For this demo, I'm using an external VPC module to show how we can consume third-party modules in Terraform, but you can also follow this article using my own VPC module for the setup.

data "aws_region" "current" {}

module "vpc_0" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.13.0"

  azs                                             = local.availability_zones_0
  cidr                                            = local.vpc_cidr_0
  create_database_subnet_group                    = true
  create_flow_log_cloudwatch_iam_role             = true
  create_flow_log_cloudwatch_log_group            = true
  database_subnets                                = local.database_subnets_0
  enable_dhcp_options                             = true
  enable_dns_hostnames                            = true
  enable_dns_support                              = true
  enable_flow_log                                 = true
  enable_ipv6                                     = false
  # enable_nat_gateway                              = true
  flow_log_cloudwatch_log_group_retention_in_days = 7
  flow_log_max_aggregation_interval               = 60
  name                                            = local.environment
  # one_nat_gateway_per_az                          = false
  private_subnet_suffix                           = "private"
  private_subnets                                 = local.private_subnets_0
  public_subnets                                  = local.public_subnets_0
  # single_nat_gateway                              = true
  tags                                            = var.tags
}

module "vpc_1" {
  providers = { aws = aws.secondory }
  source    = "terraform-aws-modules/vpc/aws"
  version   = "~> 5.13.0"

  azs                                             = local.availability_zones_1
  cidr                                            = local.vpc_cidr_1
  create_database_subnet_group                    = true
  create_flow_log_cloudwatch_iam_role             = true
  create_flow_log_cloudwatch_log_group            = true
  database_subnets                                = local.database_subnets_1
  enable_dhcp_options                             = true
  enable_dns_hostnames                            = true
  enable_dns_support                              = true
  enable_flow_log                                 = true
  enable_ipv6                                     = false
  # enable_nat_gateway                              = true
  flow_log_cloudwatch_log_group_retention_in_days = 7
  flow_log_max_aggregation_interval               = 60
  name                                            = local.environment
  # one_nat_gateway_per_az                          = false
  private_subnet_suffix                           = "private"
  private_subnets                                 = local.private_subnets_1
  public_subnets                                  = local.public_subnets_1
  # single_nat_gateway                              = true
  tags                                            = var.tags
}


→ Create a “rds.tf”
This file will contain the necessary Terraform configuration for the RDS Cluster setup.

resource "aws_rds_global_cluster" "this" {
  global_cluster_identifier = local.environment
  storage_encrypted         = true
  engine                    = "aurora-postgresql"
  engine_version            = "15.5"
  database_name             = "multiregion"
}

module "aurora_primary" {
  source = "terraform-aws-modules/rds-aurora/aws"

  name                      = "${local.environment}-${local.region_0}"
  database_name             = aws_rds_global_cluster.this.database_name
  engine                    = aws_rds_global_cluster.this.engine
  engine_version            = aws_rds_global_cluster.this.engine_version
  global_cluster_identifier = aws_rds_global_cluster.this.id
  instance_class            = "db.r6g.large"
  instances                 = { for i in range(2) : i => {} }

  kms_key_id = data.aws_kms_key.rds_0.arn

  publicly_accessible  = true # convenient for the demo; set to false for production databases
  vpc_id               = module.vpc_0.vpc_id
  db_subnet_group_name = module.vpc_0.database_subnet_group_name
  security_group_rules = {
    vpc_ingress = {
      cidr_blocks = concat(
        module.vpc_0.public_subnets_cidr_blocks,
        module.vpc_1.public_subnets_cidr_blocks,
      )
    }
  }

  master_username = local.database_username
  master_password = local.database_password

  skip_final_snapshot = true

  tags = var.tags
}

module "aurora_secondary" {
  source = "terraform-aws-modules/rds-aurora/aws"

  providers = { aws = aws.secondory }

  is_primary_cluster = false

  name                      = "${local.environment}-${local.region_1}"
  engine                    = aws_rds_global_cluster.this.engine
  engine_version            = aws_rds_global_cluster.this.engine_version
  global_cluster_identifier = aws_rds_global_cluster.this.id
  source_region             = local.region_0
  instance_class            = "db.r6g.large"
  instances                 = { for i in range(2) : i => {} }

  kms_key_id = data.aws_kms_key.rds_1.arn

  publicly_accessible  = true # convenient for the demo; set to false for production databases
  vpc_id               = module.vpc_1.vpc_id
  db_subnet_group_name = module.vpc_1.database_subnet_group_name
  security_group_rules = {
    vpc_ingress = {
      cidr_blocks = concat(
        module.vpc_0.public_subnets_cidr_blocks,
        module.vpc_1.public_subnets_cidr_blocks,
      )
    }
  }

  skip_final_snapshot = true

  depends_on = [
    module.aurora_primary
  ]

  tags = var.tags
}

# Generated password (not wired into the cluster above; you can use
# random_password.master.result in place of local.database_password
# to avoid hard-coding credentials).
resource "random_password" "master" {
  length  = 20
  special = false
}

resource "aws_secretsmanager_secret" "rds_credentials" {
  name                    = "${local.environment}-aurora-credentials-multi-region-0"
  description             = "${local.environment} aurora username and password"

  depends_on = [module.aurora_primary]
}

resource "aws_secretsmanager_secret_version" "rds_credentials" {
  secret_id = aws_secretsmanager_secret.rds_credentials.id
  secret_string = jsonencode(
    {
      username = module.aurora_primary.cluster_master_username
      password = module.aurora_primary.cluster_master_password
    }
  )

  depends_on = [module.aurora_primary]
}
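One optional extension, not part of the setup above: the secret is created only in the primary region. If applications in the secondary region need the credentials too, Secrets Manager can replicate the secret for you with a replica block (the resource and name here are hypothetical):

```hcl
# Hypothetical variant of the secret above, replicated to the secondary region
# so applications there can read the credentials locally.
resource "aws_secretsmanager_secret" "rds_credentials_replicated" {
  name        = "${local.environment}-aurora-credentials-replicated"
  description = "${local.environment} aurora username and password"

  replica {
    region = local.region_1
  }
}
```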

Initialize, Plan, and Apply the Terraform Configuration

Initialize Terraform: This command initializes the project, downloads the necessary provider plugins, and sets up the backend for storing the state.

terraform init

Plan Terraform: This command creates an execution plan, showing what actions Terraform will take to reach the desired infrastructure state. It compares the current state with the configuration and highlights the resources to be created, modified, or destroyed, without making any actual changes.

terraform plan

Apply the Configuration: This command applies the changes required to reach the desired state of the configuration.

terraform apply

You’ll be prompted to confirm the changes. Type yes and hit Enter.

Verify the RDS Cluster Configuration

After applying the Terraform configuration, you can verify the multi-region database cluster in the RDS console, or with the AWS CLI command aws rds describe-global-clusters.


Thank you for reading. If you have anything to add, please send a response or add a note!

This article was originally published on Medium: https://medium.com/devops-pro/terraform-deploying-multi-region-aws-rds-cluster-with-failover-setup-using-terraform-89e98da026f7
