Terraform pt 1: Create S3 bucket (DevOps the Hard Way series)
Per the author of the DevOps the Hard Way repository:
The purpose of the Terraform section is to create all of the AWS cloud services you’ll need from an environment/infrastructure perspective to run the Uber application.
Terraform: creating an S3 bucket to store TFSTATE files
The overarching DevOps workflow will use Terraform, an infrastructure-as-code tool that needs a place to store its state data. In this blog post, an S3 bucket is created on AWS for that purpose.
Before creating any infrastructure on AWS, the Terraform CLI must be provided with a configuration file. From the Terraform documentation:
Configuration files you write in Terraform language tell Terraform what plugins to install, what infrastructure to create, and what data to fetch.
The configuration file being used to declare the S3 bucket, main.tf, looks like this:
provider "aws" {
region = "us-east-1"
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "terraform-state-devopsthehardway"
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
Note that the file above is shown exactly as it is provided in the DevOps the Hard Way repository. Before running the commands below, additional lines must be added to the provider block to supply Terraform with the required AWS API credentials.
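As a point of reference, here is a minimal sketch of one common way to do this, assuming a named profile in the local shared credentials file; the profile name is hypothetical, and environment variables such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are an equally common alternative that keeps credentials out of source control:

provider "aws" {
  region = "us-east-1"

  # Hypothetical additions: read credentials from a named profile in the
  # local shared credentials file. The profile name is illustrative.
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "devopsthehardway"
}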
With the modified main.tf saved (including the additions for provider credentials), below are the Terraform CLI commands to be run in the same directory as main.tf, and what they do:
terraform init
This command reads the provider entry in main.tf and downloads the corresponding provider plugin into a .terraform subdirectory of the current directory. Since the provider is "aws", the AWS provider plugin is downloaded (about 450MB as of this writing).
This “provider” is the AWS-specific code that will be used locally by Terraform to communicate with the AWS API and create the required S3 bucket.
Once the terraform init command has completed, the CLI outputs a message stating that Terraform has been successfully initialized.
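One detail the lab's main.tf leaves out: a terraform block with required_providers can pin the provider version, so that init downloads a known release instead of whatever is latest. A minimal sketch, with an illustrative version constraint that is not part of the repo's file:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Illustrative constraint; the lab's file does not pin a version,
      # so init fetches the newest compatible provider release.
      version = "~> 4.0"
    }
  }
}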
terraform plan
This command, per the author of the lab repo, verifies that the configuration specified in main.tf is valid. In this case, the command prints some warnings about deprecated syntax but determines that main.tf is valid.
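For context, those warnings appear because AWS provider v4 and later deprecate the inline versioning and server_side_encryption_configuration blocks in favor of standalone resources. A sketch of the non-deprecated equivalent (resource names chosen for illustration):

# Replaces the inline versioning block on the bucket resource.
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Replaces the inline server_side_encryption_configuration block.
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}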
terraform apply
This command actually creates the real-world infrastructure declared in main.tf. However, when it is run against main.tf as provided by the repository author (and customized with AWS credentials), it returns an error:
Error: creating S3 Bucket (terraform-state-devopsthehardway [...]
BucketAlreadyExists: [...]
It turns out that the template provided by the author of this lab does not work as-is, because it hard-codes the bucket name. S3 bucket names must be globally unique across all AWS accounts, so this main.tf file is not portable: once anyone, anywhere successfully applies the configuration, the name terraform-state-devopsthehardway is taken for everyone else. That is almost certainly what caused the error above.
A quick fix is to append some random characters to the bucket name declared in main.tf, making the name globally unique on AWS.
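Rather than editing the string by hand, the suffix can also be generated by Terraform itself via the hashicorp/random provider. A hedged sketch, with the resource name and suffix length chosen for illustration:

# Generates a short random hex string at apply time.
resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "terraform_state" {
  # Illustrative replacement for the hard-coded name; the versioning and
  # encryption blocks from the original file stay inside this resource.
  bucket = "terraform-state-devopsthehardway-${random_id.suffix.hex}"
}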
After running the plan and apply commands again (this time with a globally unique S3 bucket name in main.tf) and typing yes to confirm changes, this section is complete. The new S3 bucket appears in the specified region for the AWS account whose credentials were supplied.
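Since the whole point of this bucket is to hold tfstate files, later Terraform configurations can point their backend at it. A hypothetical sketch of what that would look like; the bucket name and key below are illustrative, not from the repo:

terraform {
  backend "s3" {
    # Substitute the globally unique bucket name created above.
    bucket = "terraform-state-devopsthehardway-1a2b3c4d"
    key    = "uber-app/terraform.tfstate"
    region = "us-east-1"
  }
}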
In the next post, Terraform is used to create an AWS Elastic Container Registry, where a container image for the Uber API will be saved.