
ARTICLE | May 7
Deploy cloud environments with Terraform: step-by-step guide
By Jorge Contreras
Terraform makes it very easy to deploy new instances, virtual machines, databases, storage accounts, or any service provided by cloud computing platforms like Azure, AWS, DigitalOcean, or GCP, among others (yes, it’s agnostic).
With Terraform, you can quickly set up development, testing, or production environments in seconds. This means you can eliminate the need to manually create services, configure them, verify necessary libraries or dependencies, and check network configurations. Let’s walk through the steps to create a new environment from scratch.
What is Terraform?
If you’re here, you likely have some knowledge of Terraform, whether you’ve heard of it or have a general understanding of its purpose. Officially, Terraform is defined as “the automation of infrastructure to provision and manage resources in any cloud or data center.” In other words, it’s a service that automates infrastructure deployment using code (IaC or Infrastructure as Code) in any cloud computing service.
Practical guide: deploying your own AI service in the cloud
Multiple reasons exist to create your own AI service: data privacy, user security, costs, greater simplicity or complexity (pick your poison), and specific model selection, among many others. This can apply to you and many of your clients, so it’s a good idea to automate the deployment to allow your team to focus only on developing and evolving your solution.
In this example, we’ll deploy a backend service on an instance, a frontend service in a storage account, a database to store data permanently, and a content delivery system for better performance. This example can be used for any type of product, but in this case, we have a simple chatbot service with a backend responsible for generating responses, a frontend with which the user interacts, and a small database to store data permanently.

A web solution can be as simple as the one in the image, or it can become much more complicated by adding more instances, external notification and messaging services, a cache database, load balancers, etc. However, that’s not our focus in this article. For our exercise, we’ll start by creating an infrastructure repository.
Step 1: Create and instantiate a Terraform repository for IaC
As we discussed, we don’t want to repeat the task of creating instances, servers, or services every time we deploy a solution for a specific client. We want to automate the process so that everything is created with a single script. Terraform comes to our rescue here, and our first task is to create the project in a directory. In other words, we’ll have a specific directory for the infrastructure:
// Project folder directory
.
├── backend-fastapi-chatbot
├── frontend-react-chatbot
└── infrastructure-chatbot
We will save our code in the infrastructure-chatbot directory, so we will go to the folder and work from there. First, we install Terraform following the instructions in the Terraform portal for our OS. There’s no need to go into detail here, as the instructions are quite simple.
For this guide, we will be using the AWS CLI. We need to follow these instructions to install it. We will also configure our computer profile with the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables, which act like the username and password to call the AWS API. We can use the default profile or create one with another name. I will use the name tutorial-terraform-aws to avoid confusing it.
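As a sketch, configuring that named profile with the AWS CLI looks like this (the credential values are placeholders for your own keys):

```shell
# Verify the AWS CLI is installed
aws --version

# Create a named profile; you will be prompted for the access key ID,
# secret access key, default region, and output format
aws configure --profile tutorial-terraform-aws

# Alternatively, export the credentials as environment variables
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
```

Either approach gives Terraform the credentials it needs to call the AWS API on your behalf.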
The next step would be to prepare this repository for Git by running the git init command from within the infrastructure directory. However, we’ll skip this part, since this example is contained within a single repository that includes the services.
We also create a README.md to store usage instructions, version information, and anything else relevant.
Step 2: Create the infrastructure project structure
While we have initialized our project or repository, we haven’t finished organizing the infrastructure directories. Since Terraform is a tool for managing environment infrastructure (servers, databases, networks, functions, etc.), we’ll start with a folder called envs (for environments) and another called modules, and within envs, we’ll create the production and staging folders.
// Folder structure so far
.
├── README.md
├── envs
│   ├── production
│   └── staging
│       └── provider.tf
└── modules
Next, inside the staging folder, we’ll create our first Terraform (.tf) file, which is the provider.tf file. A provider is, in general terms, the platform whose resources Terraform can create, manage, or delete. In code terms, it is the plugin that tells Terraform where to connect and what to configure on that platform.
// 'provider.tf' file
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Configure the AWS provider
provider "aws" {
  region  = "us-west-2"
  profile = "tutorial-terraform-aws"
}
With this, we are already informing Terraform that we will be using AWS to create services, the region where we want to deploy the resources, the AWS CLI profile we want to use, and the version of the provider plugin.
Let’s tweak the code in this file a bit more to introduce the backend block. Two things are stored here:
- state data: to track the resources you manage.
- state locking (optional): to lock the state of resources during writing or updating. This helps prevent resource corruption when multiple developers try to manipulate the infrastructure through Terraform at the same time.
After this update, our code looks as follows:
// 'provider.tf' updated file
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "tutorial-terraform-aws-staging"
    key    = "terraform.tfstate"
    region = "us-west-2"
  }
}

# Configure the AWS provider
provider "aws" {
  region  = "us-west-2"
  profile = "tutorial-terraform-aws"
}
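If you later want the optional state locking mentioned above, the common approach with the S3 backend is a DynamoDB table. A hedged sketch, assuming a table named terraform-locks that already exists with a string partition key called LockID:

```hcl
backend "s3" {
  bucket = "tutorial-terraform-aws-staging"
  key    = "terraform.tfstate"
  region = "us-west-2"

  # Assumed table name; the table must have a string
  # partition key named "LockID" for locking to work
  dynamodb_table = "terraform-locks"
}
```

With this in place, Terraform acquires a lock before any write to the state, so two developers cannot corrupt it by applying changes at the same time.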
Step 3: Create the resources (in code)
How do we continue? What’s the modules directory for?
Modules are directories of .tf files, or pieces of code, that contain the configuration of a set of resources. As we mentioned earlier, our project has a backend, which we’ll deploy on an EC2 instance; a frontend that will be deployed in an S3 bucket (another bucket, configured for static HTML); a database in an AWS RDS resource; and a CloudFront configuration for the content delivery service.
So, let’s start with one module: the EC2 server module. We know that to create a resource of this type from the console or AWS CLI, we need at least certain data, and we can add a lot of optional information to customize our instance. For this case, we’ll create a folder named after the service type, backend-server, and create configuration files that will manage the basic information, either within the file itself or through an external variable (we’ll discuss these variables later).
It’s the classic “chicken or egg” question: When starting an Infrastructure as Code (IaC) project, should you first set up the S3 bucket (with versioning enabled) to store your Terraform state, or should you create it with Terraform itself?
There are a couple of ways to approach this. Some suggest initially skipping the S3 backend setup, deploying your resources, and then adding the S3 bucket with versioning, finally updating the Terraform configuration to use it. However, in this tutorial, we’ll assume the S3 bucket is already in place. Creating it is straightforward, whether you use the AWS Management Console or the AWS Command Line Interface (CLI), and it is a prerequisite for moving forward. This approach saves you from having to migrate state from local storage to S3 after you’ve already deployed.
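As a rough sketch, creating that state bucket with the AWS CLI could look like this (the bucket name and region mirror the ones used in this guide):

```shell
# Create the bucket that will hold the Terraform state
aws s3api create-bucket \
  --bucket tutorial-terraform-aws-staging \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2 \
  --profile tutorial-terraform-aws

# Enable versioning so earlier state files can be recovered
aws s3api put-bucket-versioning \
  --bucket tutorial-terraform-aws-staging \
  --versioning-configuration Status=Enabled \
  --profile tutorial-terraform-aws
```

Versioning is worth the small extra cost: if a bad apply ever corrupts the state file, you can roll back to a previous version of the object.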
// Directory folder for my backend module
.
├── backend-server
│   ├── ec2.tf
│   ├── eip.tf
│   ├── outputs.tf
│   ├── security_group.tf
│   ├── security_group_rule.tf
│   ├── variables.tf
│   └── vpc.tf
… Other services
Each of these files will have some configuration related to the EC2 instance. This “backend-server” folder functions as a module in Terraform and groups certain resources that can be reused. By simply calling the backend-server module and assigning values to the predefined variables, we can create any other similar service. There are two important files to keep in mind:
- outputs.tf: defines the output values exposed once the resources are configured. These values can be reused by other modules or resources.
- variables.tf: defines the input variables that the resources use for their configuration and deployment.
File examples:
// 'ec2.tf'
resource "aws_instance" "api_server" {
  ami                    = var.ec2_ami
  instance_type          = var.ec2_instance_type
  key_name               = var.ec2_keyname
  vpc_security_group_ids = [aws_security_group.sg.id]
  tags                   = var.ec2_tags
}

// 'eip.tf'
resource "aws_eip" "api_server_eip" {
  domain = "vpc"
  tags = {
    Env  = "Dev"
    Name = "API REST Server EIP"
  }
}
// There are other files in the `backend-server` folder just like these 2 examples
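To illustrate the two special files described above, here is a hedged sketch of what this module's variables.tf and outputs.tf could contain. The variable names match the ec2.tf example; the types, descriptions, and default are assumptions:

```hcl
// 'variables.tf' (sketch)
variable "ec2_ami" {
  description = "AMI ID for the backend instance"
  type        = string
}

variable "ec2_instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro" # assumed default
}

variable "ec2_keyname" {
  description = "Name of the SSH key pair used to access the instance"
  type        = string
}

variable "ec2_tags" {
  description = "Tags applied to the instance"
  type        = map(string)
  default     = {}
}

// 'outputs.tf' (sketch)
output "public_id" {
  description = "Public IP of the backend instance"
  value       = aws_instance.api_server.public_ip
}

output "private_id" {
  description = "Private IP of the backend instance"
  value       = aws_instance.api_server.private_ip
}
```

Anything the module exposes through outputs.tf can be consumed by other modules, which is exactly how the database module will later restrict access to the backend's private IP.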
For the other modules, we will create their configuration, outputs, and variables files, so that in the end we have a file structure like this:
// Folder 'modules'
.
├── backend-server
│   ├── ec2.tf
│   ├── eip.tf
│   ├── outputs.tf
│   ├── security_group.tf
│   ├── security_group_rule.tf
│   ├── variables.tf
│   └── vpc.tf
├── cdn
│   ├── cloudfront.tf
│   ├── outputs.tf
│   └── variables.tf
├── database
│   ├── outputs.tf
│   ├── rds.tf
│   └── variables.tf
└── frontend-static-html
    ├── outputs.tf
    ├── s3.tf
    └── variables.tf
Now we have our modules. How do we use them?
Step 4: Create the environments using predefined modules
Let’s create the staging environment. Since Terraform allows us to reuse modules, we simply call each one, assign variables, and that’s it. The hardest part, writing the module configurations for the AWS resources, is over. Now we can invoke our code, or rather, apply the work plan so that Terraform takes care of using the appropriate code to deploy the necessary services and resources.
// 'envs/staging/main.tf'
provider "aws" {
  region = var.aws_region
}

module "backend_server" {
  source              = "../../modules/backend-server"
  ec2_ami             = var.ec2_ami
  ec2_instance_type   = var.ec2_instance_type
  ec2_keyname         = var.ec2_keyname
  ec2_tags            = var.ec2_tags
  sg_name             = var.sg_name
  sg_description      = var.sg_description
  sg_rule_description = var.sg_rule_description
  sg_rule_from_port   = var.sg_rule_from_port
  sg_rule_to_port     = var.sg_rule_to_port
  sg_rule_source_id   = module.cdn.sg_id # This associates CloudFront with the server if necessary
}

module "database" {
  source               = "../../modules/database"
  db_name              = var.db_name
  db_username          = var.db_username
  db_password          = var.db_password
  db_engine            = var.db_engine
  db_engine_version    = var.db_engine_version
  db_instance_class    = var.db_instance_class
  db_allocated_storage = var.db_allocated_storage
  db_port              = var.db_port
  subnet_ids           = var.db_subnet_ids
  allowed_cidr_blocks  = ["${module.backend_server.private_id}/32"] # Only accessible from the backend
}

module "frontend_static_html" {
  source      = "../../modules/frontend-static-html"
  bucket_name = var.bucket_name
}

module "cdn" {
  source                = "../../modules/cdn"
  s3_bucket_id          = module.frontend_static_html.s3_bucket_name
  s3_bucket_arn         = module.frontend_static_html.s3_bucket_arn
  s3_bucket_domain_name = module.frontend_static_html.static_site_url
}

output "backend_server_public_ip" {
  value = module.backend_server.public_id
}

output "backend_server_private_ip" {
  value = module.backend_server.private_id
}

output "database_endpoint" {
  value = module.database.rds_endpoint
}

output "frontend_bucket_name" {
  value = module.frontend_static_html.s3_bucket_name
}

output "cloudfront_url" {
  value = module.cdn.cloudfront_domain_name
}
As we can see, we simply name each module and assign each variable to it. With this folder logic, we have clearly defined which is our staging environment and which is our production environment.
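The variables referenced above still need per-environment values. A hedged example of what an envs/staging/terraform.tfvars file could look like; every value here is illustrative, not taken from a real deployment:

```hcl
// 'envs/staging/terraform.tfvars' (fragment, illustrative values)
aws_region        = "us-west-2"
ec2_ami           = "ami-0123456789abcdef0" # assumed AMI ID
ec2_instance_type = "t3.micro"
ec2_keyname       = "staging-keypair"
ec2_tags = {
  Env  = "Staging"
  Name = "Chatbot API Server"
}

db_name     = "chatbot"
db_username = "chatbot_admin"
db_engine   = "postgres"
# db_password is best supplied via the TF_VAR_db_password
# environment variable rather than committed in this file

bucket_name = "staging-frontend-static-site"
```

Keeping the values in a tfvars file (and secrets in TF_VAR_ environment variables) is what lets the same module code serve both staging and production.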
Step 5: Work with plan and apply changes with Terraform
First, we start with the terraform plan command, which is used to preview the changes Terraform will apply to the infrastructure before executing them. This command analyzes the infrastructure code and compares the current state of resources in AWS with the configuration defined in the Terraform files. As a result, it displays a summary of the actions that will be performed, such as creating, modifying, or deleting resources. It’s good practice to run terraform plan before applying any changes to avoid errors and ensure the changes are as expected.
On the other hand, the terraform apply command is responsible for executing changes to the infrastructure. Based on the generated plan, Terraform proceeds to create, update, or delete the necessary resources in AWS. This command requests confirmation before proceeding, unless the -auto-approve parameter is used, which allows changes to be executed without manual confirmation. It is recommended to use terraform apply only after reviewing the output of terraform plan to avoid errors or unintended configurations in the infrastructure.
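In practice, the workflow from the staging directory looks like this (terraform init must run once before plan, to download the provider plugins and configure the S3 backend):

```shell
cd envs/staging

# Download the AWS provider plugin and configure the S3 backend
terraform init

# Preview the changes without touching the infrastructure
terraform plan

# Apply the changes; Terraform asks for confirmation
# unless -auto-approve is passed
terraform apply
```

If you change the provider versions or the backend configuration later, rerun terraform init before planning again.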
Now, we use terraform apply to see all the changes that will occur in our cloud. What happens internally is:
- Backend server (EC2) module:
  - An EC2 instance is created with the configured AMI and instance type.
  - Security groups are associated with the backend to restrict access.
  - A public IP address and a private IP address are assigned to allow access and internal communication.
  - Result: backend server ready to process requests.
- Database (RDS):
  - An RDS instance is created in a private VPC.
  - The database engine (MySQL or PostgreSQL) is configured.
  - It only allows connections from the private IP of the backend server.
  - Result: secure database accessible only by the backend.
- Frontend (S3):
  - An S3 bucket is created with web hosting enabled.
  - A policy is applied to allow public reading of HTML/CSS/JS files.
  - The home document (index.html) is defined.
  - Result: static frontend hosted on S3.
- CDN (CloudFront):
  - A CloudFront distribution is created with the S3 bucket as the origin.
  - The cache is optimized to improve loading speed.
  - A global URL is generated in CloudFront.
  - Result: distributed website with lower latency anywhere in the world.
After applying the changes, Terraform will show us key information:
backend_server_public_ip  = "18.220.225.34"
backend_server_private_ip = "10.0.1.25"
database_endpoint         = "staging-db.abcdefg.us-west-2.rds.amazonaws.com"
frontend_bucket_name      = "staging-frontend-static-site"
cloudfront_url            = "https://d3example.cloudfront.net"
After this, we have completed our IaC. If we want to deploy another environment, we simply reuse the code, define new variable names, and deploy. It’s great!
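For example, a production environment can reuse exactly the same modules with different values. A hedged sketch of a fragment of envs/production/main.tf, where the instance size and bucket name are assumptions:

```hcl
// 'envs/production/main.tf' (fragment, illustrative)
module "backend_server" {
  source            = "../../modules/backend-server"
  ec2_ami           = var.ec2_ami
  ec2_instance_type = "t3.large" # assumed larger instance for production
  ec2_keyname       = var.ec2_keyname
  ec2_tags = {
    Env  = "Production"
    Name = "Chatbot API Server"
  }
  # remaining variables as in the staging environment
}

module "frontend_static_html" {
  source      = "../../modules/frontend-static-html"
  bucket_name = "production-frontend-static-site" # bucket names must be globally unique
}
```

Remember to give production its own backend "s3" key (or bucket) in provider.tf so the two environments never share a state file.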
Drawing the line between developers and DevOps in IaC projects
Previously, we talked about the reasons why using IaC is good and healthy for a project: it helps us save time and be more efficient when delivering multiple solutions to various clients. However, whose time are we saving? Who is becoming more efficient? This relates to the role of DevOps.
Developers should not concern themselves with deployments or the specific servers where their code will run. Their focus should be on making the code operational, optimizing resource usage, and ensuring that the business logic meets all requirements. Usually, in a project with a team, the DevOps (Development and Operations) role or the SRE (Site Reliability Engineer) role are the ones responsible for deploying the code to the servers where it will be executed for each client.
“The role of a DevOps engineer will vary from one organization to another, but invariably entails some combination of release engineering, infrastructure provisioning and management, system administration, security, and DevOps advocacy.”
According to this description from the Atlassian website, DevOps is responsible for delivering solutions to clients and deploying them to servers. Normally, they focus on automating the deployment process, and since Terraform manages this aspect, the DevOps team is also in charge of maintaining the repository and source code for the infrastructure.

Can we skip learning AWS or Azure? Thanks, Terraform!
No! We definitely need to continue learning, practicing, and following the latest trends and technologies of our preferred cloud services. What Terraform does is keep your code abstracted from any single cloud. This means that, for example, we don’t need to know exactly how to use AWS CloudFormation or Azure Resource Manager. With Terraform, you can manage resources across different clouds and on-premises infrastructure using a single codebase.
It’s still really important for the DevOps team working on the project to understand the platform where the services will be deployed and reside. Terraform uses the same APIs that the services provide, with each platform’s plugin adjusted accordingly. For example, for Azure, Microsoft provides the plugin for Terraform, including the documentation needed to facilitate its use.
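For instance, targeting Azure instead of AWS is mainly a matter of declaring a different provider. A minimal sketch using the azurerm plugin (the version constraint is an assumption; the empty features block is required by the provider):

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0" # assumed major version
    }
  }
}

provider "azurerm" {
  # The features block is mandatory, even when left empty
  features {}
}
```

The resources themselves (azurerm_linux_virtual_machine, azurerm_storage_account, and so on) would still have to be written specifically for Azure, which leads to the next question.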
Adding multiple providers: Is Terraform really agnostic?
The answer is not so straightforward. Many people refer to Terraform (or similar technologies like Ansible) as “agnostic,” implying that simply switching providers will do the trick and everything will be deployed in another cloud. As this guide shows, that is not entirely true: we still need to add resources specific to the new cloud provider and configure them for our project. What Terraform does give us is a single place to track deployments, and it allows a mix of providers across our environments.
Let’s put this into practice. Suppose the client wants to use Supabase. Supabase is an open-source Firebase alternative that offers a platform with a PostgreSQL database, authentication, real-time subscriptions, storage, and other tools for building web and mobile applications. It has a free tier and additional pricing tiers that can meet a variety of needs.
If the client wants to migrate the AWS RDS database to Supabase, what do we do? The first step is planning the migration. After that, we will probably need to create the resource in Terraform (skipping the migrations, permissions, schedule, and other tasks). So let’s create our provider, resource, and environment.
// 'envs/staging/provider.tf'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    supabase = {
      source  = "supabase/supabase"
      version = "~> 1.0"
    }
  }

  backend "s3" {
    bucket = "tutorial-terraform-aws-staging"
    key    = "terraform.tfstate"
    region = "us-west-2"
  }
}

# Configure the AWS provider
provider "aws" {
  region  = "us-west-2"
  profile = "tutorial-terraform-aws"
}

# Configure the Supabase provider
provider "supabase" {
  access_token = file("${path.module}/access-token")
}
We have attached the new provider, which ensures that our project and everyone involved is aware that we now have several cloud services available. Let’s review our previous database and incorporate the new one into our Infrastructure as Code (IaC).
// 'modules/database-supabase/database_supabase.tf'
resource "supabase_project" "supabase_db" {
  organization_id   = var.organization_id
  name              = var.project_name
  database_password = var.database_password
  region            = "ap-southeast-1"
  instance_size     = var.instance_size

  lifecycle {
    ignore_changes = [database_password]
  }
}

# Configure API settings for the linked project
resource "supabase_settings" "supabase_db" {
  project_ref = supabase_project.supabase_db.id
  api = jsonencode({
    db_schema            = "public,storage,graphql_public"
    db_extra_search_path = "public,extensions"
    max_rows             = 1000
  })
}
// 'envs/staging/main.tf'
…
# module "database" {
#   source               = "../../modules/database"
#   db_name              = var.db_name
#   db_username          = var.db_username
#   db_password          = var.db_password
#   db_engine            = var.db_engine
#   db_engine_version    = var.db_engine_version
#   db_instance_class    = var.db_instance_class
#   db_allocated_storage = var.db_allocated_storage
#   db_port              = var.db_port
#   subnet_ids           = var.db_subnet_ids
#   allowed_cidr_blocks  = ["${module.backend_server.private_id}/32"] # Only accessible from the backend server
# }

module "database" {
  source            = "../../modules/database-supabase"
  organization_id   = var.organization_id
  project_name      = var.project_name
  database_password = var.database_password
  instance_size     = var.instance_size
}
…
That’s basically it! We have just created the module that uses our staging environment, and we are calling it from within our environment. When we apply the changes, a new project will be created, with its name and the associated database. In our Supabase account, we can retrieve the necessary connection information to later integrate it into our CI/CD pipeline. Additionally, we can take advantage of other resources provided by the plugin or use an HTTP plugin to fetch the data through the API.

Accelerate software development with Terraform
Terraform accelerates development by automating infrastructure provisioning through IaC. This eliminates the need for manual configurations and ensures that environments remain consistent and easily reproducible across development, testing, and production stages.
By using modular configurations, our teams can quickly deploy standardized resources. Additionally, Terraform allows organizations to integrate services from multiple providers within a single tool, providing flexibility and avoiding vendor lock-in.
Furthermore, Terraform adds collaboration and version control, enabling multiple developers to manage infrastructure changes through repositories like Git. By incorporating Terraform, teams can significantly minimize setup time, reduce errors, and focus more on development rather than infrastructure management.
Bringing it all together: automating cloud infrastructure
Using Terraform to manage cloud infrastructure empowers teams to automate deployments, maintain consistency, and reduce manual configuration errors. By structuring the infrastructure into reusable modules, as demonstrated in this project, it becomes easier to manage, scale, and adapt to different environments such as staging and production.
Integrating multiple cloud providers, such as AWS and Supabase, showcases Terraform’s flexibility in handling diverse infrastructure requirements within a single framework. Organizations can choose the best tools for their needs while keeping deployments standardized and efficient.
Following best practices, such as maintaining environment-specific configurations, using version control for infrastructure code, and leveraging automation, ensures that the infrastructure remains reliable and secure. Terraform offers a robust solution for infrastructure as code, simplifying cloud management and allowing teams to focus on development and innovation.
At Patagonian, we’re always exploring new ways to improve our workflows and stay aligned with industry trends. If you have a project that could benefit from Terraform or cloud infrastructure, feel free to get in touch to discuss your needs.
You can find a link to the repository for this project here.