Getting Started With Terraform Runner
To use Terraform Runner, you will need the following:
- A Docker image with your Terraform module
- A Kubernetes Runtime from which to run the Terraform binary
1. Build a Terraform Docker Image
Terraform Runner works by running a Docker image containing your Terraform module in a Kubernetes cluster (and namespace) of your choice. The Docker image must meet the following requirements:
- Have the Terraform binary installed.
- Have Prodvana's pvn-wrapper utility installed.
  - This utility is used to support features like storing plan files between plan and apply operations.
- Have a shell installed (/bin/sh and /bin/bash both work great).
- Contain your Terraform module as well as any other dependencies you need to run it.
  - For example, if you are using Terraform to configure GCP resources, your Docker image must have the gcloud CLI installed at a version compatible with the Terraform module you have written.
Here is a simple example of a Dockerfile that meets these requirements:
FROM us-docker.pkg.dev/pvn-infra/pvn-public/pvn-wrapper:latest as pvn-wrapper
FROM hashicorp/terraform:1.5
# copy in the pvn-wrapper binary
COPY --from=pvn-wrapper /pvn-wrapper /bin/pvn-wrapper
# copy your module code into the image
COPY . /terraform
Build and push the Docker image to a registry you have linked to Prodvana. We recommend building one image per commit or release, just like you would with Kubernetes services.
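For example, tagging by commit might look like the following. The registry path us-docker.pkg.dev/my-project/my-repository and the tag scheme are placeholders; substitute the registry you linked to Prodvana.
docker build -t us-docker.pkg.dev/my-project/my-repository/terraform:$(git rev-parse --short HEAD) .
docker push us-docker.pkg.dev/my-project/my-repository/terraform:$(git rev-parse --short HEAD)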
2. Prepare a Kubernetes Runtime
Create a Kubernetes Runtime using your cloud provider of choice, as well as a namespace you want Prodvana to run Terraform commands in.
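For example, the namespace can be created with kubectl. The name terraform-runner below is only an example (it matches the namespace used in the secret example later in this guide):
kubectl create namespace terraform-runner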
Ensure that your cluster and namespace have the permissions necessary to run your Terraform modules. Any combination of the following methods is supported:
- Make sure that the cluster's default service account has permission to manage cloud resources on your cloud provider.
- Store any prerequisite credentials in one or more Kubernetes secrets.
Your credentials never leave your environment.
Link your Kubernetes Runtime to Prodvana. See Configuring a Runtime.
3. Create a Terraform Runner
Create a Terraform Runner Runtime by defining the following config file:
runtime:
  name: my-terraform-runner # replace the name as you wish
  terraformRunner:
    proxyRuntime:
      runtime: my-kubernetes-runtime # replace with the name of your runtime
      containerOrchestration:
        k8s:
          namespace: my-namespace # replace with the name of the namespace you pre-created
You may create as many Terraform Runner Runtimes as is appropriate for your use case. For example, you may create one Runner for staging and one for production, each pointed at a different cluster and/or namespace with different permissions for the Terraform binary.
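As a sketch, a second Runner for production might look like this; every name below is a placeholder to replace with your own:
runtime:
  name: my-terraform-runner-production # placeholder name
  terraformRunner:
    proxyRuntime:
      runtime: my-production-kubernetes-runtime # placeholder runtime name
      containerOrchestration:
        k8s:
          namespace: my-production-namespace # placeholder namespace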
Passing Credentials
If the default service account in your namespace already has the correct permissions, skip this step.
To pass credentials stored in a Kubernetes secret to Terraform jobs, do one or more of the following:
Use a Dedicated Service Account
If you have a specific Service Account that already has permissions to run Terraform, you can configure Terraform Runner to use it.
runtime:
  name: my-terraform-runner # replace the name as you wish
  terraformRunner:
    proxyRuntime: ... # defined previously
    serviceAccount: my-service-account
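The Service Account itself is a standard Kubernetes ServiceAccount in the Runner's namespace. As a minimal sketch, assuming the terraform-runner namespace from earlier:
kubectl -n terraform-runner create serviceaccount my-service-account
Granting that ServiceAccount permissions on your cloud provider is provider-specific (for example, GKE Workload Identity or EKS IAM Roles for Service Accounts).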
Pass Secret as Environment Variable
Assuming you created your Kubernetes secret with a command like:
cat <<EOF | kubectl -n terraform-runner apply -f /dev/stdin
apiVersion: v1
kind: Secret
metadata:
  name: vol-secret
stringData:
  test_file: a secret
EOF
Reference the secret by key in your Terraform Runner config file.
runtime:
  name: my-terraform-runner # replace the name as you wish
  terraformRunner:
    proxyRuntime: ... # defined previously
    env:
      MY_SECRET:
        kubernetesSecret:
          secretName: vol-secret
          key: test_file
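If the secret value should feed a Terraform input variable, one option is Terraform's TF_VAR_ convention: name the environment variable TF_VAR_<variable name> and Terraform will pick it up automatically. The variable name my_secret below is illustrative and assumes your module declares a matching variable.
runtime:
  name: my-terraform-runner
  terraformRunner:
    proxyRuntime: ... # defined previously
    env:
      TF_VAR_my_secret: # Terraform reads TF_VAR_* environment variables as input variables
        kubernetesSecret:
          secretName: vol-secret
          key: test_file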
Mount Secret as a Volume
runtime:
  name: my-terraform-runner # replace the name as you wish
  terraformRunner:
    proxyRuntime: ... # defined previously
    volumes:
      - name: test-vol
        source:
          secret:
            secretName: vol-secret
        mount:
          mountPath: /testmount
Running Commands Before Terraform Plan/Apply
Depending on your modules, you may need to run commands before terraform plan / terraform apply to set up credentials properly. This can be accomplished with preRun.
runtime:
  name: my-terraform-runner
  terraformRunner:
    proxyRuntime: ... # defined previously
    preRun:
      - cmd: gcloud auth login ...
preRun commands have access to all the environment variables and mounts you defined.
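For example, here is a sketch of a non-interactive GCP login that combines the volume mount from the previous section with preRun. It assumes the mounted secret actually contains a GCP service account key file.
runtime:
  name: my-terraform-runner
  terraformRunner:
    proxyRuntime: ... # defined previously
    volumes:
      - name: test-vol
        source:
          secret:
            secretName: vol-secret # assumes this secret holds a GCP service account key
        mount:
          mountPath: /testmount
    preRun:
      # non-interactive login using the key file mounted above
      - cmd: gcloud auth activate-service-account --key-file /testmount/test_file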
4. Create an Application and Services
Create an Application and Services for your Terraform modules. An Application should map to a set of environments, while a Service should contain the logically equivalent modules that run in each environment.
application:
  name: infra
  releaseChannels:
    - name: staging
      runtimes:
        - runtime: my-terraform-runner
          type: EXTENSION
    - name: production
      runtimes:
        - runtime: my-terraform-runner # can use a different runner here if needed
          type: EXTENSION
      preconditions:
        - releaseChannelStable:
            releaseChannel: staging
        - manualApproval: {}
service:
  name: terraform
  application: infra
  terraform:
    image: "{{.Params.Image}}"
    path: "/terraform/{{.Builtins.ReleaseChannel.Name}}"
  parameters:
    - name: image
      dockerImage:
        defaultTag: my-tag
        imageRegistryInfo:
          containerRegistry: my-registry-name
          imageRepository: my-repository
In the above example, the Terraform Docker image is my-repository:my-tag in the registry linked as my-registry-name, with two modules inside it: /terraform/staging and /terraform/production.
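With the example Dockerfile above (COPY . /terraform), that corresponds to a build context laid out roughly like this; the .tf file names are illustrative:
.
├── Dockerfile
├── staging/        # becomes /terraform/staging in the image
│   └── main.tf
└── production/     # becomes /terraform/production in the image
    └── main.tf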
For more information about how to configure Applications, see Configuring Applications.
For more information about how to configure Services, see Configuring Services.
For the list of parameters that Terraform Runner supports from your Service config file, see Terraform Runner Parameters.