Integration testing with Terraform 0.15

It seems we are getting closer and closer to Terraform 1.0, with v0.15 entering its final stages before general availability (GA). HashiCorp has just released 0.15.0-beta2, which marks the second pre-release testing period for v0.15. There will also be a release candidate before the final v0.15.0.

HashiCorp Terraform 0.15

It probably goes without saying that beta releases are not recommended for use in a production environment. Nevertheless, I decided to give it a go, check what is coming up in the next ‘minor’ release, and get some early exposure. (Terraform is like Kubernetes with its minor releases - you get quite a few features and sometimes important deprecations too, so they feel more like major releases).

A quick look at the changelog suggests that Terraform 0.15 is largely a housekeeping release to prepare for the long-awaited Terraform 1.0. A number of deprecation cycles have also been completed, with deprecated behaviours either removed or now producing error messages.

Highlights from the 0.15.0-beta2 release

  • New sensitive and nonsensitive functions allow module authors to explicitly override Terraform’s default inference of value sensitivity for situations where it is too conservative or not conservative enough.
  • New -lockfile=readonly flag, which suppresses writing changes to the dependency lock file.
  • Redaction of provider-defined sensitive attributes is no longer experimental and is now the default behaviour.
  • Improved virtual terminal and UTF-8 support across all platforms, including Windows.
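To illustrate the first highlight, here is a minimal sketch of the new functions in use. The variable and output names are hypothetical, but the pattern shows both directions of the override:

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

output "connection_string" {
  # Explicitly mark the whole derived string as sensitive;
  # outputs holding sensitive values must also declare it.
  value     = sensitive("postgres://app:${var.db_password}@db.example.com")
  sensitive = true
}

output "password_length" {
  # length() of a sensitive value is inferred sensitive too;
  # nonsensitive() overrides that, since the length alone is not secret.
  value = nonsensitive(length(var.db_password))
}
```

Note that nonsensitive will return an error if you apply it to a value that Terraform does not actually consider sensitive, which guards against accidental no-op use.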

Module testing

Besides the changes already mentioned, there is another new experimental feature that landed in the 0.15 CLI and got me intrigued: Terraform module testing, which I wanted to look at in more detail! If you come from a coding background, adding tests to your code feels natural; but for people with an infrastructure background this could feel new, as there was no “native” testing available in Terraform until now.

The current extension to Terraform for this experimental feature consists of the following parts:

  • A temporary experimental provider, terraform.io/builtin/test, which acts as a placeholder for potential new language features related to test assertions.
  • A new terraform test command for more conveniently running multiple tests in a single action.
  • An experimental convention of placing test configurations in subfolders of a tests directory within your module, which terraform test will then discover and run.

Writing Tests for a Module

The current implementation arranges module tests into test suites, each of which is a root Terraform module that includes a module block calling the module under test, ideally along with a number of test assertions verifying that the module outputs match expectations.

To get started, create a subfolder called tests/ in the same directory where you keep your module’s .tf and/or .tf.json source files. In that directory, make another directory to serve as your first test suite, with a name that concisely describes what the suite aims to test.

So an example of a typical directory structure with the addition of a test suite called defaults would look like:

main.tf
outputs.tf
providers.tf
variables.tf
versions.tf
tests/
  defaults/
    test_defaults.tf

The tests/defaults/test_defaults.tf file contains a call to the main module with a suitable set of arguments, along with one or more resources that serve as the temporary syntax for defining test assertions. Let’s have a look at an example where we deploy an S3 bucket and want to test that our code correctly created one:

terraform {
  required_providers {
    # This provider is only available when running tests,
    # so you shouldn't use it in non-test modules.
    test = {
      source = "terraform.io/builtin/test"
    }
  }
}

provider "aws" {
  region = module.main.region
}

locals {
  bucket_name = format("mb-%s", module.main.a_pet)
}

module "main" {
  source = "../.."
}

resource "test_assertions" "s3" {
  # "component" is a unique identifier for this
  # particular set of assertions in the test results.
  component = "bucket"

  equal "name" {
    description = "Check bucket name"
    got         = local.bucket_name
    want        = module.main.bucket
  }

  check "name_prefix" {
    description = "Check for prefix"
    condition   = can(regex("^mb-", local.bucket_name))
  }
}

# We can also use data resources to respond to the
# behavior of the real remote system, rather than
# just to values within the Terraform configuration.
data "aws_s3_bucket" "s3_response" {
  bucket = module.main.bucket

  depends_on = [test_assertions.s3]
}

resource "test_assertions" "s3_response" {
  component = "bucket_response"

  check "valid_name" {
    description = "Resource has a valid name"
    condition   = data.aws_s3_bucket.s3_response.id == local.bucket_name
  }
}
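For completeness, the module under test must expose the outputs the suite references (region, a_pet and bucket). A minimal sketch of such a module - my own reconstruction, assuming the hashicorp/aws and hashicorp/random providers, not the exact module behind this article - might look like:

```hcl
# main.tf - module under test
data "aws_region" "current" {}

resource "random_pet" "suffix" {}

resource "aws_s3_bucket" "main" {
  bucket = format("mb-%s", random_pet.suffix.id)
}

# outputs.tf - outputs referenced by tests/defaults/test_defaults.tf
output "region" {
  value = data.aws_region.current.name
}

output "a_pet" {
  value = random_pet.suffix.id
}

output "bucket" {
  value = aws_s3_bucket.main.bucket
}
```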

You can also create additional directories alongside the defaults/ directory to define additional test suites that pass different variable values into the main module, and then include assertions that verify that the result has changed in the expected way.

Running Your Tests

To check all our test suites we can simply run terraform test, which prints the results of each test (failures or success).

The current experimental command expects to be run from your main module directory, not from the test suite directory containing test_defaults.tf.

Because these test suites are integration tests rather than unit tests, you will need to set up any credentials files or environment variables needed by the providers your module uses before running terraform test. When run, the test command will, for each suite:

  • Install the providers and any external modules the test configuration depends on.
  • Create an execution plan to create the objects declared in the module.
  • Apply that execution plan to create the objects in the real remote system.
  • Collect all of the test results from the apply step, which would also have “created” the test_assertions resources.
  • Destroy all of the objects recorded in the temporary test state, as if running terraform destroy against the test configuration.

Example output:

$ terraform test
─── Failed: defaults.bucket.name (Check bucket name) ───────────────────
wrong value
    got:  "1mb-quiet-midge"
    want: "mb-quiet-midge"

In this case the module returned an incorrect bucket name value and so the defaults.bucket.name assertion failed.

The test_assertions resource captures any assertion failures but does not return an error, so a failed assertion will not halt the run. However, if Terraform encounters any errors while processing the test configuration, it will halt processing, which may cause some test assertions to be skipped or resources to be left undestroyed.

Also be mindful that these integration tests might incur additional costs if they interact with providers that spin up or use cloud infrastructure.

Known Limitations

The initial experiment may seem a bit rough around the edges, with some bits not completely ironed out (for example, module provider settings are not picked up from variables), but it does demonstrate the broad strokes of what testing Terraform might look like in the future. Some current limitations are:

  • We can only test create and destroy behaviours and not subsequent updates to an existing deployment of a module.
  • For a module that includes variable validation rules and data resources that function as assertion checks, the current experimental feature doesn’t have any way to test those or report a failure.
  • As this prototype is using a provider as an approximation for new assertion syntax, the terraform test command is limited in how much context it is able to gather about failures.
  • There’s no unit-level testing (and no place to use mocks) - right now only integration testing is available, so resources take time to create and destroy, and you have to pay for them.

Other breaking changes

There are also a few other breaking changes coming with the v0.15 release that I would like to mention before wrapping up:

  • You can now define provider aliases using the configuration_aliases argument within the required_providers block, and empty provider configuration blocks should be removed. This replaces the need for an empty “proxy configuration block” as a placeholder. To declare configuration aliases, add the desired names to the configuration_aliases argument in the provider requirements:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
      configuration_aliases = [ aws.alternate ]
    }
  }
}
  • Warnings will now be emitted where empty configuration blocks are present but no longer required; such blocks will continue to work unchanged in the 0.15 release. There are a few cases where existing configurations may now return new errors:
    • The providers map in a module call cannot override a provider configured within the module. This is not a supported configuration, but was previously missed in validation and now returns an error.
    • A provider alias within a module that has no configuration requires a provider configuration to be supplied in the module’s providers map.
    • All entries in the providers map in a module call must correspond to a provider name within the module. Passing in a configuration to an undeclared provider is now also an error.
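To make the providers map cases concrete, here is a hypothetical calling configuration that passes an aliased provider into a child module declaring aws.alternate (the module path and regions are illustrative):

```hcl
provider "aws" {
  region = "eu-west-1"
}

provider "aws" {
  alias  = "us"
  region = "us-east-1"
}

module "replication" {
  # Hypothetical child module that declares
  # configuration_aliases = [ aws.alternate ]
  source = "./modules/replication"

  providers = {
    aws           = aws
    aws.alternate = aws.us
  }
}
```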
  • Terraform v0.14 introduced a new global option -chdir which you can use before the subcommand name, causing Terraform to run the subcommand as if the given directory had been the current working directory. Terraform CLI v0.15 no longer accepts configuration directories on any command except terraform fmt. (terraform fmt is special compared to the others as it primarily deals with individual source code files, rather than modules or configurations as a whole.)
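As a before-and-after illustration of that last change (the directory name here is hypothetical):

```shell
# Terraform 0.14 and earlier - passing a directory to the
# subcommand; no longer accepted in 0.15:
terraform plan environments/production

# Terraform 0.15 equivalent, using the global -chdir option:
terraform -chdir=environments/production plan
```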

How to get started

So where to go next, and how to get started with Terraform 0.15? If you want to give it a spin before GA, you can already download and install the appropriate binary from releases.hashicorp.com! If you are using Terraform Cloud (TFC), beta releases can also be enabled by emailing support@hashicorp.com and requesting access.

There is also a draft upgrade guide with some initial details. In order to get your code prepared to run v0.15, you need to follow the upgrade steps for v0.14, which will make it compatible with v0.15. You can read more about it in my previous Terraform 0.14 article.

If you would like to leave feedback on the upcoming release, please use the dedicated thread on the community discussion forum, or report bugs via GitHub.




This blog is written exclusively by The Scale Factory team. We do not accept external contributions.
