
Terraform 0.11 to 1.11 Migration - The Full Journey

Tags: Terraform, AWS


Last year I helped a client migrate their Terraform codebase from 0.11 all the way to 1.11. Their infrastructure had been running on 0.11 for years - nobody wanted to touch it because “it works, don’t break it.” Sound familiar?

This post documents the entire journey: the syntax changes, the resource splits, the state surgery, and most importantly - how to verify nothing breaks at each step.

The Golden Rule

Before we dive in, here’s the rule that guided every step of this migration:

After each upgrade, terraform plan must show no changes.

If plan shows changes, you’ve broken something. Stop, fix it, then continue. This is non-negotiable.
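The rule is easy to enforce mechanically: `terraform plan -detailed-exitcode` exits 0 when the plan is empty, 1 on error, and 2 when changes are pending. A minimal sketch of a CI gate built on that (function name is mine):

```bash
# Fails unless the plan is empty.
# -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending
check_no_changes() {
  terraform plan -detailed-exitcode -input=false >/dev/null 2>&1
  case $? in
    0) echo "OK: no changes" ;;
    2) echo "FAIL: plan has pending changes" >&2; return 1 ;;
    *) echo "FAIL: plan errored" >&2; return 1 ;;
  esac
}
```

Run it after every single version bump; the non-zero exit stops the pipeline before a bad upgrade propagates.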

The Upgrade Path

You can’t jump directly from 0.11 to 1.11. Terraform versions have breaking changes that require stepping stones:

0.11 → 0.12 → 0.13 → 0.14 → 0.15 → 1.0 → 1.1+ → 1.11

Each jump has its own gotchas. Here’s what we hit at each stage.

Code Repository: All code from this post is available at github.com/moabukar/blog-code/terraform-migration


Phase 1: 0.11 to 0.12 - The Big Syntax Change

This is the hardest upgrade. Terraform 0.12 introduced HCL2, which changed almost everything about how you write Terraform.

Before: 0.11 Syntax

# 0.11 - String interpolation everywhere
resource "aws_instance" "web" {
  ami           = "${var.ami_id}"
  instance_type = "${var.instance_type}"
  
  tags {
    Name = "${var.environment}-web-${count.index}"
  }
}

# 0.11 - Conditional with empty string hack
resource "aws_eip" "web" {
  count    = "${var.create_eip ? 1 : 0}"
  instance = "${aws_instance.web.id}"
}

# 0.11 - Element function for list access
output "first_subnet" {
  value = "${element(var.subnet_ids, 0)}"
}

After: 0.12 Syntax

# 0.12 - No interpolation needed for simple references
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type
  
  tags = {
    Name = "${var.environment}-web-${count.index}"
  }
}

# 0.12 - Proper boolean conditionals
resource "aws_eip" "web" {
  count    = var.create_eip ? 1 : 0
  instance = aws_instance.web.id
}

# 0.12 - Native list indexing
output "first_subnet" {
  value = var.subnet_ids[0]
}

The 0.12upgrade Tool

Terraform 0.12 shipped with a built-in upgrade tool:

# First, make sure you're on the latest 0.11
terraform-0.11 init
terraform-0.11 plan  # Should show no changes

# Run the upgrade tool
terraform-0.12 0.12upgrade

# Review the changes
git diff

# Test the upgrade
terraform-0.12 init
terraform-0.12 plan  # MUST show no changes

What the Tool Doesn’t Fix

The upgrade tool handles most syntax changes, but it can’t fix everything:

1. Quoted Type Constraints

# 0.11
variable "instance_count" {
  type = "string"  # Quotes around type
}

# 0.12
variable "instance_count" {
  type = string  # No quotes - tool usually fixes this
}

2. Computed Maps in Resources

# 0.11 - This worked
resource "aws_instance" "web" {
  tags = "${merge(var.common_tags, map("Name", "web"))}"
}

# 0.12 - Need to update
resource "aws_instance" "web" {
  tags = merge(var.common_tags, { Name = "web" })
}

3. Count on Modules

# 0.11 - count on modules didn't exist
# If you hacked it with null_resource, you need to refactor

# 0.12 - Still no count on modules (that comes in 0.13)

Verification

After the upgrade tool runs:

terraform init
terraform plan -out=plan.out

# The output MUST say:
# No changes. Infrastructure is up-to-date.

If you see any planned changes, stop. Something went wrong. Common issues:

  • State file version incompatibility (run terraform state pull > state.json and check the version)
  • Provider version changes (pin your providers!)
  • Syntax the tool missed
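For the first of those, a tiny helper makes the version check repeatable — it reads the pulled state on stdin and prints both the state format version and the Terraform version that wrote it (a sketch; assumes `python3` on the PATH):

```bash
# Reads a state snapshot on stdin; prints its format version
# and the Terraform version that last wrote it.
state_versions() {
  python3 -c 'import json,sys; s=json.load(sys.stdin); print("format=%s written_by=%s" % (s["version"], s.get("terraform_version", "<0.12")))'
}
# Usage: terraform state pull | state_versions
```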

Phase 2: 0.12 to 0.13 - Provider Requirements

Terraform 0.13 introduced required_providers blocks and count/for_each on modules.

New Required Providers Block

# 0.12 - Provider declared implicitly or with version constraint
provider "aws" {
  version = "~> 3.0"
  region  = "eu-west-1"
}

# 0.13 - Explicit required_providers block
terraform {
  required_version = ">= 0.13"
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

The 0.13upgrade Tool

terraform-0.13 0.13upgrade

# This adds the required_providers block automatically
# Review and test
terraform init
terraform plan  # Must show no changes

Module Count/For_Each

If you had workarounds for conditional modules, now you can do it properly:

# 0.13 - count on modules finally works
module "monitoring" {
  source = "./modules/monitoring"
  count  = var.enable_monitoring ? 1 : 0
}
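One wrinkle: a counted module's outputs become a list, so references need an index. A sketch (the `dashboard_url` output name is illustrative):

```hcl
output "dashboard_url" {
  # module.monitoring is now a list of 0 or 1 instances
  value = var.enable_monitoring ? module.monitoring[0].dashboard_url : null
}
```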

Phase 3: 0.13 to 0.14 - Provider Lock Files

Terraform 0.14 introduced the .terraform.lock.hcl file.

terraform init
# Creates .terraform.lock.hcl

# Commit this file!
git add .terraform.lock.hcl
git commit -m "Add Terraform provider lock file"

The lock file pins exact provider versions and checksums. This prevents “works on my machine” issues.
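One gotcha: by default `terraform init` only records hashes for your own platform. If CI runs Linux while developers are on Macs, record every platform up front with `terraform providers lock` — a sketch, with an illustrative platform list wrapped in a helper:

```bash
# Add lock entries for every platform that will run terraform init
lock_all_platforms() {
  terraform providers lock \
    -platform=linux_amd64 \
    -platform=darwin_arm64 \
    -platform=darwin_amd64
}
```

Skip this and a teammate on another OS gets checksum errors on their first `init`.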

Sensitive Variables

0.14 also introduced the sensitive argument:

variable "db_password" {
  type      = string
  sensitive = true  # Won't show in plan output
}
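Sensitivity propagates: an output that references `var.db_password` must itself be marked `sensitive`, or plan errors out. A sketch (`var.db_host` is illustrative):

```hcl
output "db_connection_string" {
  value     = "postgres://app:${var.db_password}@${var.db_host}:5432/app"
  sensitive = true  # required - Terraform errors if this is omitted
}
```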

Phase 4: 0.14 to 0.15 - Deprecation Warnings

0.15 removed a lot of deprecated syntax and prepared for 1.0.

Key changes:

  • terraform state mv behavior changed
  • Provider source addresses are now required
  • Deprecated interpolation-only expressions removed

terraform init
terraform plan

# Address any deprecation warnings before moving to 1.0

Phase 5: 0.15 to 1.0 - The Stability Release

Terraform 1.0 was mostly a stability release. If you got through 0.15 cleanly, 1.0 should be painless.

terraform init
terraform plan  # Should show no changes

Phase 6: 1.0 to 1.1+ - The S3 Bucket Split

This is where things get interesting.

Starting in AWS Provider 4.0 (which you’ll likely adopt when moving through Terraform 1.x), the aws_s3_bucket resource was broken up into multiple resources.

The Old Way (AWS Provider 3.x)

resource "aws_s3_bucket" "data" {
  bucket = "my-data-bucket"
  acl    = "private"

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
        kms_master_key_id = aws_kms_key.bucket_key.arn
      }
    }
  }

  lifecycle_rule {
    id      = "archive"
    enabled = true

    transition {
      days          = 90
      storage_class = "GLACIER"
    }

    expiration {
      days = 365
    }
  }

  logging {
    target_bucket = aws_s3_bucket.logs.id
    target_prefix = "data-bucket/"
  }

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "PUT"]
    allowed_origins = ["https://example.com"]
    max_age_seconds = 3000
  }

  website {
    index_document = "index.html"
    error_document = "error.html"
  }

  tags = {
    Environment = "production"
  }
}

One massive resource block with everything crammed in.

The New Way (AWS Provider 4.0+)

resource "aws_s3_bucket" "data" {
  bucket = "my-data-bucket"

  tags = {
    Environment = "production"
  }
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.bucket_key.arn
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    id     = "archive"
    status = "Enabled"

    transition {
      days          = 90
      storage_class = "GLACIER"
    }

    expiration {
      days = 365
    }
  }
}

resource "aws_s3_bucket_logging" "data" {
  bucket = aws_s3_bucket.data.id

  target_bucket = aws_s3_bucket.logs.id
  target_prefix = "data-bucket/"
}

resource "aws_s3_bucket_cors_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "PUT"]
    allowed_origins = ["https://example.com"]
    max_age_seconds = 3000
  }
}

resource "aws_s3_bucket_website_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}

resource "aws_s3_bucket_acl" "data" {
  bucket = aws_s3_bucket.data.id
  acl    = "private"
}

resource "aws_s3_bucket_public_access_block" "data" {
  bucket = aws_s3_bucket.data.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

Yes, one resource became nine. But here’s why this is actually better:

  1. Granular state management - You can import/move individual settings
  2. Cleaner diffs - Changing versioning doesn’t show the entire bucket in the plan
  3. Independent lifecycle - Each setting can be managed separately
  4. Better module composition - Modules can manage specific aspects

The Migration Strategy

Here’s the critical part. You have three options:

Option A: Let Terraform Recreate (DON’T DO THIS IN PROD)

If you just upgrade the provider and update your code, Terraform will want to:

  1. Remove the old inline configuration
  2. Create new standalone resources

This might work for non-critical buckets, but for production data? Absolutely not.

Option B: State Surgery (The Safe Way)

# 1. First, upgrade your code to the new format
# 2. Then import the existing configuration into the new resources

# Import versioning
terraform import aws_s3_bucket_versioning.data my-data-bucket

# Import encryption
terraform import aws_s3_bucket_server_side_encryption_configuration.data my-data-bucket

# Import lifecycle rules
terraform import aws_s3_bucket_lifecycle_configuration.data my-data-bucket

# Import logging
terraform import aws_s3_bucket_logging.data my-data-bucket

# Continue for each resource...

Option C: Use moved Blocks (Terraform 1.1+)

Terraform 1.1 introduced moved blocks. They can’t split the inline S3 config into the new child resources, but they’re ideal for the renames and refactors you’ll do along the way:

# Tell Terraform a resource was renamed - no state surgery needed
moved {
  from = aws_s3_bucket.old_data
  to   = aws_s3_bucket.data
}

# For the child resources (versioning, encryption, etc.), you still need imports
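For instance, if the refactor pulls the bucket into a module (the `storage` module name is illustrative), a moved block carries the state over without any import:

```hcl
moved {
  from = aws_s3_bucket.data
  to   = module.storage.aws_s3_bucket.data
}
```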

The Import Script We Used

For the client, we wrote a script to handle all their S3 buckets:

#!/bin/bash
# s3-migration-import.sh

set -e

BUCKETS=$(terraform state list | grep "aws_s3_bucket\." | grep -v "aws_s3_bucket_")

for bucket_resource in $BUCKETS; do
  bucket_name=$(terraform state show "$bucket_resource" | grep -E '^ *bucket +=' | head -1 | awk -F'"' '{print $2}')
  base_name=$(echo "$bucket_resource" | sed 's/aws_s3_bucket\.//')
  
  echo "Processing: $bucket_name ($base_name)"
  
  # Check if versioning exists
  if aws s3api get-bucket-versioning --bucket "$bucket_name" --query 'Status' --output text | grep -q "Enabled\|Suspended"; then
    echo "  Importing versioning..."
    terraform import "aws_s3_bucket_versioning.${base_name}" "$bucket_name" || true
  fi
  
  # Check if encryption exists
  if aws s3api get-bucket-encryption --bucket "$bucket_name" >/dev/null 2>&1; then
    echo "  Importing encryption..."
    terraform import "aws_s3_bucket_server_side_encryption_configuration.${base_name}" "$bucket_name" || true
  fi
  
  # Check if lifecycle rules exist
  if aws s3api get-bucket-lifecycle-configuration --bucket "$bucket_name" >/dev/null 2>&1; then
    echo "  Importing lifecycle..."
    terraform import "aws_s3_bucket_lifecycle_configuration.${base_name}" "$bucket_name" || true
  fi
  
  # Check if logging exists
  if aws s3api get-bucket-logging --bucket "$bucket_name" --query 'LoggingEnabled' --output text | grep -qv "None"; then
    echo "  Importing logging..."
    terraform import "aws_s3_bucket_logging.${base_name}" "$bucket_name" || true
  fi
  
  # Always import public access block (should exist on all buckets)
  echo "  Importing public access block..."
  terraform import "aws_s3_bucket_public_access_block.${base_name}" "$bucket_name" || true
  
done

echo "Done. Run 'terraform plan' to verify."

Verification After S3 Migration

terraform plan

# You should see:
# No changes. Your infrastructure matches the configuration.

# If you see changes, common issues:
# - Lifecycle rule IDs don't match (AWS auto-generates if not specified)
# - ACL differences (check if bucket-owner-full-control vs private)
# - Public access block settings differ from defaults

Phase 7: 1.1+ to 1.11 - Incremental Updates

After surviving the S3 split, the remaining upgrades are gentler.

Notable Changes by Version

Terraform 1.2:

  • precondition and postcondition blocks
  • replace_triggered_by lifecycle argument

Terraform 1.3:

  • optional() function for object type defaults

variable "config" {
  type = object({
    name     = string
    enabled  = optional(bool, true)  # Default value!
    retries  = optional(number, 3)
  })
}

Terraform 1.4:

  • terraform_data resource (replaces null_resource)

Terraform 1.5:

  • import blocks for config-driven imports
  • check blocks for continuous validation

# 1.5 style import - no more CLI imports!
import {
  to = aws_s3_bucket.data
  id = "my-data-bucket"
}

# Continuous validation - assert against the managed resource
check "bucket_versioning_enabled" {
  assert {
    condition     = aws_s3_bucket_versioning.data.versioning_configuration[0].status == "Enabled"
    error_message = "Bucket versioning must be enabled"
  }
}

Terraform 1.6:

  • terraform test framework

Terraform 1.7:

  • removed blocks for safe resource removal from state

# Instead of terraform state rm, use this
removed {
  from = aws_instance.old_server

  lifecycle {
    destroy = false  # Don't destroy the actual resource
  }
}

Terraform 1.8-1.11:

  • Provider-defined functions
  • Various performance improvements
  • Better error messages

The Final Verification

After reaching 1.11:

terraform init -upgrade
terraform plan

# Must show:
# No changes. Your infrastructure matches the configuration.

# Run a full validate too
terraform validate

Common Issues and Fixes

Issue: State Version Mismatch

Error: state snapshot was created by Terraform v0.14.0, which is newer than current v0.13.0

Fix: You can’t downgrade state. Always move forward.

Issue: Provider Version Conflict

Error: Failed to query available provider packages

Fix: Pin your provider versions before upgrading Terraform:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.75.0"  # Pin before upgrade
    }
  }
}

Issue: Module Source Changed

Error: Module not installed

Fix: Run terraform init -upgrade after each Terraform version upgrade.

Issue: Deprecated Interpolation

Warning: Interpolation-only expressions are deprecated

Fix: Remove unnecessary ${}:

# Bad
name = "${var.name}"

# Good
name = var.name

Issue: S3 Bucket ACL Conflicts

Error: error putting S3 Bucket ACL: AccessControlListNotSupported

Fix: For buckets with ownership controls, you can’t use ACLs:

# If you have this:
resource "aws_s3_bucket_ownership_controls" "data" {
  bucket = aws_s3_bucket.data.id
  rule {
    object_ownership = "BucketOwnerEnforced"
  }
}

# Then you can't have this:
# resource "aws_s3_bucket_acl" "data" { ... }  # REMOVE THIS
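With `BucketOwnerEnforced`, access grants move from ACLs to bucket policies. A sketch of the replacement (the principal account ID is a placeholder):

```hcl
resource "aws_s3_bucket_policy" "data" {
  bucket = aws_s3_bucket.data.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowReadFromTrustedAccount"
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::123456789012:root" }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.data.arn}/*"
    }]
  })
}
```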

The Migration Checklist

Here’s the checklist we used for each environment:

## Pre-Migration
- [ ] Backup state file: `terraform state pull > state-backup-$(date +%Y%m%d).json`
- [ ] Document current Terraform version
- [ ] Document current provider versions
- [ ] Run `terraform plan` - confirm no changes
- [ ] Commit all code changes

## Per Version Upgrade
- [ ] Install new Terraform version
- [ ] Run upgrade tool if available (0.12upgrade, 0.13upgrade)
- [ ] Run `terraform init -upgrade`
- [ ] Run `terraform plan`
- [ ] Verify: "No changes"
- [ ] Commit changes with version in message

## S3 Migration (Provider 3.x → 4.x)
- [ ] Update code to use separate resources
- [ ] Run import script for all buckets
- [ ] Run `terraform plan` - verify no changes
- [ ] Test in dev/staging first
- [ ] Commit and document

## Post-Migration
- [ ] Update CI/CD pipelines with new Terraform version
- [ ] Update documentation
- [ ] Train team on new syntax/features
- [ ] Remove old Terraform binaries

Timeline

For reference, here’s how long this took for a ~200 resource codebase:

| Phase | Duration | Notes |
|-------|----------|-------|
| 0.11 → 0.12 | 2 days | Most syntax changes |
| 0.12 → 0.13 | 4 hours | Mostly automated |
| 0.13 → 0.14 | 2 hours | Lock file setup |
| 0.14 → 0.15 | 2 hours | Deprecation fixes |
| 0.15 → 1.0 | 1 hour | Smooth |
| 1.0 → 1.5 (S3 split) | 3 days | The big one |
| 1.5 → 1.11 | 4 hours | Incremental |

Total: ~1 week of focused work


Key Takeaways

  1. Never skip versions - Follow the upgrade path
  2. Plan must show no changes - After every upgrade
  3. Backup state - Before every upgrade
  4. Pin provider versions - Upgrade Terraform and providers separately
  5. Test in non-prod first - Always
  6. The S3 split is the hard part - Budget time for it
  7. Document everything - Future you will thank present you

The Terraform ecosystem moves fast. What was bleeding edge in 0.11 is now ancient history. But if you follow this guide methodically, you’ll get there without losing any infrastructure along the way.


Questions? Find me on LinkedIn or GitHub.
