
LocalStack Deep Dive - AWS on Your Laptop

Tags: AWS, DevOps


Developing against AWS is expensive. Not just in cloud costs, but in feedback time: deploy a Lambda, wait, test, fail, redeploy, wait again.

LocalStack emulates AWS services locally. S3, Lambda, DynamoDB, SQS - running on your laptop. Changes take seconds, not minutes. Tests run without hitting real AWS. No credentials needed.

This is how fast cloud development should feel.

TL;DR

Code Repository: All code from this post is available at github.com/moabukar/blog-code/localstack-deep-dive

  • LocalStack emulates 80+ AWS services locally
  • Free tier covers most common services
  • Perfect for development and integration testing
  • Works with standard AWS SDKs and CLI
  • Docker-based, runs anywhere


Getting Started

Installation

# Using pip
pip install localstack

# Start LocalStack
localstack start

# Or use Docker directly
docker run -d \
  --name localstack \
  -p 4566:4566 \
  -p 4510-4559:4510-4559 \
  -e DEBUG=1 \
  localstack/localstack

# docker-compose.yml

services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"            # LocalStack Gateway
      - "4510-4559:4510-4559"  # External services port range
    environment:
      - DEBUG=1
      - DOCKER_HOST=unix:///var/run/docker.sock
      - PERSISTENCE=1          # Persist data between restarts
    volumes:
      - "./localstack-data:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

docker-compose up -d

Configure AWS CLI

# Create a LocalStack profile
aws configure --profile localstack
# AWS Access Key ID: test
# AWS Secret Access Key: test
# Default region: us-east-1
# Default output format: json

# Or use environment variables
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1
export AWS_ENDPOINT_URL=http://localhost:4566

S3: Object Storage

# Create bucket
aws --endpoint-url=http://localhost:4566 s3 mb s3://my-bucket

# Upload file
aws --endpoint-url=http://localhost:4566 s3 cp myfile.txt s3://my-bucket/

# List objects
aws --endpoint-url=http://localhost:4566 s3 ls s3://my-bucket/

# Generate presigned URL
aws --endpoint-url=http://localhost:4566 s3 presign s3://my-bucket/myfile.txt

Python SDK

import boto3

# Create S3 client pointing to LocalStack
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:4566',
    aws_access_key_id='test',
    aws_secret_access_key='test',
    region_name='us-east-1'
)

# Create bucket
s3.create_bucket(Bucket='my-bucket')

# Upload file
s3.upload_file('local_file.txt', 'my-bucket', 'remote_key.txt')

# Download file
s3.download_file('my-bucket', 'remote_key.txt', 'downloaded.txt')

Lambda: Serverless Functions

Deploy a Lambda

# Create function code
cat > handler.py << 'EOF'
def lambda_handler(event, context):
    name = event.get('name', 'World')
    return {
        'statusCode': 200,
        'body': f'Hello, {name}!'
    }
EOF

# Zip it
zip function.zip handler.py

# Create Lambda function
aws --endpoint-url=http://localhost:4566 lambda create-function \
  --function-name hello-function \
  --runtime python3.9 \
  --handler handler.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::000000000000:role/lambda-role

# Invoke it (AWS CLI v2 needs --cli-binary-format for a raw JSON payload)
aws --endpoint-url=http://localhost:4566 lambda invoke \
  --function-name hello-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{"name": "LocalStack"}' \
  output.txt

cat output.txt
# {"statusCode": 200, "body": "Hello, LocalStack!"}
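Because the handler is plain Python, it's also worth exercising it directly before zipping and deploying; no LocalStack needed for this level of test:

```python
# Same handler as handler.py above
def lambda_handler(event, context):
    name = event.get('name', 'World')
    return {
        'statusCode': 200,
        'body': f'Hello, {name}!'
    }

# Call it directly with a sample event; context is unused here
result = lambda_handler({'name': 'LocalStack'}, None)
assert result == {'statusCode': 200, 'body': 'Hello, LocalStack!'}
```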

Lambda with S3 Trigger

# Create S3 bucket notification
aws --endpoint-url=http://localhost:4566 s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:hello-function",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'

# Now uploading to S3 triggers the Lambda
aws --endpoint-url=http://localhost:4566 s3 cp test.txt s3://my-bucket/

DynamoDB: NoSQL Database

# Create table
aws --endpoint-url=http://localhost:4566 dynamodb create-table \
  --table-name Users \
  --attribute-definitions AttributeName=userId,AttributeType=S \
  --key-schema AttributeName=userId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Insert item
aws --endpoint-url=http://localhost:4566 dynamodb put-item \
  --table-name Users \
  --item '{"userId": {"S": "123"}, "name": {"S": "Alice"}}'

# Query
aws --endpoint-url=http://localhost:4566 dynamodb get-item \
  --table-name Users \
  --key '{"userId": {"S": "123"}}'

Python with boto3

import boto3

dynamodb = boto3.resource(
    'dynamodb',
    endpoint_url='http://localhost:4566',
    aws_access_key_id='test',
    aws_secret_access_key='test',
    region_name='us-east-1'
)

# Create table
table = dynamodb.create_table(
    TableName='Users',
    KeySchema=[{'AttributeName': 'userId', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'userId', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST'
)
table.wait_until_exists()

# Insert
table.put_item(Item={'userId': '123', 'name': 'Alice', 'email': 'alice@example.com'})

# Query
response = table.get_item(Key={'userId': '123'})
print(response['Item'])

SQS: Message Queues

# Create queue
aws --endpoint-url=http://localhost:4566 sqs create-queue \
  --queue-name my-queue

# Send message
aws --endpoint-url=http://localhost:4566 sqs send-message \
  --queue-url http://localhost:4566/000000000000/my-queue \
  --message-body "Hello from LocalStack"

# Receive message
aws --endpoint-url=http://localhost:4566 sqs receive-message \
  --queue-url http://localhost:4566/000000000000/my-queue

SQS + Lambda Integration

# Create event source mapping
aws --endpoint-url=http://localhost:4566 lambda create-event-source-mapping \
  --function-name hello-function \
  --event-source-arn arn:aws:sqs:us-east-1:000000000000:my-queue \
  --batch-size 10

# Messages sent to SQS now trigger the Lambda

SNS: Pub/Sub Messaging

# Create topic
aws --endpoint-url=http://localhost:4566 sns create-topic \
  --name my-topic

# Subscribe SQS queue to topic
aws --endpoint-url=http://localhost:4566 sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:000000000000:my-topic \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:000000000000:my-queue

# Publish message
aws --endpoint-url=http://localhost:4566 sns publish \
  --topic-arn arn:aws:sns:us-east-1:000000000000:my-topic \
  --message "Broadcast message"
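One gotcha worth knowing: unless you enable RawMessageDelivery on the subscription, the queue receives the SNS message wrapped in a JSON notification envelope, not the raw body. A minimal sketch of unwrapping it (the envelope here is trimmed to the relevant fields):

```python
import json

# Trimmed-down example of the body SQS receives from an SNS subscription
sqs_body = json.dumps({
    'Type': 'Notification',
    'TopicArn': 'arn:aws:sns:us-east-1:000000000000:my-topic',
    'Message': 'Broadcast message',
})

envelope = json.loads(sqs_body)
# The original published message lives in the 'Message' field
payload = envelope['Message'] if envelope.get('Type') == 'Notification' else sqs_body
```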

Secrets Manager

# Create secret
aws --endpoint-url=http://localhost:4566 secretsmanager create-secret \
  --name my-secret \
  --secret-string '{"username":"admin","password":"secret123"}'

# Retrieve secret
aws --endpoint-url=http://localhost:4566 secretsmanager get-secret-value \
  --secret-id my-secret
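The SecretString field in the GetSecretValue response comes back as a JSON-encoded string, so consumers typically json.loads it. A sketch against an illustrative response shape (the dict below stands in for the real API response):

```python
import json

# Illustrative shape of a GetSecretValue response; SecretString is a JSON string
response = {
    'Name': 'my-secret',
    'SecretString': '{"username":"admin","password":"secret123"}',
}

secret = json.loads(response['SecretString'])
```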

Terraform with LocalStack

# providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3             = "http://localhost:4566"
    dynamodb       = "http://localhost:4566"
    lambda         = "http://localhost:4566"
    sqs            = "http://localhost:4566"
    sns            = "http://localhost:4566"
    secretsmanager = "http://localhost:4566"
    iam            = "http://localhost:4566"
  }
}

# main.tf
resource "aws_s3_bucket" "app_bucket" {
  bucket = "my-app-bucket"
}

resource "aws_dynamodb_table" "app_table" {
  name         = "AppData"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

resource "aws_sqs_queue" "app_queue" {
  name = "app-processing-queue"
}

# Apply against LocalStack
terraform init
terraform apply

Integration Testing Pattern

pytest with LocalStack

# conftest.py
import pytest
import boto3
import os

@pytest.fixture(scope='session')
def localstack_endpoint():
    return os.getenv('AWS_ENDPOINT_URL', 'http://localhost:4566')

@pytest.fixture(scope='session')
def s3_client(localstack_endpoint):
    return boto3.client(
        's3',
        endpoint_url=localstack_endpoint,
        aws_access_key_id='test',
        aws_secret_access_key='test',
        region_name='us-east-1'
    )

@pytest.fixture(scope='function')
def test_bucket(s3_client):
    bucket_name = 'test-bucket'
    s3_client.create_bucket(Bucket=bucket_name)
    yield bucket_name
    # Cleanup
    objects = s3_client.list_objects_v2(Bucket=bucket_name).get('Contents', [])
    for obj in objects:
        s3_client.delete_object(Bucket=bucket_name, Key=obj['Key'])
    s3_client.delete_bucket(Bucket=bucket_name)

# test_s3_operations.py
def test_upload_and_download(s3_client, test_bucket):
    # Upload
    s3_client.put_object(
        Bucket=test_bucket,
        Key='test-file.txt',
        Body=b'Hello, LocalStack!'
    )
    
    # Download
    response = s3_client.get_object(Bucket=test_bucket, Key='test-file.txt')
    content = response['Body'].read().decode('utf-8')
    
    assert content == 'Hello, LocalStack!'

def test_list_objects(s3_client, test_bucket):
    # Create multiple objects
    for i in range(5):
        s3_client.put_object(
            Bucket=test_bucket,
            Key=f'file-{i}.txt',
            Body=f'Content {i}'.encode()
        )
    
    # List
    response = s3_client.list_objects_v2(Bucket=test_bucket)
    
    assert len(response['Contents']) == 5

CI/CD Integration

# .github/workflows/test.yml
name: Integration Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      localstack:
        image: localstack/localstack:latest
        ports:
          - 4566:4566
        env:
          SERVICES: s3,dynamodb,lambda,sqs
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      
      - name: Install dependencies
        run: pip install -r requirements.txt pytest boto3
      
      - name: Wait for LocalStack
        run: |
          pip install awscli-local
          for i in {1..30}; do
            if awslocal s3 ls 2>/dev/null; then
              echo "LocalStack is ready"
              break
            fi
            echo "Waiting for LocalStack..."
            sleep 2
          done
      
      - name: Run tests
        env:
          AWS_ENDPOINT_URL: http://localhost:4566
          AWS_ACCESS_KEY_ID: test
          AWS_SECRET_ACCESS_KEY: test
          AWS_DEFAULT_REGION: us-east-1
        run: pytest tests/ -v

awslocal CLI Wrapper

# Install
pip install awscli-local

# Use without --endpoint-url
awslocal s3 mb s3://my-bucket
awslocal dynamodb list-tables
awslocal lambda list-functions

# It automatically adds the LocalStack endpoint

Pro Tips

1. Use Initialization Scripts

# docker-compose.yml
services:
  localstack:
    image: localstack/localstack:latest
    volumes:
      - "./init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh"

# init-aws.sh
#!/bin/bash
awslocal s3 mb s3://app-bucket
awslocal dynamodb create-table \
  --table-name Users \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
awslocal sqs create-queue --queue-name app-queue
echo "LocalStack initialized!"

2. Enable Persistence

environment:
  - PERSISTENCE=1
volumes:
  - "./localstack-data:/var/lib/localstack"

3. Debug Lambda Execution

environment:
  - DEBUG=1
  # Note: LAMBDA_EXECUTOR / LAMBDA_REMOTE_DOCKER only apply to LocalStack 1.x;
  # from 2.0 onwards Lambdas always run in separate Docker containers
  - LAMBDA_EXECUTOR=docker
  - LAMBDA_REMOTE_DOCKER=0

4. Check Service Status

# Health check
curl http://localhost:4566/_localstack/health

# Service status
curl http://localhost:4566/_localstack/info
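In scripts and CI it's handy to poll that health endpoint rather than sleep blindly. A minimal stdlib-only sketch; the helper name, default URL, and timeout are just illustrative choices:

```python
import json
import time
import urllib.error
import urllib.request

def wait_for_localstack(url='http://localhost:4566/_localstack/health', timeout=30):
    """Poll the LocalStack health endpoint until it answers, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                # Response is JSON like {"services": {"s3": "running", ...}}
                return json.loads(resp.read())
        except (urllib.error.URLError, OSError):
            time.sleep(1)
    return None  # never became healthy within the timeout
```

Returning None instead of raising keeps the caller in control of whether a missing LocalStack is fatal.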

Supported Services (Free Tier)

Service               Coverage
S3                    Full
DynamoDB              Full
Lambda                Full
SQS                   Full
SNS                   Full
CloudWatch Logs       Full
IAM                   Basic
Secrets Manager       Full
SSM Parameter Store   Full
CloudFormation        Most resources
API Gateway           Full
Kinesis               Full
Step Functions        Full

Pro tier adds: RDS, ECS, EKS, ElastiCache, and more.


When NOT to Use LocalStack

  • Performance testing - Local != cloud performance
  • IAM policy testing - IAM simulation is limited
  • Network testing - VPCs, Transit Gateway, etc.
  • Managed service features - RDS failover, Aurora Serverless v2
  • Final pre-production testing - Always test against real AWS

Quick Reference

# Start
docker-compose up -d

# AWS CLI with endpoint
aws --endpoint-url=http://localhost:4566 s3 ls

# Or use awslocal
awslocal s3 ls

# Check health
curl localhost:4566/_localstack/health

# View logs
docker-compose logs -f localstack

# Reset (delete all data)
docker-compose down -v
docker-compose up -d

Conclusion

LocalStack transforms AWS development:

  1. Faster feedback - Seconds instead of minutes
  2. No cloud costs - Run everything locally
  3. Offline development - Work on planes, trains, anywhere
  4. Better testing - Integration tests without mock complexity
  5. Team consistency - Everyone runs the same environment

Your AWS bill will thank you. Your iteration speed will skyrocket.

