# Serverless Container Framework - Deploy Containers to Lambda and Fargate with Ease
Deploying containers to AWS typically means writing Terraform, CloudFormation, or CDK. You need to configure ECR repositories, task definitions, services, load balancers, target groups, security groups, IAM roles… the list goes on.
The Serverless Container Framework takes a different approach: define your containers in a simple YAML file, and it handles the rest. Want to run on Lambda? One line change. Switch to Fargate? Another line change. Same container, different compute.
## TL;DR
- Deploy containers to AWS Lambda or Fargate with simple YAML
- Automatic ECR, routing, health checks, and IAM configuration
- Switch compute types (Lambda ↔ Fargate) with a single config change
- Supports multiple languages: Node.js, Go, Python, etc.
- Local development with Docker Compose and LocalStack
## Why Serverless Containers?
Traditional container deployment:

```text
Write Dockerfile
  → Build image
  → Push to ECR
  → Write Terraform/CDK for:
      - ECS cluster
      - Task definition
      - Service
      - ALB + target group
      - Security groups
      - IAM roles
      - Auto-scaling
  → Deploy
  → Debug IAM issues
  → Redeploy
```
With Serverless Container Framework:

```text
Write Dockerfile
  → Define serverless.containers.yml
  → Deploy
```
The framework handles all the AWS infrastructure automatically.
## Quick Start

### Installation

```bash
npm install -g serverless
serverless --version
```
### Project Structure

```text
my-app/
├── serverless.containers.yml   # Configuration
└── service/                    # Your container
    ├── Dockerfile
    ├── package.json
    └── src/
        └── index.js
```
### Basic Configuration

```yaml
# serverless.containers.yml
name: my-app

deployment:
  type: awsApi@1.0

containers:
  service:
    src: ./service
    routing:
      pathPattern: /*
      pathHealthCheck: /health
    environment:
      NODE_ENV: production
    compute:
      type: awsLambda # or awsFargateEcs
```
### Deploy

```bash
# Set AWS credentials
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_REGION=eu-west-2

# Deploy to AWS
serverless deploy

# Or with explicit region
AWS_REGION=eu-west-2 serverless deploy

# Deploy to specific stage
serverless deploy --stage prod

# Tail container logs (use the container name from your config)
serverless logs --container service --tail

# Tear down
serverless remove
serverless remove --force
```
That’s it. The framework creates everything: ECR repository, pushes your image, configures Lambda or Fargate, sets up API Gateway, and returns your endpoint URL.
## Example: Express.js API
A simple Express app deployed as a serverless container:
### Dockerfile

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY src/ ./src/
EXPOSE 8080
CMD ["node", "src/index.js"]
```
### Application Code

```javascript
// src/index.js
const express = require("express");

const app = express();
const port = 8080;

// Middleware
app.use(express.json());
app.use((req, res, next) => {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("x-powered-by", "serverless-container-framework");
  next();
});

// Health check - required for the framework
app.get("/health", (req, res) => {
  res.status(200).send("OK");
});

// API routes
app.get("/api/info", (req, res) => {
  res.json({
    namespace: process.env.SERVERLESS_NAMESPACE,
    container: process.env.SERVERLESS_CONTAINER_NAME,
    stage: process.env.SERVERLESS_STAGE,
    compute: process.env.SERVERLESS_COMPUTE_TYPE,
  });
});

app.get("/api/users", (req, res) => {
  res.json([
    { id: 1, name: "Alice" },
    { id: 2, name: "Bob" },
  ]);
});

// Catch-all 404
app.use((req, res) => {
  res.status(404).json({ error: "Not found" });
});

app.listen(port, "0.0.0.0", () => {
  console.log(`App listening on port ${port}`);
});
```
### Configuration

```yaml
# serverless.containers.yml
name: express-api

deployment:
  type: awsApi@1.0

containers:
  service:
    src: ./service
    routing:
      pathPattern: /*
      pathHealthCheck: /health
    environment:
      NODE_ENV: production
    compute:
      type: awsLambda
```
### Deploy and Test

```bash
# Deploy
serverless deploy --stage dev

# Output:
# Deploying express-api to dev...
# Building container...
# Pushing to ECR...
# Deploying to Lambda...
#
# Endpoint: https://abc123.execute-api.eu-west-2.amazonaws.com

# Test
curl https://abc123.execute-api.eu-west-2.amazonaws.com/health
# OK

curl https://abc123.execute-api.eu-west-2.amazonaws.com/api/info
# {"namespace":"express-api","container":"service","stage":"dev","compute":"lambda"}
```
## Example: Go API
The framework works with any language. Here’s a Go API:
### Dockerfile

```dockerfile
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod ./
COPY *.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# Runtime stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]
```
### Application Code

```go
// main.go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
	"time"
)

type HealthResponse struct {
	Status    string    `json:"status"`
	Timestamp time.Time `json:"timestamp"`
	Compute   string    `json:"compute"`
}

type MessageResponse struct {
	Message   string    `json:"message"`
	Path      string    `json:"path"`
	Method    string    `json:"method"`
	Timestamp time.Time `json:"timestamp"`
}

func healthHandler(w http.ResponseWriter, r *http.Request) {
	compute := os.Getenv("COMPUTE_TYPE")
	if compute == "" {
		compute = "local"
	}
	response := HealthResponse{
		Status:    "healthy",
		Timestamp: time.Now(),
		Compute:   compute,
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(response)
}

func apiHandler(w http.ResponseWriter, r *http.Request) {
	response := MessageResponse{
		Message:   "Hello from Serverless Containers with Go!",
		Path:      r.URL.Path,
		Method:    r.Method,
		Timestamp: time.Now(),
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(response)
}

func userHandler(w http.ResponseWriter, r *http.Request) {
	type User struct {
		ID    int    `json:"id"`
		Name  string `json:"name"`
		Email string `json:"email"`
	}
	users := []User{
		{ID: 1, Name: "Mo", Email: "mo@example.com"},
		{ID: 2, Name: "Alice", Email: "alice@example.com"},
		{ID: 3, Name: "Bob", Email: "bob@example.com"},
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(users)
}

func main() {
	http.HandleFunc("/health", healthHandler)
	http.HandleFunc("/api/hello", apiHandler)
	http.HandleFunc("/api/users", userHandler)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	log.Printf("Starting server on port %s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```
### Configuration

```yaml
# serverless.containers.yml
name: go-api

deployment:
  type: awsApi@1.0

containers:
  api:
    src: ./
    routing:
      pathPattern: /*
      pathHealthCheck: /health
    environment:
      PORT: 8080
      COMPUTE_TYPE: lambda
    compute:
      type: awsLambda
```
## Switching to Fargate
Lambda has limits: 15-minute timeout, 10GB memory, cold starts. For long-running or high-memory workloads, switch to Fargate:
```yaml
# serverless.containers.yml
name: my-app

deployment:
  type: awsApi@1.0

containers:
  service:
    src: ./service
    routing:
      pathPattern: /*
      pathHealthCheck: /health
    environment:
      NODE_ENV: production
    compute:
      type: awsFargateEcs # Changed from awsLambda
```
Redeploy:

```bash
serverless deploy --stage prod
```
The framework now deploys to Fargate instead of Lambda - same container, different compute.
### When to Use Each
| Use Case | Compute | Why |
|---|---|---|
| API endpoints | Lambda | Cost-effective, scales to zero |
| Bursty traffic | Lambda | Instant scaling |
| Long-running tasks | Fargate | No 15-min timeout |
| High memory (>10GB) | Fargate | Lambda limit is 10GB |
| Consistent traffic | Fargate | No cold starts |
| WebSocket/streaming | Fargate | Long-lived connections don't fit Lambda's request/response model |
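The compute choice can also be observed at runtime: the framework injects environment variables such as `SERVERLESS_COMPUTE_TYPE` (surfaced by the `/api/info` route earlier), which lets a container adapt its behaviour. A minimal sketch, assuming the injected value is `lambda` on Lambda (as the example output above shows) and anything else means a long-lived task:

```javascript
// Sketch: choose connection-pool settings from the injected compute type.
// The exact SERVERLESS_COMPUTE_TYPE values are an assumption here
// ("lambda" per the /api/info output above).
function poolConfig(computeType = process.env.SERVERLESS_COMPUTE_TYPE) {
  if (computeType === "lambda") {
    // Lambda freezes between invocations: keep pools tiny, timeouts short.
    return { max: 1, idleTimeoutMillis: 1000 };
  }
  // Fargate tasks are long-lived: a real connection pool pays off.
  return { max: 10, idleTimeoutMillis: 30000 };
}
```

The same container image then behaves sensibly on either compute type without a rebuild.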
## Custom IAM Policies
Need DynamoDB access? S3 access? Add custom IAM policies:
```yaml
# serverless.containers.yml
name: my-app

deployment:
  type: awsApi@1.0

containers:
  service:
    src: ./service
    routing:
      pathPattern: /*
      pathHealthCheck: /health
    environment:
      TABLE_NAME: my-table
    compute:
      type: awsLambda
      awsIam:
        customPolicy:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:GetItem
                - dynamodb:PutItem
                - dynamodb:Query
                - dynamodb:Scan
              Resource:
                - "arn:aws:dynamodb:*:*:table/my-table"
            - Effect: Allow
              Action:
                - s3:GetObject
                - s3:PutObject
              Resource:
                - "arn:aws:s3:::my-bucket/*"
```
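It can be worth sanity-checking such a policy before deploying. The helper below is hypothetical (not part of the framework) and implements only a sliver of IAM semantics - Allow statements and the `*` wildcard - but it catches the common "forgot an action" mistake:

```javascript
// Hypothetical helper: does this policy allow `action` on `resource`?
// A sketch, not full IAM evaluation logic (no Deny, conditions, or NotAction).
function grants(policy, action, resource) {
  const escape = (s) => s.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  // Turn an IAM pattern like "arn:aws:dynamodb:*:*:table/my-table" into a regex.
  const toRegex = (p) => new RegExp("^" + p.split("*").map(escape).join(".*") + "$");
  return policy.Statement.some(
    (st) =>
      st.Effect === "Allow" &&
      st.Action.some((a) => toRegex(a).test(action)) &&
      st.Resource.some((r) => toRegex(r).test(resource))
  );
}

// Mirrors the customPolicy in the YAML above.
const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query", "dynamodb:Scan"],
      Resource: ["arn:aws:dynamodb:*:*:table/my-table"],
    },
  ],
};

console.log(grants(policy, "dynamodb:GetItem", "arn:aws:dynamodb:eu-west-2:123456789012:table/my-table")); // true
console.log(grants(policy, "dynamodb:DeleteItem", "arn:aws:dynamodb:eu-west-2:123456789012:table/my-table")); // false
```

A check like this in CI fails fast, instead of surfacing as an `AccessDenied` at runtime.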
## Multiple Containers
Deploy multiple containers in one configuration:
```yaml
# serverless.containers.yml
name: microservices

deployment:
  type: awsApi@1.0

containers:
  api:
    src: ./api
    routing:
      pathPattern: /api/*
      pathHealthCheck: /api/health
    environment:
      SERVICE: api
    compute:
      type: awsLambda

  auth:
    src: ./auth
    routing:
      pathPattern: /auth/*
      pathHealthCheck: /auth/health
    environment:
      SERVICE: auth
    compute:
      type: awsLambda

  admin:
    src: ./admin
    routing:
      pathPattern: /admin/*
      pathHealthCheck: /admin/health
    environment:
      SERVICE: admin
    compute:
      type: awsFargateEcs # Admin on Fargate
```
## Local Development

### Docker Compose
Test locally before deploying:
```yaml
# docker-compose.yml
version: '3.8'
services:
  api:
    build: ./service
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=development
      - SERVERLESS_NAMESPACE=my-app
      - SERVERLESS_CONTAINER_NAME=service
      - SERVERLESS_STAGE=local
      - SERVERLESS_COMPUTE_TYPE=docker
      - SERVERLESS_LOCAL=true
```
```bash
docker-compose up
curl http://localhost:8080/health
```
### LocalStack Integration
Test with LocalStack for AWS service mocking:
```yaml
# docker-compose.localstack.yml
version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=dynamodb,s3,sqs
      - DEBUG=1

  api:
    build: ./service
    ports:
      - "8080:8080"
    environment:
      - AWS_ENDPOINT=http://localstack:4566
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - AWS_REGION=us-east-1
    depends_on:
      - localstack
```
```yaml
# serverless.yml (for LocalStack - classic Serverless Framework
# syntax, used with the serverless-localstack plugin)
service: my-app

provider:
  name: aws
  region: eu-west-2
  stage: ${opt:stage, 'dev'}

plugins:
  - serverless-localstack

custom:
  localstack:
    stages:
      - local
    host: http://localhost
    edgePort: 4566
```
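Application code can then point the AWS SDK at LocalStack via the `AWS_ENDPOINT` variable set in the compose file. A sketch of the config-building logic; the returned shape follows AWS SDK for JavaScript v3 client options, and the fallback values are assumptions:

```javascript
// Build AWS SDK v3-style client options from the environment.
// When AWS_ENDPOINT is set (LocalStack), point the client there;
// otherwise fall back to the default AWS endpoints.
function awsClientConfig(env = process.env) {
  const config = { region: env.AWS_REGION || "us-east-1" };
  if (env.AWS_ENDPOINT) {
    config.endpoint = env.AWS_ENDPOINT;
    // LocalStack accepts any static credentials.
    config.credentials = {
      accessKeyId: env.AWS_ACCESS_KEY_ID || "test",
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY || "test",
    };
  }
  return config;
}

// Usage (hypothetical): new DynamoDBClient(awsClientConfig())
```

Because the endpoint only switches when `AWS_ENDPOINT` is present, the same image runs unmodified against LocalStack locally and real AWS in production.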
## GraphQL Example
Deploy a GraphQL API:
```yaml
# serverless.containers.yml
name: graphql-api

deployment:
  type: awsApi@1.0

containers:
  service:
    src: ./service
    routing:
      pathPattern: /*
      pathHealthCheck: /health
    environment:
      NODE_ENV: production
    compute:
      type: awsLambda
```
```javascript
// service/src/index.js
const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

const schema = buildSchema(`
  type Query {
    hello: String
    users: [User]
  }

  type User {
    id: Int
    name: String
    email: String
  }
`);

const root = {
  hello: () => 'Hello from Serverless GraphQL!',
  users: () => [
    { id: 1, name: 'Alice', email: 'alice@example.com' },
    { id: 2, name: 'Bob', email: 'bob@example.com' },
  ],
};

const app = express();
app.get('/health', (req, res) => res.send('OK'));
app.use('/graphql', graphqlHTTP({
  schema,
  rootValue: root,
  graphiql: true,
}));

app.listen(8080, () => console.log('GraphQL server running'));
```
## Production Configuration

### Environment-Specific Configs
```yaml
# serverless.containers.yml (development)
name: my-app

deployment:
  type: awsApi@1.0

containers:
  service:
    src: ./service
    routing:
      pathPattern: /*
      pathHealthCheck: /health
    environment:
      NODE_ENV: development
      LOG_LEVEL: debug
    compute:
      type: awsLambda
```
```yaml
# serverless.containers.prod.yml (production)
name: my-app

deployment:
  type: awsApi@1.0

containers:
  service:
    src: ./service
    routing:
      pathPattern: /*
      pathHealthCheck: /health
    environment:
      NODE_ENV: production
      LOG_LEVEL: warn
    compute:
      type: awsFargateEcs # Fargate for production
```
```bash
# Deploy to different stages
serverless deploy --stage dev
serverless deploy --stage prod --config serverless.containers.prod.yml
```
### CI/CD Integration
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

permissions:
  id-token: write # required for OIDC role assumption
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: eu-west-2

      - name: Install Serverless
        run: npm install -g serverless

      - name: Deploy
        run: serverless deploy --stage prod
```
## Best Practices

### 1. Always Include Health Checks
```yaml
routing:
  pathPattern: /*
  pathHealthCheck: /health # Required for load balancer health checks
```
```javascript
app.get('/health', (req, res) => {
  // Check dependencies if needed
  res.status(200).send('OK');
});
```
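A health check can go further and verify dependencies, so an unhealthy container is pulled from rotation instead of serving errors. A sketch with injectable checkers (`checkDb` in the commented wiring is hypothetical - substitute a real ping against your datastore):

```javascript
// Sketch: run a map of named checks and degrade to 503 if any fails.
async function healthStatus(checks) {
  const results = await Promise.all(
    Object.entries(checks).map(async ([name, check]) => {
      try {
        await check();
        return [name, "ok"];
      } catch {
        return [name, "failed"];
      }
    })
  );
  const healthy = results.every(([, status]) => status === "ok");
  return { code: healthy ? 200 : 503, body: Object.fromEntries(results) };
}

// Express wiring (checkDb is hypothetical):
// app.get("/health", async (req, res) => {
//   const { code, body } = await healthStatus({ db: checkDb });
//   res.status(code).json(body);
// });
```

Keep the checks cheap: load balancers and Lambda both poll this path frequently.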
### 2. Use Multi-Stage Dockerfiles
```dockerfile
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --omit=dev # drop dev dependencies before the copy below

# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 8080
CMD ["node", "dist/index.js"]
```
### 3. Keep Images Small
```dockerfile
# Use alpine base images
FROM node:20-alpine

# Only install production dependencies
RUN npm ci --only=production

# Don't include dev files
COPY src/ ./src/
```
### 4. Handle Graceful Shutdown
```javascript
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(() => {
    console.log('Server closed');
    process.exit(0);
  });
});
```
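In production it also helps to force an exit if connections never drain; otherwise the orchestrator waits out its full stop timeout. An extended sketch (the `server` object is assumed to come from `app.listen`, and `exit`/`log` are injectable for testing):

```javascript
// Sketch: graceful shutdown with a forced-exit deadline.
function shutdown(server, { timeoutMs = 10000, exit = process.exit, log = console.log } = {}) {
  log("SIGTERM received, shutting down gracefully");
  // Force-exit if in-flight connections do not drain in time.
  const timer = setTimeout(() => {
    log("Forcing exit: connections did not drain in time");
    exit(1);
  }, timeoutMs);
  timer.unref(); // the timer alone should not keep the process alive
  server.close(() => {
    clearTimeout(timer);
    log("Server closed");
    exit(0);
  });
}

// Wiring (server is assumed to come from app.listen):
// const server = app.listen(8080);
// process.on("SIGTERM", () => shutdown(server));
```

The deadline should be shorter than the platform's stop grace period (ECS `stopTimeout`, for example) so the clean path always wins.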
## Comparison with Alternatives
| Feature | SCF | AWS CDK | Terraform | Serverless Framework |
|---|---|---|---|---|
| Container support | Native | Yes | Yes | Plugin |
| Config complexity | Low | High | Medium | Medium |
| Lambda + Fargate | Yes | Yes | Yes | Plugin |
| Learning curve | Low | High | Medium | Low |
| Flexibility | Medium | High | High | Medium |
Choose Serverless Container Framework when:
- You want simple container deployment
- You need to switch between Lambda and Fargate easily
- You don’t want to write infrastructure code
- You’re building APIs or microservices
## Conclusion
The Serverless Container Framework removes the infrastructure complexity from container deployment. Define your containers in YAML, deploy, and get an endpoint. Switch between Lambda and Fargate with a config change.
It won’t cover every use case - complex architectures still need Terraform or CDK. But for straightforward container deployments, it’s hard to beat the simplicity.
Give it a try: github.com/moabukar/serverless-container-framework