
Tailscale in Production: WireGuard Mesh for Hybrid Cloud

Networking · Security

Tailscale builds a WireGuard mesh network that just works. No port forwarding, no firewall rules, no certificates to manage. Every device gets a stable IP and can reach every other device.

This guide covers production deployment patterns for Kubernetes, multi-cloud, and hybrid environments.

TL;DR

  • Tailscale = WireGuard mesh with identity-based access
  • Works through NAT/firewalls without port forwarding
  • SSO integration (Okta, Google, Azure AD)
  • ACLs for fine-grained access control
  • Kubernetes operator for service exposure

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                     Tailscale Coordination                       │
│                    (control plane, not data)                     │
└─────────────────────────────────────────────────────────────────┘

          ┌────────────────────┼────────────────────┐
          │                    │                    │
          ▼                    ▼                    ▼
   ┌────────────┐       ┌────────────┐       ┌────────────┐
   │   AWS VPC  │◄─────►│   Office   │◄─────►│    GCP     │
   │ 100.64.0.x │       │ 100.64.1.x │       │ 100.64.2.x │
   └────────────┘       └────────────┘       └────────────┘
          │                    │                    │
     WireGuard            WireGuard            WireGuard
     (direct P2P)         (direct P2P)         (direct P2P)

All traffic flows directly between nodes; the coordination server only handles key exchange and peer discovery. When NAT prevents a direct path, traffic falls back through Tailscale's DERP relays.
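The 100.64.x.y addresses in the diagram come from the CGNAT range 100.64.0.0/10, from which Tailscale assigns every node a stable IP. A quick pure-shell sketch (the helper name is made up) for checking whether an address belongs to that range:

```shell
# True (exit 0) if the IPv4 address falls inside 100.64.0.0/10,
# the CGNAT block Tailscale draws node addresses from.
in_tailnet_range() {
  local o1 o2
  IFS=. read -r o1 o2 _ _ <<< "$1"
  # /10 mask: first octet must be 100, second octet between 64 and 127
  [ "$o1" -eq 100 ] && [ "$o2" -ge 64 ] && [ "$o2" -le 127 ]
}

in_tailnet_range 100.64.1.7 && echo "tailnet address"
```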

Install Tailscale

Linux Server

# Install
curl -fsSL https://tailscale.com/install.sh | sh

# Authenticate
sudo tailscale up --authkey=tskey-xxx --hostname=prod-server-1

# Verify
tailscale status
tailscale ip -4

Kubernetes Operator

# Add Helm repo
helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update

# Install operator
helm upgrade --install tailscale-operator tailscale/tailscale-operator \
  --namespace tailscale --create-namespace \
  --set oauth.clientId="xxx" \
  --set oauth.clientSecret="xxx"

Subnet Router

Expose entire subnets to the Tailscale network:

# Enable IP forwarding
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Advertise subnets
sudo tailscale up \
  --authkey=tskey-xxx \
  --hostname=aws-subnet-router \
  --advertise-routes=10.0.0.0/16,10.1.0.0/16 \
  --accept-routes

Approve in admin console or via API:

# Using the Tailscale API (routes are enabled per device)
curl -X POST "https://api.tailscale.com/api/v2/device/$DEVICE_ID/routes" \
  -H "Authorization: Bearer $TAILSCALE_API_KEY" \
  -d '{"routes": ["10.0.0.0/16", "10.1.0.0/16"]}'
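Route enablement is per device, so you need the device's ID first. A sketch using the devices API and jq (function names are illustrative; the hostname matches the router above):

```shell
# Fetch all devices in the tailnet (needs TAILSCALE_API_KEY set)
list_devices() {
  curl -s "https://api.tailscale.com/api/v2/tailnet/-/devices" \
    -H "Authorization: Bearer $TAILSCALE_API_KEY"
}

# device_id <hostname> : extract the id for one hostname from the response
device_id() {
  jq -r --arg h "$1" '.devices[] | select(.hostname == $h) | .id'
}

# Usage: DEVICE_ID=$(list_devices | device_id aws-subnet-router)
```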

Kubernetes Integration

Expose Services via Tailscale

# Expose a service to Tailscale network
apiVersion: v1
kind: Service
metadata:
  name: internal-api
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/hostname: "internal-api"
spec:
  selector:
    app: api-server
  ports:
    - port: 8080

The service becomes available at internal-api.tailnet-xxx.ts.net.

Ingress via Tailscale

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private-ingress
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/hostname: "grafana"
spec:
  ingressClassName: tailscale
  rules:
    - host: grafana
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 3000

Sidecar Pattern

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  template:
    spec:
      containers:
        - name: app
          image: api-server:latest
          ports:
            - containerPort: 8080
        
        - name: tailscale
          image: tailscale/tailscale:latest
          env:
            - name: TS_AUTHKEY
              valueFrom:
                secretKeyRef:
                  name: tailscale-auth
                  key: authkey
            - name: TS_HOSTNAME
              value: "api-server"
            - name: TS_KUBE_SECRET
              value: "tailscale-state"
            - name: TS_USERSPACE
              value: "true"
          # Userspace networking needs no extra capabilities; set
          # TS_USERSPACE to "false" and add the NET_ADMIN capability
          # for kernel-mode networking instead.
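The sidecar above expects a Secret named tailscale-auth holding the auth key, and permission to persist its state into the tailscale-state Secret. A sketch of the supporting objects (the exact RBAC verbs are an assumption; bind the Role to the pod's service account with a matching RoleBinding):

```yaml
# Secret read via TS_AUTHKEY
apiVersion: v1
kind: Secret
metadata:
  name: tailscale-auth
stringData:
  authkey: tskey-xxx
---
# Let the sidecar create and update its state Secret (TS_KUBE_SECRET)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tailscale-state
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "update", "patch"]
```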

ACL Configuration

Tailscale ACLs control who can access what:

{
  "acls": [
    // Admins can access everything
    {
      "action": "accept",
      "src": ["group:admin"],
      "dst": ["*:*"]
    },
    
    // Developers can access dev/staging
    {
      "action": "accept",
      "src": ["group:developers"],
      "dst": [
        "tag:dev:*",
        "tag:staging:*"
      ]
    },
    
    // Production access is limited
    {
      "action": "accept",
      "src": ["group:sre"],
      "dst": ["tag:production:*"]
    },
    
    // Everyone can access monitoring
    {
      "action": "accept",
      "src": ["*"],
      "dst": [
        "grafana:3000",
        "prometheus:9090"
      ]
    }
  ],
  
  "tagOwners": {
    "tag:dev": ["group:developers"],
    "tag:staging": ["group:developers"],
    "tag:production": ["group:sre"]
  },
  
  "groups": {
    "group:admin": ["admin@company.com"],
    "group:developers": ["dev-team@company.com"],
    "group:sre": ["sre-team@company.com"]
  },
  
  "autoApprovers": {
    "routes": {
      "10.0.0.0/16": ["tag:subnet-router"],
      "10.1.0.0/16": ["tag:subnet-router"]
    }
  }
}
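Policy files are HuJSON (comments and trailing commas allowed). Before uploading changes, a cheap local sanity check on a strict-JSON copy (comments stripped) is to verify that every tag referenced in autoApprovers is also declared in tagOwners; a sketch with jq:

```shell
# check_auto_approver_tags <policy.json>
# policy.json is a strict-JSON copy of the policy (comments removed).
# Exits 0 only if every tag used in autoApprovers.routes is declared
# in tagOwners.
check_auto_approver_tags() {
  jq -e '
    (.tagOwners | keys) as $owned
    | [.autoApprovers.routes[][] | select(startswith("tag:"))]
    | all(IN($owned[]))
  ' "$1"
}

# Usage: check_auto_approver_tags policy.json
```

The admin API also exposes an ACL validate endpoint for a full server-side check before the policy goes live.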

Exit Nodes

Route all traffic through a specific node:

# On the exit node
sudo tailscale up \
  --authkey=tskey-xxx \
  --hostname=exit-eu-west \
  --advertise-exit-node

# On clients that want to use it
# (the exit node must first be approved in the admin console)
sudo tailscale up --exit-node=exit-eu-west

Multi-Cloud Connectivity

┌─────────────────────────────────────────────────────────────────┐
│                         Tailscale Mesh                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│  │     AWS      │    │     GCP      │    │    Azure     │       │
│  │  10.0.0.0/16 │    │  10.1.0.0/16 │    │  10.2.0.0/16 │       │
│  │              │    │              │    │              │       │
│  │ ┌──────────┐ │    │ ┌──────────┐ │    │ ┌──────────┐ │       │
│  │ │ Subnet   │ │    │ │ Subnet   │ │    │ │ Subnet   │ │       │
│  │ │ Router   │ │    │ │ Router   │ │    │ │ Router   │ │       │
│  │ └──────────┘ │    │ └──────────┘ │    │ └──────────┘ │       │
│  └──────────────┘    └──────────────┘    └──────────────┘       │
│         │                   │                   │                │
│         └───────────────────┴───────────────────┘                │
│                    All subnets routable                          │
└─────────────────────────────────────────────────────────────────┘

Terraform Configuration

# AWS subnet router
resource "aws_instance" "tailscale_router" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
  
  subnet_id              = aws_subnet.private.id
  source_dest_check      = false  # Required for routing
  
  user_data = <<-EOF
    #!/bin/bash
    curl -fsSL https://tailscale.com/install.sh | sh
    echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
    sysctl -p
    tailscale up \
      --authkey=${var.tailscale_authkey} \
      --hostname=aws-router \
      --advertise-routes=${var.vpc_cidr} \
      --accept-routes
  EOF
  
  tags = {
    Name = "tailscale-subnet-router"
  }
}

# GCP subnet router
resource "google_compute_instance" "tailscale_router" {
  name         = "tailscale-router"
  machine_type = "e2-micro"
  zone         = "europe-west2-a"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2204-lts"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.private.id
    access_config {}
  }

  can_ip_forward = true  # Required for routing

  metadata_startup_script = <<-EOF
    #!/bin/bash
    curl -fsSL https://tailscale.com/install.sh | sh
    echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
    sysctl -p
    tailscale up \
      --authkey=${var.tailscale_authkey} \
      --hostname=gcp-router \
      --advertise-routes=${var.gcp_cidr} \
      --accept-routes
  EOF
}
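Both resources reference input variables that the snippets above assume are declared elsewhere; a matching variables block might look like this (the CIDR defaults are illustrative):

```hcl
variable "tailscale_authkey" {
  type      = string
  sensitive = true  # keep the auth key out of plans and logs
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "gcp_cidr" {
  type    = string
  default = "10.1.0.0/16"
}
```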

SSH via Tailscale

Tailscale SSH provides identity-aware SSH access:

# Enable Tailscale SSH on a node
# (tailscale set changes one setting without resetting other flags)
sudo tailscale set --ssh

# Connect (no keys needed!)
ssh user@hostname

# Or using Tailscale directly
tailscale ssh hostname

ACL for SSH:

{
  "ssh": [
    {
      "action": "accept",
      "src": ["group:sre"],
      "dst": ["tag:production"],
      "users": ["root", "ubuntu"]
    },
    {
      "action": "accept",
      "src": ["group:developers"],
      "dst": ["tag:dev"],
      "users": ["autogroup:nonroot"]
    }
  ]
}

Monitoring

# Check connectivity
tailscale ping hostname

# Debug connection
tailscale netcheck

# Status
tailscale status --json | jq

# Client metrics in Prometheus format (recent clients)
tailscale metrics print
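The JSON status output is handy for spotting peers stuck on a DERP relay instead of a direct path; a sketch (field names as emitted by recent clients, where CurAddr is empty for relayed peers):

```shell
# Print peers currently relayed via DERP rather than connected directly
relayed_peers() {
  jq -r '.Peer[] | select(.CurAddr == "") | "\(.HostName) via relay \(.Relay)"'
}

# Usage: tailscale status --json | relayed_peers
```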

Troubleshooting

Can’t connect to subnet:

# Check routes are advertised
tailscale status

# Check routes are approved
tailscale status --json | jq '.Self.AllowedIPs'

# Check IP forwarding
sysctl net.ipv4.ip_forward

Slow performance:

# Check if using relay (DERP)
tailscale netcheck

# Force direct connection
tailscale ping --until-direct hostname


Mesh networking that just works.
